Non-invasive measurement of knee implant loosening is important as a diagnostic tool for patients with recurrent complaints after total knee arthroplasty (TKA). Displacement between the tibial implant and bone is currently estimated using a loading device, CT imaging and an advanced 3D image analysis workflow. However, user interaction is required at each step of this workflow, especially in the segmentation of implant and bone, which increases the complexity of the task and affects its reproducibility. A deep learning-based segmentation model can alleviate this workload by increasing automation and reducing the variability of manual segmentation. In this work, we propose a segmentation algorithm for the tibial implant and tibial bone cortex. The automatically obtained segmentations are then introduced into the displacement calculation workflow, and four displacement measurements are calculated: mean target registration error (mTRE), maximum total point motion (MTPM), and the magnitudes of translation and rotation. Results show that the parameter distributions are similar to those of the manual approach, with intra-class correlation values ranging from 0.96 to 0.99 for the different displacement measurements. Moreover, the methodological error has a smaller or comparable distribution, demonstrating the feasibility of increasing automation in knee implant displacement assessment.
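The mTRE and MTPM measurements named above can be sketched as simple point-cloud statistics: given corresponding surface points in the unloaded and loaded implant poses, mTRE is the mean displacement magnitude and MTPM the maximum. This is a minimal illustrative sketch; the point data, function name and units are assumptions, not taken from the paper's workflow.

```python
import numpy as np

def displacement_stats(points_unloaded, points_loaded):
    """Return (mTRE, MTPM): mean and maximum point displacement in mm."""
    d = np.linalg.norm(points_loaded - points_unloaded, axis=1)
    return d.mean(), d.max()

# Toy example: three implant surface points, one of which moves 1 mm along x.
p0 = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
p1 = p0 + np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
mtre, mtpm = displacement_stats(p0, p1)
```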
KEYWORDS: Magnetic resonance imaging, Data acquisition, Brain, Visualization, Neuroimaging, Image restoration, Medical image reconstruction, Inverse problems in medical imaging
Despite its extensive adoption in almost every medical diagnostic and examination application, Magnetic Resonance Imaging (MRI) is still a slow imaging modality, which limits its use for dynamic imaging. In recent years, Parallel Imaging (PI) and Compressed Sensing (CS) have been utilised to accelerate MRI acquisition. In clinical settings, subsampling the k-space measurements during scanning using Cartesian trajectories, such as rectilinear sampling, is currently the most common CS approach, which is, however, prone to producing aliased reconstructions. With the advent of Deep Learning (DL) in accelerated MRI, reconstructing faithful images from subsampled data has become increasingly promising. Retrospectively applying a subsampling mask to the k-space data is a way of simulating the accelerated acquisition of k-space data in a real clinical setting. In this paper we compare and review the effect of applying either rectilinear or radial retrospective subsampling on the quality of the reconstructions produced by trained deep neural networks. With the same choice of hyper-parameters, we train and evaluate two distinct Recurrent Inference Machines (RIMs), one for each type of subsampling. The qualitative and quantitative results of our experiments indicate that the model trained on radially subsampled data attains higher performance and learns to estimate reconstructions with higher fidelity, paving the way for other DL approaches to adopt radial subsampling.
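The two retrospective subsampling schemes compared above can be illustrated as binary masks multiplied onto a 2D k-space: a rectilinear mask keeps every R-th phase-encode line (plus a fully sampled centre), while a radial mask keeps straight spokes through the k-space centre. The acceleration factor, spoke count and array sizes below are arbitrary assumptions for illustration only.

```python
import numpy as np

def rectilinear_mask(shape, acceleration=4, center_lines=8):
    """Cartesian line-skipping mask with a fully sampled low-frequency centre."""
    ny, nx = shape
    mask = np.zeros(shape, dtype=bool)
    mask[::acceleration, :] = True            # keep every R-th phase-encode line
    c = ny // 2
    mask[c - center_lines // 2: c + center_lines // 2, :] = True
    return mask

def radial_mask(shape, n_spokes=32):
    """Approximate radial mask: rasterised spokes through the k-space centre."""
    ny, nx = shape
    mask = np.zeros(shape, dtype=bool)
    cy, cx = ny / 2, nx / 2
    r = np.hypot(cy, cx)
    for ang in np.linspace(0, np.pi, n_spokes, endpoint=False):
        t = np.linspace(-r, r, 4 * max(ny, nx))
        ys = np.clip(np.round(cy + t * np.sin(ang)).astype(int), 0, ny - 1)
        xs = np.clip(np.round(cx + t * np.cos(ang)).astype(int), 0, nx - 1)
        mask[ys, xs] = True
    return mask

# Simulated accelerated acquisition on random complex k-space data.
kspace = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
subsampled = kspace * rectilinear_mask(kspace.shape)
```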
Ductal Carcinoma in Situ (DCIS) constitutes 20–25% of all diagnosed breast cancers and is a well-known potential precursor of invasive breast cancer [1]. The gold standard for diagnosing DCIS involves the detection of calcifications and abnormal cell proliferation in mammary ducts in Hematoxylin and Eosin (H&E) stained whole-slide images (WSIs). Automatic duct detection may facilitate this task as well as downstream applications that currently require tedious, manual annotation of ducts; examples are grading of DCIS lesions [2] and prediction of local recurrence of DCIS [3]. Several methods have been developed for object detection in the field of deep learning. Such models are typically initialised with ImageNet transfer-learning features, as the limited availability of annotated medical images has hindered the creation of domain-specific encoders. Novel techniques such as self-supervised learning (SSL) promise to overcome this problem by utilising unlabelled data to learn feature encoders. SSL encoders trained on unlabelled ImageNet have demonstrated SSL's capacity to produce meaningful representations, scoring higher than supervised features on the ImageNet 1% classification task [4]. In the domain of histopathology, dedicated feature encoders (Histo encoders) have been developed [5, 6]. In classification experiments with linear regression, frozen features of these encoders outperformed those of ImageNet encoders. However, when models initialised with histopathology and ImageNet encoders were fine-tuned on the same classification tasks, there were no differences in performance between the encoders [5, 6]. Furthermore, the transferability of SSL encodings to object detection is poorly understood [4]. These findings show that more research is needed to develop training strategies for SSL encoders that can enhance performance in relevant downstream tasks.
In our study, we investigated whether current state-of-the-art SSL methods can provide model initialisations that outperform ImageNet pre-training on the task of duct detection in WSIs of breast tissue resections. We compared the performance of these SSL-based histopathology encodings (Histo-SSL) with ImageNet pre-training (supervised and self-supervised) and training from scratch. Additionally, we compared the performance of our Histo-SSL encodings with the published Histo encoders by Ciga [5] and Mormont [6] on the same task.
The fovea is an important clinical landmark that is used as a reference for assessing various quantitative measures, such as central retinal thickness or drusen count. In this paper we propose a novel method for automatic detection of the foveal center in Optical Coherence Tomography (OCT) scans. Although the clinician will generally aim to center the OCT scan on the fovea, post-acquisition image processing gives a more accurate estimate of the true location of the foveal center. A Convolutional Neural Network (CNN) was trained on a set of 781 OCT scans to classify each pixel in an OCT B-scan with a probability of belonging to the fovea. Dilated convolutions were used to obtain a large receptive field while maintaining pixel-level accuracy. In order to train the network more effectively, negative patches were sampled selectively after each epoch. After CNN classification of the entire OCT volume, the predicted foveal center was chosen as the voxel with maximum output probability after applying an optimized three-dimensional Gaussian blurring. We evaluate the performance of our method on a data set of 99 OCT scans presenting different stages of Age-related Macular Degeneration (AMD). The fovea was correctly detected in 96.9% of the cases, with a mean distance error of 73 μm (±112 μm). This result is comparable to the performance of a second human observer, who obtained a mean distance error of 69 μm (±94 μm). Experiments showed that the proposed method is accurate and robust even in retinas heavily affected by pathology.
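The final localisation step described above (Gaussian blurring of the per-voxel probabilities followed by an argmax) can be sketched in a few lines. The sigma values, axis ordering and volume size are placeholder assumptions; the paper's optimized blur parameters are not stated here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveal_center(prob_volume, sigma=(1.0, 5.0, 5.0)):
    """Smooth a 3D CNN probability map and return the argmax voxel index."""
    smoothed = gaussian_filter(prob_volume, sigma=sigma)
    return np.unravel_index(np.argmax(smoothed), prob_volume.shape)

# Toy volume with a small blob of high fovea probability.
vol = np.zeros((16, 64, 64), dtype=float)
vol[8, 30:33, 30:33] = 1.0
center = foveal_center(vol)
```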
Age-related Macular Degeneration (AMD) is a common eye disorder with high prevalence in elderly people. The disease mainly affects the central part of the retina and can ultimately lead to permanent vision loss. Optical Coherence Tomography (OCT) is becoming the standard imaging modality in the diagnosis of AMD and the assessment of its progression. However, the evaluation of the obtained volumetric scan is time-consuming and expensive, and the signs of early AMD are easy to miss. In this paper we propose a classification method to automatically distinguish AMD patients from healthy subjects with high accuracy. The method is based on an unsupervised feature learning approach and processes the complete image without the need for an accurate pre-segmentation of the retina. The method can be divided into two steps: an unsupervised clustering stage that extracts a set of small descriptive image patches from the training data, and a supervised training stage that uses these patches to create a patch occurrence histogram for every image, on which a random forest classifier is trained. Experiments using 384 volume scans show that the proposed method is capable of identifying AMD patients with high accuracy, obtaining an area under the Receiver Operating Curve of 0.984. Our method allows for a quick and reliable assessment of the presence of AMD pathology in OCT volume scans without the need for accurate layer segmentation algorithms.
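The two-stage pipeline above can be sketched with off-the-shelf components: cluster small image patches into a visual dictionary, describe each scan by a normalised patch occurrence histogram, and train a random forest on those histograms. All data shapes, cluster counts and patch sizes below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stage 1: unsupervised clustering of flattened 9x9 patches into a dictionary.
patches = rng.normal(size=(1000, 81))
dictionary = KMeans(n_clusters=16, n_init=4, random_state=0).fit(patches)

def occurrence_histogram(scan_patches):
    """Normalised histogram of dictionary-word occurrences for one scan."""
    words = dictionary.predict(scan_patches)
    return np.bincount(words, minlength=16) / len(words)

# Stage 2: supervised training on per-scan histograms (toy labels).
X = np.stack([occurrence_histogram(rng.normal(size=(200, 81))) for _ in range(20)])
y = rng.integers(0, 2, size=20)               # AMD vs healthy (synthetic)
clf = RandomForestClassifier(random_state=0).fit(X, y)
```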
Detection of tuberculosis (TB) on chest radiographs (CXRs) is a hard problem. Therefore, to help radiologists or even take their place when they are not available, computer-aided detection (CAD) systems are being developed. In order to reach a performance comparable to that of human experts, the pattern recognition algorithms of these systems are typically trained on large CXR databases that have been manually annotated to indicate the abnormal lung regions. However, manually outlining those regions constitutes a time-consuming process that is, moreover, prone to inconsistencies and errors introduced by interobserver variability and the absence of an external reference standard. In this paper, we investigate an alternative pattern classification method, namely multiple-instance learning (MIL), that does not require such detailed information for a CAD system to be trained. We have applied this alternative approach to a CAD system aimed at detecting textural lesions associated with TB. Only the case (or image) condition (normal or abnormal) was provided in the training stage. We compared the resulting performance with those achieved by several variations of a conventional system trained with detailed annotations. A database of 917 CXRs was constructed for experimentation. It was divided into two roughly equal parts that were used as training and test sets. The area under the receiver operating characteristic curve was utilized as a performance measure. Our experiments show that, by applying the investigated MIL approach, results comparable to those of the aforementioned conventional systems are obtained in most cases, without requiring condition information at the lesion level.
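The core MIL idea used here, training from image-level labels only, can be illustrated by the standard bag-score formulation: an image (bag) is scored by aggregating per-region (instance) scores, for example with a maximum, since an abnormal image needs only one abnormal region. This is a generic MIL sketch with placeholder scores, not the paper's specific classifier.

```python
import numpy as np

def bag_score(instance_probs):
    """Image-level abnormality score: max over per-region probabilities."""
    return float(np.max(instance_probs))

# Toy per-region probabilities from a hypothetical textural-lesion detector.
normal_image = np.array([0.05, 0.10, 0.08])
tb_image = np.array([0.04, 0.92, 0.11])   # one suspicious region suffices
```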
Age-related macular degeneration (AMD) is a degenerative disorder of the central part of the retina, which mainly affects older people and leads to permanent loss of vision in advanced stages of the disease. AMD grading of non-advanced AMD patients allows risk assessment for the development of advanced AMD and enables timely treatment of patients to prevent vision loss. AMD grading is currently performed manually on color fundus images, which is time consuming and expensive. In this paper, we propose a supervised classification method to distinguish patients at high risk of developing advanced AMD from low risk patients and to provide an exact AMD stage determination. The method is based on the analysis of the number and size of drusen on color fundus images, as drusen are the early characteristics of AMD. An automatic drusen detection algorithm is used to detect all drusen. A weighted histogram of the detected drusen is constructed to summarize drusen extension and size, and is fed into a random forest classifier in order to separate low risk from high risk patients and to allow exact AMD stage determination. Experiments showed that the proposed method achieved performance similar to human observers in distinguishing low risk from high risk AMD patients, obtaining areas under the Receiver Operating Characteristic curve of 0.929 and 0.934. Weighted kappa agreements of 0.641 and 0.622 with two observers were obtained for AMD stage evaluation. Our method allows for quick and reliable AMD staging at low cost.
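The weighted-histogram feature described above can be sketched as a histogram over drusen size bins, weighted by drusen area so that both count and extent are captured in one vector. The bin edges, units and function name are illustrative assumptions; the paper does not specify them here.

```python
import numpy as np

def drusen_feature(drusen_areas_um2, bins=(0, 4000, 15000, 50000, np.inf)):
    """Area-weighted histogram of detected drusen over size bins."""
    counts, _ = np.histogram(drusen_areas_um2, bins=bins,
                             weights=drusen_areas_um2)
    return counts

# Toy detections: two small drusen and one large druse.
areas = np.array([1000.0, 2000.0, 20000.0])
feature = drusen_feature(areas)        # could be fed to a random forest
```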
Computer-aided Diagnosis (CAD) systems for the automatic identification of abnormalities in retinal images are gaining importance in diabetic retinopathy screening programs. A huge number of retinal images is collected during these programs, providing a starting point for the design of machine learning algorithms. However, manual annotations of retinal images are scarce and expensive to obtain. This paper proposes a dynamic CAD system based on active learning for the automatic identification of hard exudates, cotton wool spots and drusen in retinal images. An uncertainty sampling method is applied to select samples that need to be labeled by an expert from an unlabeled set of 4000 retinal images. It reduces the number of training samples needed to obtain an optimum accuracy by dynamically selecting the most informative samples. Results show that the proposed method increases the classification accuracy compared to alternative techniques, achieving an area under the ROC curve of 0.87, 0.82 and 0.78 for the detection of hard exudates, cotton wool spots and drusen, respectively.
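The uncertainty sampling step above can be sketched as follows: from an unlabelled pool, pick the samples whose predicted class probability lies closest to 0.5 (the least confident predictions) and send those to the expert for labelling. The query size and probabilities are toy assumptions; the abstract does not detail the exact uncertainty criterion.

```python
import numpy as np

def select_for_labelling(pool_probs, n_queries=5):
    """Indices of the n_queries least-confident samples in the pool."""
    uncertainty = -np.abs(pool_probs - 0.5)   # higher = closer to 0.5
    return np.argsort(uncertainty)[-n_queries:]

# Toy classifier outputs on six unlabelled retinal images.
probs = np.array([0.01, 0.49, 0.97, 0.55, 0.90, 0.52])
queried = select_for_labelling(probs, n_queries=3)
```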
Diabetic Retinopathy is one of the leading causes of blindness and vision defects in developed countries. Early detection and diagnosis are crucial to avoid visual complications. Microaneurysms are the first ocular signs of the presence of this disease, and their detection is of paramount importance for the development of a computer-aided diagnosis technique that permits a prompt diagnosis. However, the detection of microaneurysms in retinal images is a difficult task due to the wide variability that these images usually present in screening programs. We propose a statistical approach based on mixture model-based clustering and logistic regression which is robust to changes in the appearance of retinal fundus images. The method is evaluated on the public database of the Retinopathy Online Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed algorithms.
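The combination of mixture model-based clustering and logistic regression can be illustrated with off-the-shelf components: fit a Gaussian mixture to candidate-lesion features, then train a logistic regression on the resulting cluster posteriors. The features, labels and component count are synthetic assumptions; the paper's actual feature set and model structure are not given in the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic candidate features: background-like vs microaneurysm-like.
features = np.vstack([rng.normal(0, 1, (100, 2)),
                      rng.normal(3, 1, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)

# Mixture model-based clustering, then logistic regression on posteriors.
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
posteriors = gmm.predict_proba(features)
clf = LogisticRegression().fit(posteriors, labels)
```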