Optical-based navigation systems are widely used in surgical interventions. However, despite their great utility and accuracy, they are expensive and require time and effort to set up for surgery. Moreover, traditional navigation systems use 2D screens to display instrument positions, forcing surgeons to look away from the operative field. Head-mounted displays such as the Microsoft HoloLens may provide an attractive alternative for surgical navigation that also permits augmented-reality visualization. The HoloLens is equipped with multiple sensors for tracking and scene understanding. Both mono- and stereo-vision in the HoloLens have been reported for marker tracking, but no extensive accuracy evaluation comparing the two approaches has been performed. The objective of our work is to investigate the tracking performance of various camera setups in the HoloLens, and to study the effect of marker size, marker distance from the camera, and camera resolution on marker-locating accuracy. We also investigate the speed and stability of marker pose estimation for each camera setup. The tracking approaches are evaluated using ArUco markers. Our results show that mono-vision locates markers more accurately than stereo-vision when high resolution is used. However, this comes at the expense of higher frame processing time. Alternatively, we propose a combined low-resolution mono-stereo tracking setup that outperforms each tracking approach individually and is comparable to high-resolution mono tracking, with a mean translational error of 1.8±0.6 mm for a 10 cm marker at 50 cm distance. We further discuss our findings and their implications for navigation in surgical interventions.
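The stereo-accuracy findings above have a simple geometric intuition: for a stereo camera pair, depth error grows quadratically with distance for a fixed disparity error. A minimal illustration (focal length, baseline and disparity noise below are assumed values, not HoloLens specifications):

```python
# Illustrative sketch (not the paper's implementation): for a stereo rig,
# depth Z = f*B/d implies that a fixed disparity error dd causes a depth
# error growing quadratically with distance: dZ ~ Z^2 / (f*B) * dd.
# All numbers (focal length, baseline, disparity noise) are assumptions.

def stereo_depth_error(z_m, focal_px=1000.0, baseline_m=0.1, disparity_err_px=0.25):
    """Approximate depth uncertainty (in meters) at distance z_m (meters)."""
    return (z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

for z in (0.5, 1.0, 2.0):
    print(f"distance {z:.1f} m -> depth error ~{stereo_depth_error(z) * 1000:.2f} mm")
```

Doubling the distance quadruples the depth error, which is consistent with stereo tracking degrading faster than mono tracking at larger marker distances.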
Analysis of longitudinal changes in imaging studies often involves both segmentation of structures of interest and registration of multiple timeframes. The accuracy of such analysis could benefit from a tailored framework that jointly optimizes both tasks to fully exploit the information available in the longitudinal data. Most learning-based registration algorithms, including joint optimization approaches, currently suffer from bias due to the selection of a fixed reference frame and only support pairwise transformations. We here propose an analytical framework based on an unbiased learning strategy for group-wise registration that simultaneously registers images to the mean space of a group to obtain consistent segmentations. We evaluate the proposed method on longitudinal analysis of a white matter tract in a brain MRI dataset with 2-3 time-points for 3249 individuals, i.e., 8045 images in total. The reproducibility of the method is evaluated on test-retest data from 97 individuals. The results confirm that the implicit reference image is an average of the input images. In addition, the proposed framework leads to consistent segmentations and significantly lower processing bias than that of a pair-wise fixed-reference approach. This processing bias is even smaller than that obtained when translating segmentations by only one voxel, which can be attributed to subtle numerical instabilities and interpolation. Therefore, we postulate that the proposed mean-space learning strategy could be widely applied to learning-based registration tasks. In addition, this group-wise framework introduces a novel approach for learning-based longitudinal studies by directly constructing an unbiased within-subject template, allowing reliable and efficient analysis of spatio-temporal imaging biomarkers.
In this work, we propose a consistent ultrasound volume stitching framework intended to produce a volume with higher image quality and an extended field of view. Directly using pair-wise registrations for stitching may lead to geometric errors. Therefore, we propose an approach to improve the image alignment by optimizing a consistency metric over multiple pairwise registrations. In the optimization, we utilize transformed points to effectively compute a distance between rigid transformations. The method has been evaluated on synthetic, phantom and clinical data. The results indicate that our transformation optimization method is effective and our stitching framework has good geometric precision. Also, the compounded images show improved CNR values.
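The point-based distance between rigid transformations used in the consistency optimization can be sketched as follows; the reference points below are assumed for illustration, not taken from the paper:

```python
import numpy as np

# Minimal sketch: measure the distance between two rigid transforms (R, t)
# by applying both to a fixed set of reference points and taking the RMS of
# the resulting point-to-point distances. The reference points are assumed.

def rigid_transform_distance(R1, t1, R2, t2, points):
    p1 = points @ R1.T + t1
    p2 = points @ R2.T + t2
    return np.sqrt(np.mean(np.sum((p1 - p2) ** 2, axis=1)))

pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
I = np.eye(3)
# A pure 2-unit translation along x yields a distance of exactly 2.
d = rigid_transform_distance(I, np.array([2., 0., 0.]), I, np.zeros(3), pts)
print(d)  # 2.0
```

Unlike a naive norm on rotation matrices or Euler angles, this distance directly reflects how far the two transforms move actual image points, which makes it well suited as an optimization target.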
Advances in computer hardware and software have enabled the automated extraction of biomarkers from large-scale imaging studies by means of image processing pipelines. For large cohort studies, ample storage and computing resources are required: pipelines are typically executed in parallel on one or more high-performance computing (HPC) clusters. As processing is distributed, it becomes more cumbersome to obtain detailed progress and status information of large-scale experiments. Especially in a research-oriented environment, where image processing pipelines are often in an experimental stage, debugging is a crucial part of the development process that relies heavily on a tight collaboration between pipeline developers and clinical researchers. Debugging a running pipeline is a challenging and time-consuming process for seasoned pipeline developers, and nearly impossible for clinical researchers: it often involves parsing complex logging systems and text files, and requires special knowledge of the HPC environment. In this paper, we present the Pipeline Inspection and Monitoring web application (PIM). The goal of PIM is to make it more straightforward and less time-consuming to inspect complex, long-running image processing pipelines, irrespective of the level of technical expertise and the workflow engine. PIM provides an interactive, visualization-based web application to intuitively track progress, view pipeline structure and debug running image processing pipelines. The level of detail is fully customizable, supporting a wide variety of tasks (e.g. quick inspection and thorough debugging) and thereby facilitating both clinical researchers and pipeline developers in monitoring and debugging.
Correct diagnosis of the liver tumor phenotype is crucial for treatment planning, especially the distinction between malignant and benign lesions. Clinical practice includes manual scoring of the tumors on Magnetic Resonance (MR) images by a radiologist. As this is challenging and subjective, it is often followed by a biopsy. In this study, we propose a radiomics approach as an objective and non-invasive alternative for distinguishing between malignant and benign phenotypes. T2-weighted (T2w) MR sequences of 119 patients from multiple centers were collected. We developed an efficient semi-automatic segmentation method, which was used by a radiologist to delineate the tumors. Within these regions, features quantifying tumor shape, intensity, texture, heterogeneity and orientation were extracted. Patient characteristics and semantic features were added for a total of 424 features. Classification was performed using Support Vector Machines (SVMs). The performance was evaluated using internal random-split cross-validation. On the training set within each iteration, feature selection and hyperparameter optimization were performed. To this end, another cross-validation was performed by splitting the training sets into training and validation parts. The optimal settings were evaluated on the independent test sets. Manual scoring by a radiologist was also performed. The radiomics approach resulted in 95% confidence intervals of [0.75, 0.92] for the AUC, [0.76, 0.96] for specificity and [0.52, 0.82] for sensitivity. These approach the performance of the radiologist, who achieved an AUC of 0.93, specificity of 0.70 and sensitivity of 0.93. Hence, radiomics has the potential to predict liver tumor benignity in an objective and non-invasive manner.
Multimodal groupwise registration has been of growing interest to the image processing community due to developments in scanner technologies (e.g. multiparametric MRI, DCE-CT or PET-MR) that have increased both the number of modalities and the number of images under consideration. In this work a novel methodology is presented for multimodal groupwise registration that is based on Laplacian eigenmaps, a nonlinear dimensionality reduction technique. Compared to recently proposed dissimilarity metrics based on principal component analysis, the proposed metric should better capture the intensity relationships between different images in the group. The metric is constructed as the second-smallest eigenvalue of the eigenvector problem defined in Laplacian eigenmaps. The method was validated in three distinct experiments: a non-linear synthetic registration experiment, the registration of quantitative MRI data of the carotid artery, and the registration of multimodal data of the brain (RIRE). The results show increased accuracy and robustness compared to other state-of-the-art groupwise registration methodologies.
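The construction of such a metric can be sketched as follows, under our own simplifying assumptions (a Gaussian affinity with an assumed width `sigma`; this is not the paper's exact implementation):

```python
import numpy as np

# Sketch: stack one intensity vector per voxel (its intensity in each image
# of the group), build a graph Laplacian from pairwise Gaussian affinities,
# and take the second-smallest eigenvalue of the normalized Laplacian as
# the dissimilarity value. The kernel width `sigma` is an assumed choice.

def laplacian_eigenmaps_metric(samples, sigma=1.0):
    diff = samples[:, None, :] - samples[None, :, :]
    W = np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * sigma ** 2))
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - d_inv_sqrt @ W @ d_inv_sqrt
    eigvals = np.linalg.eigvalsh(L_sym)
    return eigvals[1]  # eigvals[0] is ~0 for a connected affinity graph

rng = np.random.default_rng(0)
val = laplacian_eigenmaps_metric(rng.normal(size=(12, 3)))
print(val)
```

The second-smallest eigenvalue is small when the intensity vectors form a tight, low-dimensional structure, i.e. when the images in the group have a consistent intensity relationship, which is what the registration drives towards.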
Both normal aging and neurodegenerative diseases such as Alzheimer’s disease cause morphological changes of the brain. To better distinguish between normal and abnormal cases, it is necessary to model changes in brain morphology owing to normal aging. To this end, we developed a method for analyzing and visualizing these changes for the entire brain morphology distribution in the general aging population. The method is applied to 1000 subjects from a large population imaging study in the elderly, from which 900 were used to train the model and 100 were used for testing. The results of the 100 test subjects show that the model generalizes to subjects outside the model population. Smooth percentile curves showing the brain morphology changes as a function of age and spatiotemporal atlases derived from the model population are publicly available via an interactive web application at agingbrain.bigr.nl.
The apparent diffusion coefficient (ADC) is an imaging biomarker providing quantitative information on the diffusion of water in biological tissues. This measurement could be of relevance in oncology drug development, but it suffers from a lack of reliability. ADC images are computed by applying a voxelwise exponential fitting to multiple diffusion-weighted MR images (DW-MRIs) acquired with different diffusion gradients. In the abdomen, respiratory motion induces misalignments in the datasets, creating visible artefacts and inducing errors in the ADC maps. We propose a multistep post-acquisition motion compensation pipeline based on 3D non-rigid registrations. It corrects for motion within each image and brings all DW-MRIs to a common image space. The method is evaluated on 10 datasets of free-breathing abdominal DW-MRIs acquired from healthy volunteers. Regions of interest (ROIs) are segmented in the right part of the abdomen and measurements are compared in the three following cases: no image processing, Gaussian blurring of the raw DW-MRIs and registration. Results show that both blurring and registration improve the visual quality of ADC images, but compared to blurring, registration yields visually sharper images. Measurement uncertainty is reduced both by registration and blurring. For homogeneous ROIs, blurring and registration result in similar median ADCs, which are lower than without processing. In a ROI at the interface between liver and kidney, registration and blurring yield different median ADCs, suggesting that uncorrected motion introduces a bias. Our work indicates that averaging procedures on the scanner should be avoided, as they remove the opportunity to perform motion correction.
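The voxelwise exponential fit behind the ADC maps can be sketched as a linear least-squares fit in log space; the b-values and signal model below are illustrative assumptions:

```python
import numpy as np

# Sketch of the voxelwise ADC computation: the DW signal follows
# S(b) = S0 * exp(-b * ADC), so a linear least-squares fit of ln(S)
# against the b-values yields the ADC as minus the slope.
# The b-values and signal amplitude below are assumed.

def fit_adc(signals, bvalues):
    """signals: (n_b, n_voxels) array of magnitudes; returns ADC per voxel."""
    logs = np.log(signals)
    A = np.vstack([bvalues, np.ones_like(bvalues)]).T
    coeffs = np.linalg.lstsq(A, logs, rcond=None)[0]  # rows: slope, intercept
    return -coeffs[0]

b = np.array([0.0, 200.0, 500.0, 800.0])   # s/mm^2
true_adc = 1.5e-3                          # mm^2/s, a typical soft-tissue value
S = 100.0 * np.exp(-b[:, None] * true_adc)
print(fit_adc(S, b))  # -> [0.0015]
```

Because each voxel is fitted independently across the b-value images, any misalignment between those images directly corrupts the fit, which is why motion compensation before fitting matters.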
In this work, we investigate nonrigid motion compensation in simultaneously acquired (side-by-side) B-mode ultrasound (BMUS) and contrast enhanced ultrasound (CEUS) image sequences of the carotid artery. These images are acquired to study the presence of intraplaque neovascularization (IPN), which is a marker of plaque vulnerability. IPN quantification is visualized by performing the maximum intensity projection (MIP) on the CEUS image sequence over time. As carotid images contain considerable motion, accurate global nonrigid motion compensation (GNMC) is required prior to the MIP. Moreover, we demonstrate that an improved lumen and plaque differentiation can be obtained by averaging the motion compensated BMUS images over time. We propose to use a previously published 2D+t nonrigid registration method, which is based on minimization of pixel intensity variance over time, using a spatially and temporally smooth B-spline deformation model. The validation compares displacements of plaque points with manual tracking by 3 experts in 11 carotids. The average (± standard deviation) root mean square error (RMSE) was 99±74μm for longitudinal and 47±18μm for radial displacements. These results were comparable with the interobserver variability, and with results of a local rigid registration technique based on speckle tracking, which estimates motion in a single point, whereas our approach applies motion compensation to the entire image. In conclusion, our evaluation shows that the GNMC technique produces reliable results. Since this technique tracks global deformations, it can aid in the quantification of IPN and the delineation of lumen and plaque contours.
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
For quantitative MRI techniques, such as T1 and T2 mapping and Diffusion Tensor Imaging (DTI), a model has to be
fit to several MR images that are acquired with suitably chosen different acquisition settings. The most efficient
estimator to retrieve the parameters is the Maximum Likelihood (ML) estimator. However, the standard ML
estimator is biased for finite sample sizes. In this paper we derive a bias correction formula for magnitude MR
images. This correction is applied in two different simulation experiments, a T2 mapping experiment and a DTI
experiment. We show that the correction formula successfully removes the bias. As the correction is performed
as post-processing, it is possible to retrospectively correct the results of previous quantitative experiments. With
this procedure more accurate quantitative values can be obtained from quantitative MR acquisitions.
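The paper derives its own correction for the ML estimator; as a simpler illustration of the same idea, the sketch below uses the classical Rician magnitude identity rather than the paper's exact formula (signal level and noise standard deviation are assumed):

```python
import numpy as np

# Not the paper's exact formula: as an illustration of magnitude bias and
# its removal, we use the classical Rician identity
#   E[M^2] = A^2 + 2*sigma^2,
# giving the corrected signal estimate A_hat = sqrt(mean(M^2) - 2*sigma^2).
# Signal level A and noise sigma below are assumed simulation parameters.

rng = np.random.default_rng(1)
A, sigma, n = 10.0, 2.0, 200_000
magnitude = np.hypot(A + rng.normal(0, sigma, n), rng.normal(0, sigma, n))
naive = magnitude.mean()                                   # biased upwards
a_hat = np.sqrt(np.mean(magnitude ** 2) - 2.0 * sigma ** 2)
print(naive, a_hat)  # naive overestimates A=10; a_hat is close to 10
```

As in the paper, the correction here is pure post-processing on the measured magnitudes, so it could in principle be applied retrospectively.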
Segmentation of brain structures in magnetic resonance images is an important task in neuroimage analysis. Several
papers on this topic have shown the benefit of supervised classification based on local appearance features, often combined with atlas-based approaches. These methods require a representative annotated training set and therefore often do not perform well if the target image is acquired on a different scanner or with a different acquisition protocol than the training images. Assuming that the appearance of the brain is determined by the underlying brain tissue distribution and that brain tissue classification can be performed robustly for images obtained with different protocols, we propose to derive appearance features from brain-tissue density maps instead of directly from the MR images. We evaluated this approach on hippocampus segmentation in two sets of images acquired with substantially different imaging protocols and on different scanners. While a combination of conventional appearance features trained on data from a different scanner with multi-atlas segmentation performed poorly with an average Dice overlap of 0.698, the local appearance model based on the new acquisition-independent features significantly improved (0.783) over atlas-based segmentation alone (0.728).
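The Dice overlap figures quoted above follow the standard definition 2|A∩B|/(|A|+|B|); a minimal sketch for binary label maps:

```python
import numpy as np

# Standard Dice similarity coefficient for two binary segmentations:
# 2 * |A ∩ B| / (|A| + |B|); by convention two empty masks give 1.0.

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

x = np.zeros((4, 4), bool); x[:2] = True   # 8 voxels (rows 0-1)
y = np.zeros((4, 4), bool); y[1:3] = True  # 8 voxels (rows 1-2), 4 overlap
print(dice(x, y))  # 0.5
```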
Improving the resolution in magnetic resonance imaging (MRI) is always done at the expense of either the signal-to-noise
ratio (SNR) or the acquisition time. This study investigates whether so-called super-resolution reconstruction (SRR) is an
advantageous alternative to direct high-resolution (HR) acquisition in terms of the SNR and acquisition time trade-offs.
An experimental framework was designed to accommodate the comparison of SRR images with direct high-resolution
acquisitions with respect to these trade-offs. The framework consisted, on one side, of an image acquisition scheme,
based on theoretical relations between resolution, SNR, and acquisition time, and, on the other side, of a protocol for
reconstructing SRR images from a varying number of acquired low-resolution (LR) images. The quantitative experiments
involved a physical phantom containing structures of known dimensions. Images reconstructed by three SRR methods, one
based on iterative back-projection and two on regularized least squares, were quantitatively and qualitatively compared
with direct HR acquisitions. To visually validate the quantitative evaluations, qualitative experiments were performed, in
which images of three different subjects (a phantom, an ex-vivo rat knee, and a post-mortem mouse) were acquired with
different MRI scanners. The quantitative results indicate that for long acquisition times, when multiple acquisitions are
averaged to improve SNR, SRR can achieve better resolution at better SNR than direct HR acquisitions.
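The underlying trade-off can be illustrated with the standard scaling relations: SNR is proportional to voxel volume and to the square root of the total acquisition time. The numbers below are purely illustrative, not from the experiments above:

```python
# Back-of-the-envelope version of the trade-off: SNR scales linearly with
# voxel volume and with sqrt(number of averaged acquisitions). Halving the
# voxel volume (direct HR) halves the SNR, while averaging N LR inputs for
# SRR recovers a factor sqrt(N). All numbers are illustrative assumptions.

def relative_snr(voxel_volume, n_averages):
    return voxel_volume * n_averages ** 0.5

hr = relative_snr(voxel_volume=0.5, n_averages=1)   # one direct HR scan
lr = relative_snr(voxel_volume=1.0, n_averages=4)   # 4 LR inputs for SRR
print(hr, lr)  # 0.5 2.0
```

This is why, for long acquisition times, combining many LR acquisitions via SRR can be competitive with direct HR acquisition.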
Next to aneurysm size, aneurysm growth over time is an important indicator for aneurysm rupture risk. Manual
assessment of aneurysm growth is a cumbersome procedure, prone to inter-observer and intra-observer variability. In
clinical practice, mainly qualitative assessment and/or diameter measurement are routinely performed. In this paper a
semi-automated method for quantifying aneurysm volume growth over time in CTA data is presented. The method treats
a series of longitudinal images as a 4D dataset. Using a 4D groupwise non-rigid registration method, deformations with
respect to the baseline scan are determined. Combined with 3D aneurysm segmentation in the baseline scan, volume
change is assessed using the deformation field at the aneurysm wall. For ten patients, the results of the method are
compared with reports from expert clinicians, showing that the quantitative results of the method are in line with the
assessment in the radiology reports. The method is also compared to an alternative method in which the volume is
segmented in each 3D scan individually, showing that the 4D groupwise registration method agrees better with manual
assessment.
This paper presents a level set based method for segmenting the outer vessel wall and plaque components of the carotid
artery in CTA. The method employs a GentleBoost classification framework that classifies pixels as calcified or
non-calcified, and as inside or outside the vessel wall. The combined result of both classifications is used to construct a speed
function for level set based segmentation of the outer vessel wall; the segmented lumen is used to initialize the level set.
The method has been optimized on 20 datasets and evaluated on 80 datasets for which manually annotated data was
available as reference. The average Dice similarity of the outer vessel wall segmentation was 92%, which compares
favorably to previous methods.
We propose a minimum cost path approach to track the centerlines of the internal and external carotid arteries in
multispectral MR data. User interaction is limited to the annotation of three seed points. The cost image is based
on both a measure of vessel medialness and lumen intensity similarity in two MRA image sequences: Black Blood
MRA and Phase Contrast MRA. After intensity inhomogeneity correction and noise reduction, the two images are
aligned using affine registration. The two parameters that control the contrast of the cost image were determined
in an optimization experiment on 40 training datasets. Experiments on the training datasets also showed that a cost
image composed of a combination of gradient-based medialness and lumen intensity similarity increases the tracking
accuracy compared to using only one of the constituents. Furthermore, centerline tracking using both MRA sequences
outperformed tracking using only one of these MRA images. An independent test set of 152 images from 38 patients
served to validate the technique. The centerlines of 148 images were successfully extracted using the parameters
optimized on the training sets. The average mean distance to the reference standard, manually annotated centerlines,
was 0.98 mm, which is comparable to the in-plane resolution. This indicates that the proposed method has a high
potential to replace the manual centerline annotation.
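A minimum cost path on a cost image can be extracted with Dijkstra's algorithm; the sketch below uses a small synthetic 2D cost image rather than the medialness and lumen-intensity cost described above:

```python
import heapq
import numpy as np

# Illustrative minimum cost path extraction on a 2D cost image using
# Dijkstra's algorithm on a 4-connected grid. The cost image is synthetic;
# in the method above it would combine medialness and intensity similarity.

def min_cost_path(cost, start, end):
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue  # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# A cheap "vessel" row embedded in an expensive background:
cost = np.full((5, 5), 10.0)
cost[2, :] = 1.0
path = min_cost_path(cost, (2, 0), (2, 4))
print(path)  # follows row 2 from left to right
```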
Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is becoming an indispensable tool to non-invasively
study tumor characteristics. However, many different DCE-analysis methods are currently being used. To
compare and validate different methods, histology is the gold standard. For this purpose, exact co-localization between
histology and MR images is a prerequisite. In this study, a methodology is developed to validate DCE data with
histology with an emphasis on correct registration of DCE-MRI and histological data. A pancreatic tumor was grown in
a rat model. The tumor was dissected after MR imaging, embedded in paraffin, and cut into thin slices. These slices were
stained with haematoxylin and eosin, digitized and stacked in a 3D volume. Next, the 3D histology was registered to ex-vivo
SWI-weighted MR images, which in turn were registered to in-vivo SWI and DCE images to achieve correct co-localization.
Semi-quantitative and quantitative parameters were calculated. Preliminary results suggest that both
pharmacokinetic and heuristic DCE-parameters can discriminate between vital and non-vital tumor regions. The
developed method offers the basis for an accurate spatial correlation between DCE-MRI derived parametric maps and
histology, and facilitates the evaluation of different DCE-MRI analysis methods.
Automatic quantification of carotid artery plaque composition is important in the development of methods that
distinguish vulnerable from stable plaques. MRI has been shown to be capable of imaging different components noninvasively.
We present a new plaque classification method which uses 3D registration of histology data with ex vivo
MRI data, using non-rigid registration, both for training and evaluation. This is more objective than previously presented
methods, as it eliminates selection bias that is introduced when 2D MRI slices are manually matched to histological
slices before evaluation.
Histological slices of human atherosclerotic plaques were manually segmented into necrotic core, fibrous tissue and
calcification. Classification of these three components was voxelwise evaluated. As features the intensity, gradient
magnitude and Laplacian in four MRI sequences after different degrees of Gaussian smoothing, and the distances to the
lumen and the outer vessel wall, were used. Performance of linear and quadratic discriminant classifiers for different
combinations of features was evaluated. Best accuracy (72.5 ± 7.7%) was reached with the linear classifier when all
features were used. Although this was only a minor improvement over the accuracy of a classifier that included only the
intensities and distance features (71.6 ± 7.9%), the difference was statistically significant (paired t-test, p<0.05). Good
sensitivity and specificity for calcification was reached (83% and 95% respectively), however, differentiation between
fibrous (sensitivity 85%, specificity 60%) and necrotic tissue (sensitivity 49%, specificity 89%) was more difficult.
Cardiac magnetic resonance perfusion imaging (CMR) and computed tomography angiography (CTA) are widely used to
assess heart disease. CMR is used to measure the global and regional myocardial function and to evaluate the presence of
ischemia; CTA is used for diagnosing coronary artery disease, such as coronary stenoses. Nowadays, the hemodynamic
significance of coronary artery stenoses is determined subjectively by combining information on myocardial function with
assumptions on coronary artery territories. As the anatomy of coronary arteries varies greatly between individuals, we
developed a patient-specific tool for relating CTA and perfusion CMR data. The anatomical and functional information
extracted from CTA and CMR data are combined into a single frame of reference. Our graphical user interface provides
various options for visualization. In addition to the standard perfusion Bull's Eye Plot (BEP), it is possible to overlay a 2D
projection of the coronary tree on the BEP, to add a 3D coronary tree model and to add a 3D heart model. The perfusion
BEP, the 3D-models and the CTA data are also interactively linked. Using the CMR and CTA data of 14 patients, our
tool directly established a spatial correspondence between diseased coronary artery segments and myocardial regions with
abnormal perfusion. The location of coronary stenoses and perfusion abnormalities were visualized jointly in 3D, thereby
facilitating the study of the relationship between the anatomic causes of a blocked artery and the physiological effects on
the myocardial perfusion. This tool is expected to improve diagnosis and therapy planning of early-stage coronary artery
disease.
Computed tomography angiography (CTA), a non-invasive imaging technique, is becoming increasingly popular for cardiac
examination, mainly due to its superior spatial resolution compared to MRI. This imaging modality is currently widely
used for the diagnosis of coronary artery disease (CAD) but it is not commonly used for the assessment of ventricular and
atrial function. In this paper, we present a fully automatic method for segmenting the whole heart (i.e. the outer surface of
the myocardium) and cardiac chambers from CTA datasets. Cardiac chamber segmentation is particularly valuable for the
extraction of ventricular and atrial functional information, such as stroke volume and ejection fraction. With our approach,
we aim to improve the diagnosis of CAD by providing functional information extracted from the same CTA data, thus not
requiring additional scanning. In addition, the whole heart segmentation method we propose can be used for visualization
of the coronary arteries and for obtaining a region of interest for subsequent segmentation of the coronaries, ventricles and
atria. Our approach is based on multi-atlas segmentation, and performed within a non-rigid registration framework. A
leave-one-out quantitative validation was carried out on 8 images. The method showed a high accuracy, which is reflected
in both a mean segmentation error of 1.05±1.30 mm and an average Dice coefficient of 0.93. The robustness of the method
is demonstrated by successfully applying the method to 243 additional datasets, without any significant failure.
Accurately quantifying aneurysm shape parameters is of clinical importance, as it is an important factor in choosing the
right treatment modality (i.e. coiling or clipping), in predicting rupture risk and operative risk and for pre-surgical
planning. The first step in aneurysm quantification is to segment it from other structures that are present in the image. As
manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is a need for an
automated method which is accurate and reproducible. In this paper a novel semi-automated method for segmenting
aneurysms in Computed Tomography Angiography (CTA) data based on Geodesic Active Contours is presented and
quantitatively evaluated. Three different image features are used to steer the level set to the boundary of the aneurysm,
namely intensity, gradient magnitude and variance in intensity. The method requires minimum user interaction, i.e.
clicking a single seed point inside the aneurysm which is used to estimate the vessel intensity distribution and to
initialize the level set. The results show that the developed method is reproducible, and performs in the range of interobserver
variability in terms of accuracy.
In this paper we address the problem of 3D shape reconstruction from sparse X-ray projections. We present a correspondence
free method to fit a statistical shape model to two X-ray projections, and illustrate its performance in 3D shape
reconstruction of the femur. The method alternates between 2D segmentation and 3D shape reconstruction, where 2D
segmentation is guided by dynamic programming along the model projection on the X-ray plane. 3D reconstruction is
based on the iterative minimization of the 3D distance between a set of support points and the back-projected silhouette
with respect to the pose and model parameters. We show robustness of the reconstruction on simulated X-ray projection data of the femur, varying the field of view, and in a pilot study on cadaveric femora.
Lejla Alic, Joost Haeck, Stefan Klein, Karin Bol, Sandra van Tiel, Piotr Wielopolski, Magda Bijster, Wiro Niessen, Monique Bernsen, Jifke Veenland, Marion de Jong
Spatial correspondence between histology and multi-sequence MRI can provide information about the capabilities of
non-invasive imaging to characterize cancerous tissue. However, shrinkage and deformation occurring during the
excision of the tumor and the histological processing complicate the co-registration of MR images with histological
sections. This work proposes a methodology to establish a detailed 3D relation between histology sections and in vivo
MRI tumor data. The key features of the methodology are a very dense histological sampling (up to 100 histology slices
per tumor), mutual information based non-rigid B-spline registration, the utilization of the whole 3D data sets, and the
exploitation of an intermediate ex vivo MRI.
In this proof of concept paper, the methodology was applied to one tumor. We found that, after registration, the visual
alignment of tumor borders and internal structures was fairly accurate. Utilizing the intermediate ex vivo MRI, it was
possible to account for changes caused by the excision of the tumor: we observed a tumor expansion of 20%. Also the
effects of fixation, dehydration and histological sectioning could be determined: 26% shrinkage of the tumor was found.
The annotation of viable tissue, performed in histology and transformed to the in vivo MRI, matched clearly with high
intensity regions in MRI. With this methodology, histological annotation can be directly related to the corresponding in
vivo MRI. This is a vital step in evaluating the feasibility of multi-spectral MRI to depict histological ground truth.
In this paper a method to remove the divergence from a vector field is presented. When applied to a displacement field, this removes all local compression and expansion. The method can be used as a post-processing step for (unconstrained) registered images, when volume changes in the deformation field are undesired. The method involves solving Poisson's equation for a large system. Algorithms to solve such systems include Fourier analysis and Cyclic Reduction. These solvers are widely applied in the field of fluid dynamics to compensate for numerical errors in calculated velocity fields. The application to medical image registration, as described in this paper, has to our knowledge not been done before. To show the effect of the method, it is applied to the registration of both synthetic data and dynamic MR series of the liver. The results show that the divergence in the displacement field can be reduced by a factor of 10-1000 and that the accuracy of the registration increases.
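As a rough sketch of the divergence-removal idea: solve Poisson's equation for a potential whose gradient carries all the compression/expansion, then subtract that gradient. The spectral solver below assumes periodic boundaries and a 2D field; the abstract does not specify the solver's boundary handling, so these choices (and the function name) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def remove_divergence(vx, vy, spacing=1.0):
    """Project a 2D vector field onto its divergence-free part by
    solving Poisson's equation laplacian(phi) = div(v) with an FFT
    (periodic boundaries assumed), then subtracting grad(phi)."""
    ny, nx = vx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=spacing)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=spacing)
    KX, KY = np.meshgrid(kx, ky)
    # Divergence in Fourier space: div = d(vx)/dx + d(vy)/dy
    div_hat = KX * np.fft.fft2(vx) + KY * np.fft.fft2(vy)
    # Solve laplacian(phi) = div  =>  phi_hat = div_hat / (KX^2 + KY^2)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0          # avoid division by zero; mean of phi is arbitrary
    phi_hat = div_hat / k2
    phi_hat[0, 0] = 0.0
    # Subtract the gradient of phi to obtain the divergence-free field
    vx_df = vx - np.real(np.fft.ifft2(KX * phi_hat))
    vy_df = vy - np.real(np.fft.ifft2(KY * phi_hat))
    return vx_df, vy_df
```

Applied to a registration displacement field, the output preserves the rotational part of the deformation while removing local volume change.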
A novel 2D slice-based automatic method for model-based segmentation of the outer vessel wall of the common carotid artery in CTA data sets is introduced. The method utilizes a lumen segmentation and AdaBoost, a fast and robust machine learning algorithm, to initially classify (mark) regions outside and inside the vessel wall, using the distance from the lumen and intensity profiles sampled radially from the center of gravity of the lumen. A similar method, using the distance from the lumen and the image intensity as features, is used to classify calcium regions. Subsequently, an ellipse-shaped deformable model is fitted to the classification result. The method achieved a smaller detection error than the inter-observer variability, and it is robust against variation of the training data sets.
It is still unclear whether periventricular and subcortical white matter lesions (WMLs) differ in etiology or clinical
consequences. Studies addressing this issue would benefit from automated segmentation and localization
of WMLs. Several papers have been published on WML segmentation in MR images. Automated localization,
however, has not been investigated as much. This work presents and evaluates a novel method to label segmented
WMLs as periventricular and subcortical.
The proposed technique combines tissue classification and registration-based segmentation to outline the ventricles
in MRI brain data. The segmented lesions can then be labeled into periventricular WMLs and subcortical
WMLs by applying region growing and morphological operations.
The technique was tested on scans of 20 elderly subjects in which neuro-anatomy experts manually segmented
WMLs. Localization accuracy was evaluated by comparing the results of the automated method with a manual
localization. Similarity indices and volumetric intraclass correlations between the automated and the manual
localization were 0.89 and 0.95 for periventricular WMLs and 0.64 and 0.89 for subcortical WMLs, respectively.
We conclude that this automated method for WML localization performs well to excellent in comparison to the
gold standard.
An automatic method is presented to segment the internal carotid arteries through the difficult part of the skull
base in CT angiography. The method uses the entropy per slice to select a cross sectional plane below the skull
base. In this plane 2D circular structures are detected by the Hough transform. The center points are used to
initialize a level set which evolves with a prior shape constraint on its topology. In contrast with some related
vessel segmentation methods, our approach does not require the acquisition of an additional CT scan for bone
masking. Experiments on twenty internal carotids in ten patients show that 19 seed points are correctly identified
(95%) and 18 carotids (90%) are successfully segmented without any human interaction.
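The entropy-per-slice criterion used to select the cross-sectional plane can be illustrated with a minimal sketch; the histogram bin count and the exact entropy definition are assumptions, as the abstract does not specify them.

```python
import numpy as np

def slice_entropy(slice_img, bins=64):
    """Shannon entropy (in bits) of a slice's gray-value histogram.
    Low-entropy slices are nearly homogeneous; the plane-selection
    step compares this value across axial slices."""
    hist, _ = np.histogram(slice_img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0*log 0 := 0)
    return -np.sum(p * np.log2(p))
```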
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue, requires laborious training on manually labeled subjects. In this work, the performance of kNN-based segmentation of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) using manual training is compared with a new method, in which training is automated using an atlas. From 12 subjects, standard T2 and PD scans and a high-resolution, high-contrast scan (Siemens T1-weighted HASTE sequence with reverse contrast) were used as feature sets. For the conventional kNN method, manual segmentations were used for training, and classifications were evaluated in a leave-one-out study. The performance as a function of the number of samples per tissue and of k was studied. For fully automated training, scans were registered to a probabilistic brain atlas. Initial training samples were randomly selected per tissue based on a threshold on the tissue probability; these initial samples were then processed to keep the most reliable ones. The performance of the method for varying thresholds on the tissue probability was studied. Classification results of both methods were validated by measuring the percentage overlap (similarity index, SI). For conventional kNN classification, varying the number of training samples did not result in significant differences, while increasing k gave significantly better results. In the method using automated training, there was an overestimation of GM at the expense of CSF at higher thresholds on the tissue probability maps. The difference between the conventional method (k=45) and the observers was not significantly larger than the inter-observer variability for all tissue types. The automated method performed slightly worse: it performed equal to the observers for WM, but less well for CSF and GM.
From these results it can be concluded that conventional kNN classification may replace manual segmentation, and that atlas-based kNN segmentation has strong potential for fully automated segmentation, without the need for laborious manual training.
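The kNN voting step at the core of both training variants can be sketched as follows. This is a brute-force illustration with hypothetical function and parameter names, not the authors' implementation; the per-voxel feature vectors (e.g. T2, PD and HASTE intensities) would form the columns of the feature arrays.

```python
import numpy as np

def knn_classify(train_feats, train_labels, test_feats, k=45):
    """Assign each test feature vector the majority label among its
    k nearest training samples (Euclidean distance, brute force)."""
    preds = np.empty(len(test_feats), dtype=train_labels.dtype)
    for i, f in enumerate(test_feats):
        d = np.sum((train_feats - f) ** 2, axis=1)   # squared distances
        nearest = train_labels[np.argsort(d)[:k]]    # k nearest labels
        vals, counts = np.unique(nearest, return_counts=True)
        preds[i] = vals[np.argmax(counts)]           # majority vote
    return preds
```

In practice a KD-tree or ball-tree would replace the brute-force distance computation for whole-brain classification.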
Segmentation of the left myocardium in four-dimensional (space-time)
cardiac MRI data sets is a prerequisite of many diagnostic tasks.
We propose a fully automatic method based on global minimization of an
energy functional by means of the graph-cut algorithm.
Starting from automatically obtained segmentations of the left and
right ventricles and a cardiac region of interest, a spatial model is
constructed using simple and plausible assumptions.
This model is used to learn the appearance of different tissue types
by non-parametric robust estimation.
Our method does not require previously trained shape or appearance
models. Processing takes 30-40s on current hardware.
We evaluated our method on 11 clinical cardiac MRI data sets acquired
using cine balanced fast field echo. Linear regression of the
automatically segmented myocardium volume against manual segmentations
(performed by a radiologist) showed an RMS error of about 12ml.
The automatic segmentation of the heart's two ventricles from dynamic
("cine") cardiac anatomical images, such as 3D+time short-axis MRI, is of significant clinical importance. Previously published automated
methods have various disadvantages for routine clinical use. This work reports on a novel automatic segmentation method that is very fast and robust against anatomical variability and image contrast variations. The method is mostly image-driven: it fully exploits the information provided by modern 4D (3D+time) balanced Fast Field Echo (bFFE) cardiac anatomical MRI, and makes only a few plausible assumptions about the images and the imaged heart. Specifically, the method needs neither geometrical shape models nor complex gray-level appearance models. The method simply uses the two ventricles' contraction-expansion cycle, as well as the ventricles' spatial coherence along the time dimension. The performance of the cardiac ventricle segmentation method was demonstrated through a qualitative visual validation on 32 clinical exams: no gross failures for the left ventricle (right ventricle) were found on 32 (30) of the exams. Also, a clinical validation of the resulting quantitative cardiac functional parameters was performed against a manual quantification of 18 exams; the automatically computed Ejection Fraction (EF) correlated well with the manually computed one: linear regression with RMS=3.7% (RMS expressed in EF units).
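The Ejection Fraction validated above follows the standard definition EF = (EDV - ESV) / EDV, with EDV and ESV the end-diastolic (maximum) and end-systolic (minimum) cavity volumes. A minimal, hypothetical helper:

```python
def ejection_fraction(volumes_ml):
    """Ejection fraction (percent) from left-ventricular cavity volumes
    sampled over one cardiac cycle: EF = (EDV - ESV) / EDV * 100."""
    edv, esv = max(volumes_ml), min(volumes_ml)
    return (edv - esv) / edv * 100.0
```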
A technique is presented for segmentation and quantification of stenosed internal carotid arteries in three-dimensional contrast-enhanced magnetic resonance angiography. Segmentation with sub-voxel accuracy of the internal carotid arteries (ICAs) has been achieved via level-set techniques in which the central axis serves as initialization. The central axis is determined with minimal user interaction, viz. two user-defined points. Quantification is performed by measuring the cross-sectional area in the stenosis and at a reference segment in planes perpendicular to the central axis. The technique was applied to 52 ICAs. It is demonstrated that the method's reproducibility is better than the intra-observer agreement. Furthermore, the agreement between the presented method and the observers is better than the inter-observer agreement.
An extension to level set based segmentation is proposed for vascular tree delineation. The method starts with topology extraction, by a shape-constrained level set evolution steered by a strictly positive, image-based speed function to ensure some oversegmentation. Next, the skeleton of the resulting oversegmentation is determined, which is then used to initialise another level set, steered by a speed function with both negative and positive speed forces based on image features, to obtain a more accurate segmentation. The novelty of our approach lies in the shape constraint that is imposed implicitly on the first level set evolution: we repeatedly re-initialize this evolution with a topology-preserving skeleton of the current zero level set. We compare this method with a plain level set evolution steered by the same full-range speed function. Both are initialised by placing a single seed point at the root of the vessel tree. Pilot experiments on twelve multislice CT data sets of the Circle of Willis show that our method is capable of segmenting the smaller branches at the distal part of the vessel tree structures and has the potential to segment vessels distal to a severe stenosis or occlusion.
3D rotational coronary angiography (3DRCA) is one of the
application areas of 3D rotational X-Ray imaging. In this
application a sequence of projection images is acquired when the
C-arm is rotated around the patient. Since the heart is a moving
object, only projections can be used which correspond to the same
phase of the cardiac cycle. This significantly limits the number
of projections available for reconstruction causing streaking
artefacts in the reconstructed image due to angular undersampling.
The involvement of additional projections in the reconstruction
procedure from different viewing angles would increase the quality
of the volume data. Each successively acquired projection is slightly
different from the previous one for two reasons: first, there is
motion due to the deformation of the heart; second, there is an
induced deformation owing to the change in the projection angle.
The purpose of this work is to determine the
motion owing to the heart deformation, so as to compensate for
this motion in projection images in a different heart phase.
To this end, we propose to use concepts from coronary modeling in
combination with conventional reconstruction procedures. The
proposed method facilitates the use of additional projections in
the reconstruction. Motion-compensated reconstructed volume data
are presented for coronary arteries in an animal (pig) model.
For the diagnosis of ischemic heart disease, accurate quantitative analysis of the coronary arteries is important. In coronary angiography, a number of projections are acquired from which 3D models of the coronaries can be reconstructed. A significant limitation of the current 3D modeling procedures is the required user interaction for defining the centerlines of the vessel structures in the 2D projections. Currently, the 3D centerlines of the coronary tree structure are calculated based on interactively determined centerlines in two projections. For every interactively selected centerline point in a first projection, the corresponding point in a second projection has to be determined interactively by the user. The correspondence is obtained based on the epipolar geometry. In this paper, a method is proposed to retrieve all the information required for the modeling procedure by interactive determination of the 2D centerline points in only one projection. For every determined 2D centerline point, the corresponding 3D centerline point is calculated by analysis of the 1D gray-value functions along the corresponding epipolar lines in space for all available 2D projections. This information is then used to build a 3D representation of the coronary arteries using coronary modeling techniques. The approach is illustrated on the analysis of calibrated phantom and calibrated coronary projection data.
A method is presented to track the guide wire during endovascular interventions and to visualize it in 3D, together with the vasculature of the patient. The guide wire is represented by a 3D spline whose position is optimized using internal and external forces. For the external forces, the 3D spline is projected onto the biplane projection images that are routinely acquired. Feature images are constructed based on the enhancement of line-like structures in the projection images. A threshold is applied such that, where the probability of a pixel being part of the guide wire is sufficiently high, the feature image is used directly, whereas outside this region a distance transform is computed to improve the capture range of the method. In preliminary experiments, it is shown that some of the problems of the 2D tracking which were presented in previous work can successfully be circumvented using the 3D tracking method.
A new method has been developed that, based on tracking a guide wire
in monoplane fluoroscopic images, visualizes the approximate guide
wire position in the 3D vasculature, that is obtained prior to the
intervention with 3D rotational X-ray angiography (3DRA). The
method consists of four stages: (i) tracking of the guide wire in 2D
fluoroscopic imaging, (ii) projecting the guide wire from the 2D
fluoroscopic image back into the 3DRA image to determine possible
locations of the guide wire in 3D, (iii) determining the approximate
guide wire location in the 3DRA image based on image features, and
(iv) visualization of the vessel and guide wire location found. The
method has been evaluated using a 3DRA image of a vascular phantom
filled with contrast, and monoplane fluoroscopic images of the same
phantom without contrast and with a guide wire inserted. Evaluation
has been performed for different projection angles. Also, several
feature images for finding the optimal guide wire position have been
compared. Average localization errors for the guide wire and the
guide wire tip are in the range of a few millimetres, which shows
that 3D visualization of the guide wire with respect to
the vasculature as a navigation tool in endovascular procedures is
feasible.
3D Rotational X-ray (3DRX) imaging can be used to intraoperatively
acquire 3D volumes depicting bone structures in the patient. Registration of 3DRX to MR images, containing soft tissue
information, facilitates image-guided surgery on both soft tissue and
bone tissue information simultaneously. In this paper, automated noninvasive registration using maximization of mutual information is compared to conventional interactive and invasive point-based registration using the least-squares fit of corresponding point sets. Both methods were evaluated on 3DRX images (with a resolution of 0.62x0.62x0.62 mm3) and MRI images (with resolutions of 2x2x2 mm3, 1.5x1.5x1.5 mm3 and 1x1x1 mm3) of seven defrosted spinal segments implanted with six or seven markers. The markers were used for the evaluation of the registration transformations found by both point-based and mutual-information-based registration. The root-mean-squared error on markers that were left out during registration was calculated after transforming the marker set with the computed registration transformation. The results show that the noninvasive registration method performs significantly better (p≤0.01) for all MRI resolutions than point-based registration using four or five markers, which is the number of markers conventionally used in image-guided surgery systems.
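The mutual information that this registration maximizes can be estimated from a joint gray-value histogram of the two images. A minimal sketch; the bin count, the use of nats, and the absence of interpolation handling are assumptions not stated in the abstract.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) between two equally sampled images,
    estimated from their joint gray-value histogram:
    MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                 # joint probability
    px = p.sum(axis=1, keepdims=True)     # marginal of a
    py = p.sum(axis=0, keepdims=True)     # marginal of b
    nz = p > 0                            # skip empty bins
    return float(np.sum(p[nz] * np.log(p[nz] / (px * py)[nz])))
```

Maximizing this quantity over rigid transformation parameters yields the noninvasive registration compared against the marker-based approach.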
Cardiac MRI has improved the diagnosis of cardiovascular diseases by enabling the quantitative assessment of functional parameters. This requires an accurate identification of the myocardium of the left ventricle. This paper describes a novel segmentation technique for automated delineation of the myocardium. We propose to use prior knowledge by integrating a statistical shape model and a spatially varying feature model into a deformable mesh adaptation framework. Our shape model consists of a coupled, layered triangular mesh of the epi- and endocardium. It is adapted to the image by iteratively carrying out i) a surface detection and ii) a mesh reconfiguration by energy minimization. For surface detection a feature search is performed to find the point with the best feature combination. To accommodate the different tissue types the triangles of the mesh are labeled, resulting in a spatially varying feature model. The energy function consists of two terms: an external energy term, which attracts the triangles towards the features, and an internal energy term, which preserves the shape of the mesh. We applied our method to 40 cardiac MRI data sets (FFE-EPI) and compared the results to manual segmentations. A mean distance of about 3 mm with a standard deviation of 2 mm to the manual segmentations was achieved.
Segmentation of thrombus in abdominal aortic aneurysms is complicated by regions of low boundary contrast and by the presence of many neighboring structures in close proximity to the aneurysm wall. We present an automated method that is similar to the well known Active Shape Models (ASM), combining a three-dimensional shape model with a one-dimensional boundary appearance model. Our contribution is twofold: we developed a non-parametric appearance modeling scheme that effectively deals with a highly varying background, and we propose a way of generalizing models of curvilinear structures from small training sets.
In contrast with the conventional ASM approach, the new appearance model trains on both true and false examples of boundary profiles. The probability that a given image profile belongs to the
boundary is obtained using k-nearest-neighbor (kNN) probability density estimation. The performance of this scheme is compared to that of original ASMs, which minimize the Mahalanobis distance to the average true profile in the training set. The generalizability of the shape model is improved by modeling the object's axis deformation independently of its cross-sectional deformation.
A leave-one-out experiment was performed on 23 datasets. Segmentation using the kNN appearance model significantly outperformed the original ASM scheme; average volume errors were 5.9% and 46%, respectively.
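The kNN boundary-probability idea can be sketched as the fraction of true-boundary examples among the k nearest training profiles; the function name, label encoding, and k are illustrative assumptions.

```python
import numpy as np

def boundary_probability(profiles, labels, query, k=15):
    """Estimate P(boundary | profile) as the fraction of the k nearest
    training profiles (Euclidean distance) that are labeled as true
    boundary (labels: 1 = true boundary, 0 = false example)."""
    d = np.sum((profiles - query) ** 2, axis=1)   # squared distances
    nearest = labels[np.argsort(d)[:k]]           # labels of k nearest
    return float(np.mean(nearest))
```

During model fitting, this probability would replace the Mahalanobis distance of the original ASM scheme as the goodness-of-fit measure for candidate boundary positions.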
An automated method for the segmentation of thrombus in abdominal aortic aneurysms from CTA data is presented. The method is based on Active Shape Model (ASM) fitting in sequential slices, using the contour obtained in one slice as the initialisation in the adjacent slice. The optimal fit is defined by maximum correlation of grey value profiles around the contour in successive slices, in contrast to the original ASM scheme as proposed by Cootes and Taylor, where the correlation with profiles from training data is maximised. An extension to the proposed approach prevents the inclusion of low-intensity tissue and allows the model to refine to nearby edges. The applied shape models contain either one or two image slices, the latter explicitly restricting the shape change from slice to slice. To evaluate the proposed methods a leave-one-out experiment was performed, using six datasets containing 274 slices to segment. Both adapted ASM schemes yield significantly better results than the original scheme (p<0.0001). The extended slice correlation fit of a one-slice model showed best overall performance. Using one manually delineated image slice as a reference, on average a number of 29 slices could be automatically segmented with an accuracy within the bounds of manual inter-observer variability.
Blood pool agents (BPAs) for contrast-enhanced magnetic resonance angiography (CE-MRA) allow prolonged imaging times for higher contrast and resolution by imaging during the steady state, when the contrast agent is distributed through the complete vascular system. However, simultaneous venous and arterial enhancement hampers interpretation. It is shown that arterial and venous segmentation in this equilibrium phase can be achieved if the central arterial axis (CAA) and central venous axis (CVA) are known. Since the CAA cannot straightforwardly be obtained from the steady-state data, images acquired during the first pass of the contrast agent can be utilized to determine the CAA with minimal user initialization. Utilizing the CAA to provide a rough arterial segmentation, the CVA can subsequently be determined from the steady-state dataset. The final segmentations of the arteries and veins are achieved by simultaneously evolving two level-sets in the steady-state dataset starting from the CAA and CVA.
A semi-automatic method for localisation and segmentation of bifurcated aortic endografts in CTA images is presented. The graft position is established through detection of radiopaque markers sewn on the outside of the graft. The user indicates the first and the last marker, whereupon the rest of the markers are detected automatically by second order scaled derivative analysis combined with prior knowledge of graft shape and marker configuration. The marker centres obtained approximate the graft sides and central axis. The graft boundary is determined, either in the original CT slices or in reformatted slices orthogonal to the local graft axis, by maximizing the local gradient in the radial direction along a deformable contour passing through both sides. The method has been applied to ten CTA images. In all cases, an adequate segmentation is obtained. Compared to manual segmentations an average similarity (i.e. relative volume of overlap) of 0.93 +/- 0.02 for the graft body and 0.84 +/- 0.05 for the limbs is found.
A semi-automatic segmentation method for Tuberous Sclerosis (TS) lesions in the brain has been developed. Both T1 images and Fluid Attenuated Inversion Recovery (FLAIR) images are integrated in the segmentation procedure. The segmentation procedure is mainly based on the notion of fuzzy connectedness. This approach uses the two basic concepts of adjacency and affinity to form a fuzzy relation between voxels in the image. The affinity is defined using two quantities that are both based on characteristics of the intensities in the lesion and surrounding brain tissue (grey and white matter). The semi-automatic method has been compared to results of manual segmentation. Manual segmentation is prone to inter-observer and intra-observer variability. This was especially true for this particular study, where large variations were observed, which implies that a gold standard for comparison was not available. The method did perform within the variability of the observers and therefore has the potential to improve the reproducibility of quantitative measurements.
This paper describes efforts to study the motion of the left ventricle (LV) over the entire heart cycle directly from 4D MRI-data. A set of robust spatiotemporal operators with both a spatial and a temporal degree of freedom is constructed to provide a complete spatiotemporal description at a certain fixed location. This information is used to track features that correspond to anatomical structures of the LV. In order to measure the change of physical quantities such as the curvedness of the endocardial wall, we need its position in consecutive frames. In the case that a flow field is known we can use spatiotemporally oriented filters. Using a generalization of the optic flow constraint equation, which allows for a divergence term in the flow field and therefore is particularly suited for MRI-data, we calculated the flow field under the assumption of normal flow. Examples are included.
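The generalization of the optic flow constraint equation mentioned above corresponds to a continuity-equation form: the classical brightness-constancy constraint is extended with a term for the divergence of the flow field. A sketch in standard notation (the symbols are assumed, as the abstract gives none):

```latex
% Classical optic flow constraint (brightness constancy):
%   \frac{\partial I}{\partial t} + \nabla I \cdot \mathbf{v} = 0
% Generalized constraint allowing a divergent flow field:
\frac{\partial I}{\partial t} + \nabla \cdot (I\,\mathbf{v})
  = \frac{\partial I}{\partial t} + \nabla I \cdot \mathbf{v}
    + I\,(\nabla \cdot \mathbf{v}) = 0
```

With the divergence term, local expansion or compression of the imaged tissue no longer violates the constraint, which is why this form suits MRI data of the contracting left ventricle.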