This paper addresses the localization of anatomical structures in medical images by a Generalized Hough Transform (GHT). As localization is often a prerequisite for subsequent model-based segmentation, it is important to assess whether or not the GHT was able to locate the desired object; by construction, the GHT does not make this distinction. We present an approach to detect incorrect GHT localizations by deriving collective features of the contributing GHT model points and by training a Support Vector Machine (SVM) classifier. On a training set of 204 cases, we demonstrate that classification errors as low as 3% are achievable for the detection of incorrect localizations, about one third of the observed intrinsic GHT localization error.
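As an illustration, the classification step can be sketched as follows; the two collective features and all numbers below are synthetic stand-ins for illustration, not the features or data of the paper:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for two collective features of the contributing
# GHT model points (e.g. fraction of contributing points, vote peak
# sharpness) -- the actual features are assumptions for illustration.
n = 204  # training-set size reported in the abstract
X_correct = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(n // 2, 2))
X_incorrect = rng.normal(loc=[0.3, 0.2], scale=0.1, size=(n - n // 2, 2))
X = np.vstack([X_correct, X_incorrect])
y = np.array([1] * (n // 2) + [0] * (n - n // 2))  # 1 = correct localization

# Train an SVM and estimate the classification error by cross-validation.
clf = SVC(kernel="rbf", C=1.0)
error_rate = 1.0 - cross_val_score(clf, X, y, cv=5).mean()
```

On such well-separated synthetic features the cross-validated error approaches zero; the 3% reported in the abstract refers to the real features and data.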
Diagnosis and treatment planning for prostate and cervix cancer based on MR images benefit from superior soft-tissue contrast compared to CT images. For these images, an automatic delineation of the prostate or cervix and of the organs at risk such as the bladder is highly desirable. This paper describes a method for bladder segmentation based on a watershed transform over high image-gradient values and gray-value valleys, followed by the classification of watershed regions into bladder contents and tissue by a graph-cut algorithm. The obtained results are superior to those of a simple region-after-region classification.
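A minimal sketch of the baseline region-after-region step, assuming SciPy's `watershed_ift` as the watershed implementation and an invented gray-value threshold for the per-region decision (the paper's graph-cut step replaces this local decision with a global one):

```python
import numpy as np
from scipy import ndimage

# Toy MR-like slice: bright bladder-like disk on darker tissue.
yy, xx = np.mgrid[:64, :64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 18 ** 2, 200, 40).astype(np.uint8)
grad = ndimage.morphological_gradient(img, size=3).astype(np.uint8)

# Watershed on the gradient image from two seeds.
markers = np.zeros(img.shape, np.int16)
markers[32, 32] = 1   # seed inside the bladder-like region
markers[2, 2] = 2     # seed in background tissue
regions = ndimage.watershed_ift(grad, markers)

# Region-after-region classification by mean gray value (threshold assumed).
bladder = np.zeros(img.shape, bool)
for lbl in np.unique(regions):
    region = regions == lbl
    if img[region].mean() > 120:  # bright fluid vs darker tissue
        bladder |= region
```
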
KEYWORDS: Image segmentation, Positron emission tomography, Tissues, Signal attenuation, Magnetic resonance imaging, Lung, Monte Carlo methods, Image processing, Image processing algorithms and systems, Breast
Recently introduced combined PET/MR scanners need to handle the specific problem that a limited MR field of view
sometimes truncates arm or body contours, which prevents an accurate calculation of PET attenuation correction maps.
Such maps of attenuation coefficients over body structures are required for a quantitatively correct PET image
reconstruction. This paper addresses the problem by presenting a method that segments a preliminary PET reconstruction, the time-of-flight non-attenuation-corrected (ToF-NAC) image, and by outlining a processing pipeline that compensates the arm or body truncation with this segmentation. The impact of this truncation compensation is
demonstrated together with a comparison of two segmentation methods, simple gray value threshold segmentation and a
watershed algorithm on a gradient image. Our results indicate that with truncation compensation a clinically tolerable
quantitative SUV error is robustly achievable.
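The simpler of the two compared segmentation methods, a gray-value threshold on the ToF-NAC image, can be sketched as follows; the threshold fraction and the toy image are assumptions:

```python
import numpy as np

def threshold_body_mask(img, frac=0.1):
    """Segment the body contour from a ToF-NAC PET slice by a simple
    gray-value threshold (a fixed fraction of the maximum is an assumed
    choice of threshold)."""
    return img > frac * img.max()

# Toy 2D "slice": bright body disk on a dark background.
yy, xx = np.mgrid[:64, :64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2, 100.0, 1.0)
mask = threshold_body_mask(img)
```
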
With the recent introduction of combined Magnetic Resonance Imaging (MRI) / Positron Emission Tomography (PET)
systems, the generation of attenuation maps for PET based on MR images gained substantial attention. One approach for
this problem is the segmentation of structures on the MR images with subsequent filling of the segments with respective
attenuation values. Structures of particular interest for the segmentation are the pelvic bones, since they are among the most strongly absorbing structures for many applications and can at the same time serve as valuable landmarks for further structure identification. In this work, the model-based segmentation of the pelvic bones on gradient-echo MR
images is investigated. A processing chain for the detection and segmentation of the pelvic bones is introduced, and the
results are evaluated using CT-generated "ground truth" data. The results indicate that a model based segmentation of the
pelvic bones is feasible with moderate requirements on the pre- and postprocessing steps of the segmentation.
Automated segmentation of lung lobes in thoracic CT images has relevance for various diagnostic purposes like
localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung
cancer, the two purposes are closely related. The main steps of the segmentation pipeline described in this paper are lung detection, lung segmentation based on a watershed algorithm, and lung lobe segmentation based
on mesh model adaptation. The segmentation procedure was applied to data sets from the database of the Image Database Resource Initiative (IDRI), which currently contains over 500 thoracic CT scans with delineated lung nodule annotations.
We visually assessed the reliability of the individual segmentation steps, finding a success rate of 98% for lung detection
and 90% for lung delineation. For about 20% of the cases we found the lobe segmentation not to be anatomically
plausible. A modeling confidence measure is introduced that gives a quantitative indication of the segmentation quality.
For a demonstration of the segmentation method we studied the correlation between emphysema score and malignancy
on a per-lobe basis.
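A common histogram-based emphysema score is the fraction of lung voxels below a fixed HU threshold (e.g. the LAA-950 score); the abstract does not specify which scores were used, so the threshold and toy data below are illustrative assumptions:

```python
import numpy as np

def emphysema_score(hu_values, threshold=-950):
    """Histogram-based emphysema score: fraction of lung voxels below a
    HU threshold (-950 HU, the common 'LAA-950' cutoff, is assumed)."""
    return float((np.asarray(hu_values) < threshold).mean())

# Toy lobe: 10% of voxels emphysematous (centered well below -950 HU).
rng = np.random.default_rng(1)
lobe = np.concatenate([
    rng.normal(-870, 20, size=9000),   # normal parenchyma
    rng.normal(-980, 10, size=1000),   # emphysematous voxels
])
score = emphysema_score(lobe)
```
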
The bronchial tree is of direct clinical importance in the context of diseases such as chronic obstructive pulmonary disease (COPD). It also constitutes a reference structure for object localization in the lungs, and it provides access to lung tissue in, e.g., bronchoscope-based procedures for diagnosis and therapy. This paper
presents a comprehensive anatomical model for the bronchial tree, including statistics of position, relative and absolute
orientation, length, and radius of 34 bronchial segments, going beyond previously published results. The model has been
built from 16 manually annotated CT scans, covering several branching variants. The model is represented as a
centerline/tree structure but can also be converted into a surface representation. Possible model applications are either to
anatomically label extracted bronchial trees or to improve the tree extraction itself by identifying missing segments or
sub-trees, e.g., if located beyond a bronchial stenosis. Bronchial tree labeling is achieved using a naïve Bayesian
classifier based on the segment properties contained in the model in combination with tree matching. The tree matching
step makes use of branching variations covered by the model. An evaluation of the model has been performed in a leave-one-out manner. In total, 87% of the branches resulting from preceding airway tree segmentation could be correctly labeled. The individualized model enables the detection of missing branches, allowing a targeted search, e.g., a local rerun of the tree segmentation.
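The naïve Bayesian labeling of segments from model statistics can be sketched as follows; the two segment entries and their length/radius statistics are invented for illustration:

```python
import numpy as np

# Toy per-segment Gaussian statistics (mean, std) for length and radius,
# mirroring the idea of labeling from segment properties stored in the
# model; the values are invented, not taken from the paper's model.
model = {
    "RB1": {"length": (15.0, 3.0), "radius": (3.0, 0.5)},
    "RB2": {"length": (25.0, 4.0), "radius": (2.0, 0.4)},
}

def log_gauss(x, mu, sigma):
    """Log of a univariate Gaussian density."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def label_segment(length, radius):
    """Assign the label with the highest naive-Bayes log-likelihood,
    treating length and radius as conditionally independent."""
    best, best_ll = None, -np.inf
    for name, stats in model.items():
        ll = (log_gauss(length, *stats["length"])
              + log_gauss(radius, *stats["radius"]))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

lbl = label_segment(14.0, 3.1)  # short, wide segment
```
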
Presence of emphysema is recognized as one of the most significant factors in risk models for the
prediction of lung cancer. Therefore, an automatically computed emphysema score would be a prime candidate as an
additional numerical feature for computer aided diagnosis (CADx) for indeterminate pulmonary nodules. We have
applied several histogram-based emphysema scores to 460 thoracic CT scans from the IDRI CT lung image database,
and analyzed the emphysema scores in conjunction with 3000 nodule malignancy ratings of 1232 pulmonary nodules
made by expert observers. Despite emphysema being a known risk factor, we found no impact of a patient's emphysema score on the readers' malignancy ratings of nodules in that patient. We also found no correlation between the number of expert-detected nodules in a patient and the patient's emphysema score, nor between the relative craniocaudal location of the nodules and their malignancy rating. The inter-observer agreement of the expert ratings was
excellent on nodule diameter (as derived from manual delineations), good for calcification, and only modest for
malignancy and shape descriptions such as spiculation, lobulation, margin, etc.
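Inter-observer agreement on categorical ratings such as calcification or malignancy can be quantified with a chance-corrected statistic; Cohen's kappa is one common choice (the abstract does not state which agreement statistic was used), sketched here on toy two-rater data:

```python
import numpy as np

def cohens_kappa(a, b, n_categories):
    """Cohen's kappa for two raters over categorical ratings:
    observed agreement corrected for agreement expected by chance."""
    conf = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        conf[i, j] += 1
    conf /= conf.sum()
    p_observed = np.trace(conf)
    p_chance = conf.sum(axis=1) @ conf.sum(axis=0)
    return float((p_observed - p_chance) / (1 - p_chance))

# Two raters agreeing on 9 of 10 three-level ratings (toy data).
rater1 = [0, 0, 1, 1, 2, 2, 2, 1, 0, 2]
rater2 = [0, 0, 1, 1, 2, 2, 2, 1, 0, 1]
kappa = cohens_kappa(rater1, rater2, 3)
```
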
An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the
quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are
visible in modern multi-slice CT scanners, their contrast in the CT image often does not separate the lobes completely.
This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses
general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which
even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of
the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a
special fissure feature image, and a performance evaluation over a test data set showing an average segmentation
accuracy of 1 to 3 mm.
Respiratory motion is a complicating factor in radiation therapy, tumor ablation, and other treatments of the
thorax and upper abdomen. In most cases, the treatment requires detailed knowledge of the location of the organ under investigation. One approach to reduce the uncertainty of organ motion caused by breathing is
to use prior knowledge of the breathing motion. In this work, we extract lung motion fields of seven patients
in 4DCT inhale-exhale images using an iterative shape-constrained deformable model approach. Since data was
acquired for radiotherapy planning, images of the same patient over different weeks of treatment were available.
Although respiratory motion is repetitive in character, it is well known that a patient's variability in breathing pattern impedes motion estimation. A detailed motion field analysis is performed in order to investigate the reproducibility of breathing motion over the weeks of treatment. For that purpose, parameters significant for breathing motion are derived. The analysis of the extracted motion fields provides a basis for further
breathing motion prediction. Patient-specific motion models are derived by averaging the extracted motion
fields of each individual patient. The obtained motion models are adapted to each patient in a leave-one-out test in order to simulate motion estimation on unseen data. By using patient-specific mean motion models, 60% of
the breathing motion can be captured on average.
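The leave-one-out evaluation of a patient-specific mean motion model can be sketched as follows; the synthetic motion fields and the definition of "captured motion" as one minus the relative residual are assumptions for illustration:

```python
import numpy as np

# Synthetic breathing-motion fields: 5 acquisitions of displacement
# vectors (mm) at 100 landmark positions, sharing a common mean field.
rng = np.random.default_rng(2)
true_mean = rng.normal(0, 5, size=(100, 3))
fields = true_mean + rng.normal(0, 2, size=(5, 100, 3))

captured = []
for i in range(len(fields)):
    # Mean model from all other acquisitions (leave-one-out).
    mean_model = np.delete(fields, i, axis=0).mean(axis=0)
    residual = np.linalg.norm(fields[i] - mean_model, axis=1).mean()
    original = np.linalg.norm(fields[i], axis=1).mean()
    captured.append(1.0 - residual / original)  # fraction of motion captured
capture_fraction = float(np.mean(captured))
```
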
KEYWORDS: Cartilage, Image segmentation, Bone, 3D modeling, Data modeling, Magnetic resonance imaging, Image processing, 3D image processing, Error analysis, Medical research
We present a fully automatic method for segmentation of knee joint cartilage from fat suppressed MRI. The method first
applies 3-D model-based segmentation technology, which allows reliable segmentation of the femur, patella, and tibia by
iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to
position deformable cartilage models for each of the three bones with reference to the segmented bone models. After
initialization, the cartilage models are refined by automatic iterative adaptation to the image data based on gray-value
gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83±6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9±7% as the secondary endpoint. Since cartilage is a thin structure, even small distance deviations result in large per-voxel errors, rendering the primary endpoint a hard criterion.
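The last point is easy to see with a toy example: for a one-voxel-thick structure, shifting the segmentation by a single voxel yields zero per-voxel sensitivity while the volume error stays zero.

```python
import numpy as np

def voxel_sensitivity(auto_mask, manual_mask):
    """Per-voxel sensitivity: fraction of manually labeled voxels that
    the automatic segmentation also labels."""
    tp = np.logical_and(auto_mask, manual_mask).sum()
    return tp / manual_mask.sum()

def volume_error(auto_mask, manual_mask):
    """Relative gross-volume error between the two segmentations."""
    return abs(int(auto_mask.sum()) - int(manual_mask.sum())) / manual_mask.sum()

# One-voxel-thick "cartilage" sheet, shifted by a single voxel:
# small distance error, but zero per-voxel overlap.
manual = np.zeros((10, 10, 10), bool); manual[5, :, :] = True
auto = np.zeros((10, 10, 10), bool); auto[6, :, :] = True
sens = voxel_sensitivity(auto, manual)
vol_err = volume_error(auto, manual)
```
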
During medical imaging and therapeutic interventions, pulmonary structures are in general subject to cardiac and respiratory motion. This motion potentially leads to artefacts and blurring in the resulting images and to uncertainties during interventions. This paper presents a new automatic approach for surface-based
motion tracking of pulmonary structures and reports on the results for cardiac and respiratory induced motion.
The method applies an active shape approach to ad-hoc generated surface representations of the pulmonary
structures for phase-to-phase surface tracking. The input to the method is multi-phase CT data, either cardiac- or respiratory-gated. The iso-surface representing the transition from air or lung parenchyma to soft tissue
is triangulated for a selected phase p0. An active shape procedure is initialised in the image of phase p1 using
the generated surface in p0. The internal energy term used penalizes shape deformation relative to p0.
The process is iterated for all phases pi to pi+1 of the complete cycle. Since the mesh topology is the same for
all phases, the vertices of the triangular mesh can be treated as pseudo-landmarks defining tissue trajectories.
A dense motion field is interpolated. The motion field was especially designed to estimate the error margins
for radiotherapy. In the case of respiratory motion extraction, a validation on ten biphasic thorax CT images
(2.5 mm slice distance) was performed with expert landmarks placed at vessel bifurcations. The mean error in landmark position was below 2.6 mm. We further applied the method to ECG-gated images and estimated the
influence of the heart beat on lung tissue displacement.
Positron Emission Tomography (PET) images provide functional or metabolic information from areas of high concentration of the [18F]fluorodeoxyglucose (FDG) tracer, the "hot spots". These hot spots are easily detected by eye, but their delineation and size determination, required e.g. for diagnosis and staging of cancer, is a tedious task that demands automation. The approach to such an automated hot spot segmentation described in this paper comprises
three steps: region-of-interest detection by the watershed transform, heart identification by an evaluation of scan lines, and the final segmentation of hot spot areas by a local threshold. The region-of-interest detection is the essential step, since it localizes the subsequent hot spot identification and final segmentation. The heart identification is an example of how to differentiate between hot spots. Finally, we demonstrate the combination of PET and CT data. Our method is also applicable to other modalities such as SPECT.
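The final local-threshold step can be sketched as follows; the cutoff of 50% of the local maximum is an assumption (relative thresholds in this range are common in PET), as is the toy hot spot:

```python
import numpy as np

def segment_hot_spot(img, roi_mask, frac=0.5):
    """Local-threshold segmentation: within a detected region of
    interest, keep voxels above a fraction of the local maximum
    (the 50% cutoff is an assumed, commonly used choice)."""
    local_max = img[roi_mask].max()
    return roi_mask & (img >= frac * local_max)

# Toy hot spot: Gaussian blob of FDG uptake inside a detected ROI.
yy, xx = np.mgrid[:33, :33]
img = 10.0 * np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 18.0)
roi = np.hypot(yy - 16, xx - 16) < 12
seg = segment_hot_spot(img, roi)
```
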
For differential diagnosis of pulmonary nodules, assessment of contrast enhancement at chest CT scans after administration of contrast agent has been suggested. Likelihood of malignancy is considered very low if the contrast enhancement is below a certain threshold (10-20 HU). Automated average density measurement methods have been developed for that purpose. However, a certain fraction of malignant nodules does not exhibit significant enhancement when averaged over the whole nodule volume. The purpose of this paper is to test a new method for reducing false negative results. We have investigated a method that shows not only a single averaged contrast enhancement number, but a more detailed enhancement curve for each nodule, giving the enhancement as a function of distance to the boundary. A test set consisting of 11 malignant and 11 benign pulmonary lesions was used for validation, with diagnoses known from biopsy or from follow-up of more than 24 months. For each nodule, dynamic CT scans were available: the unenhanced native scan and scans at 60, 120, 180, and 240 seconds after onset of contrast injection (1-4 mm reconstructed slice thickness). The suggested method for measurement and visualization of contrast enhancement as radially resolved curves reduced false negative results (apparently unenhancing but truly malignant nodules) and thus improved sensitivity. It proved to be a valuable tool for the differential diagnosis between malignant and benign lesions using dynamic CT.
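The radially resolved enhancement curve can be sketched with a Euclidean distance transform, binning voxels by their distance to the nodule boundary; the shell binning and the toy rim-enhancing nodule are assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def radial_enhancement_curve(enhancement, mask, n_bins=4):
    """Mean enhancement as a function of distance to the nodule boundary,
    binned into shells from the boundary (first bin) to the core (last)."""
    dist = distance_transform_edt(mask)  # distance to boundary, inside mask
    edges = np.linspace(0, dist.max(), n_bins + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = mask & (dist > lo) & (dist <= hi)
        curve.append(enhancement[shell].mean() if shell.any() else np.nan)
    return np.array(curve)

# Toy nodule with rim-only enhancement: an average over the whole volume
# would dilute the signal, while the outer shell shows it clearly.
yy, xx = np.mgrid[:41, :41]
r = np.hypot(yy - 20, xx - 20)
mask = r <= 15
enh = np.where((r > 10) & (r <= 15), 30.0, 0.0)  # HU enhancement in the rim
curve = radial_enhancement_curve(enh, mask)
```
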
The purpose of this paper is to present an automated method for the extraction of the pulmonary vessel tree from multi-slice CT data. Furthermore, we investigate a method for the separation of pulmonary arteries from veins. The vessel tree extraction is performed by a seed-point-based front-propagation algorithm. This algorithm is based on a methodology similar to the bronchial tree segmentation and coronary artery tree extraction methods presented at earlier SPIE conferences. Our method for artery/vein separation is based on the fact that the pulmonary artery tree accompanies the bronchial tree. For each extracted vessel segment, we evaluate a measure of "arterialness". This measure combines two components: on the one hand, a method for identifying candidate positions of a bronchus running in the vicinity of a given vessel, and on the other hand, a co-orientation measure for the vessel segment and the bronchus candidates. The latter component rewards vessels running parallel to a nearby bronchus. The spatial orientation of vessel segments and bronchi is estimated by applying the structure tensor to the local gray-value neighbourhood. In our experiments we used multi-slice CT datasets of the lung acquired by Philips IDT 16-slice and Philips Brilliance 40-slice scanners. It can be shown that the proposed measure reduces the number of pulmonary veins falsely included in the arterial tree.
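The two orientation ingredients can be sketched as follows: a structure-tensor estimate of the dominant local direction, and a sign-independent co-orientation measure; the toy tube and the patch size are assumptions:

```python
import numpy as np

def co_orientation(v1, v2):
    """Co-orientation measure: |cos| of the angle between two direction
    vectors; 1 for parallel (independent of sign), 0 for perpendicular."""
    v1 = np.asarray(v1, float); v2 = np.asarray(v2, float)
    return abs(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

def principal_direction(patch):
    """Dominant axis of a tubular gray-value patch via the structure
    tensor: gradients vanish along the tube, so its axis is the
    eigenvector of the smallest eigenvalue."""
    grads = np.gradient(patch.astype(float))
    g = np.stack([gi.ravel() for gi in grads])  # (3, N) gradient samples
    tensor = g @ g.T                            # 3x3 structure tensor
    w, v = np.linalg.eigh(tensor)               # ascending eigenvalues
    return v[:, 0]                              # smallest-eigenvalue eigenvector

# Toy vessel: a Gaussian tube along the z-axis (first array axis).
zz, yy, xx = np.mgrid[:21, :21, :21]
tube = np.exp(-((yy - 10.0) ** 2 + (xx - 10.0) ** 2) / 8.0)
direction = principal_direction(tube)
score = co_orientation(direction, [1.0, 0.0, 0.0])  # vs the z-axis
```
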
In modern multi-slice CT scanners, the increasing amount of data also increases the demand for image processing methods that assist diagnosis. For the detection and classification of lung nodules in a follow-up study, it is very helpful to have the slices of a previous scan aligned with the slices of the current scan. This is a typical image registration problem, for which different types of solutions exist. We investigated the accuracy and computation times of a rigid-body, an affine, and a spline-based elastic registration approach on the complete data set, and compared the results to a method where the registration was preceded by a segmentation of the lung volume. The registration quality was determined on a ground truth of previously determined lung nodule locations by measuring the average distance of corresponding nodules. It was found that an affine registration is slightly better than a rigid-body registration, and that both are much faster than the elastic registration, which in turn showed the best registration quality. A good compromise was the affine registration on a previously segmented lung volume, which in total is not much slower than registration without segmentation but shows better alignment and higher robustness.
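The evaluation metric, the average distance of corresponding nodule locations after registration, is straightforward to state; the affine transform and nodule coordinates below are invented for illustration:

```python
import numpy as np

def mean_nodule_distance(locs_a, locs_b):
    """Registration-quality metric: average Euclidean distance between
    corresponding nodule locations in the two scans."""
    diff = np.asarray(locs_a) - np.asarray(locs_b)
    return float(np.linalg.norm(diff, axis=1).mean())

def apply_affine(points, A, t):
    """Map points through an affine transform x -> A x + t."""
    return np.asarray(points) @ np.asarray(A).T + np.asarray(t)

# Toy follow-up: current-scan nodules are the prior ones shifted by 5 mm.
prior = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0], [70.0, 10.0, 20.0]])
current = prior + np.array([5.0, 0.0, 0.0])

before = mean_nodule_distance(prior, current)
after = mean_nodule_distance(
    apply_affine(prior, np.eye(3), [5.0, 0.0, 0.0]), current)
```
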
Multi-slice CT (MSCT) scanners have the advantage of high and isotropic image resolution, which broadens the range of examinations for CT angiography (CTA). A very important method to present the large amount of high-resolution 3D data is visualization by maximum intensity projections (MIP). A problem with MIP projections in angiography is that bones often hide the vessels of interest, especially the skull and vertebral column. Software tools for a manual selection of bone regions and their suppression in the MIP are available, but processing is time-consuming and tedious. A highly computer-assisted or even fully automated suppression of bones would considerably speed up the examination and probably increase the number of examined cases. In this paper we investigate the suppression (or removal) of bone regions in 3D CT data sets for vascular examinations of the head with a visualization of the carotids and the circle of Willis.
Most of the previous approaches to computer-aided lung nodule detection have been designed for and tested on conventional CT with slice thicknesses of 5-10 mm. In this paper, we report results of a specifically designed detection algorithm applied to 1 mm slice data from multi-slice CT. We see two principal advantages of high-resolution CT data with respect to computer-aided lung nodule detection: First, the algorithm can evaluate the fully isotropic three-dimensional shape information of potential nodules and thus resolve ambiguities between pulmonary nodules and vessels. Secondly, the use of 1 mm slices allows the direct utilization of the Hounsfield values due to the absence of the partial volume effect (for objects larger than 1 mm). Computer-aided detection of small lung nodules (>= 2 mm) may thus experience a break-through in clinical relevance with the use of high-resolution CT. The detection algorithm has been applied to image data sets from patients in clinical routine with a slice thickness of 1 mm and reconstruction intervals between 0.5 and 1 mm, with hard- and soft-tissue reconstruction filters. Each thorax data set comprises 300-500 images. More than 20,000 CT slices from 50 CT studies were analyzed by the computer program, and 12 studies have so far been reviewed by an experienced radiologist. Of 203 nodules with diameter >= 2 mm (including pleura-attached nodules), the detection algorithm found 193 (sensitivity of 95%), with 4.4 false positives per patient. Nodules attached to the lung wall are algorithmically harder to detect, but we observe the same high detection rate. The false positive rate drops below 1 per study for nodules >= 4 mm.
In radiographic images the actual region of interest (ROI), i.e. the collimation field, is often smaller than the overall image detector area. Collimation devices (shutters) and lead aprons confine the X-ray beam to the anatomically relevant region. Therefore, large shuttered areas with low radiation intensity may exist in the image. This background may, however, show strong radiation scatter features, so that simple thresholding or histogram analysis approaches fail. Automated recognition of the collimation field is necessary for optimal contrast adjustment of the monitor and film-printer representation, and it accelerates the workflow in comparison to manual ROI settings. In our approach we first identify several hundred shutter-edge candidates by means of a Hough transform. Then several thousand ROI hypotheses are checked. The objective is to simultaneously maximize the enclosed area, the enclosed image intensity, and the enclosed second derivative (Laplace value) of the intensity. The maximization of the Laplace area integral has been found to be the single most powerful feature for finding the true collimation field. The approach was successfully tested on image sets from clinical routine.
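The Laplace area integral, reported as the single most powerful feature, can be sketched as an ROI score; restricting the score to this one term (the full objective also includes area and intensity) and the toy image are simplifications:

```python
import numpy as np
from scipy.ndimage import laplace

def roi_score(img, top, bottom, left, right):
    """Score a rectangular collimation-field hypothesis by the enclosed
    Laplace area integral (absolute second derivative summed over the ROI).
    The true field contains structure, so it accumulates a high score."""
    lap = np.abs(laplace(img.astype(float)))
    return float(lap[top:bottom, left:right].sum())

# Toy radiograph: noisy collimated field inside a dark shuttered border.
rng = np.random.default_rng(3)
img = np.zeros((100, 100))
img[20:80, 30:90] = 100.0 + rng.normal(0.0, 10.0, size=(60, 60))

good = roi_score(img, 20, 80, 30, 90)   # hypothesis on the true field
bad = roi_score(img, 0, 20, 0, 30)      # hypothesis in the shuttered area
```
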
The theory of stochastic signal processing describes signals containing statistical fluctuations by their statistical properties, such as mean and variance. Since the fluctuations, or simply the noise, in coherent X-ray scatter signals are known to follow Poisson statistics, this theory can be used to derive detection properties under different system conditions. This paper shows how the Poisson noise can be transformed into white noise and that a false detection rate can be calculated for signals containing white noise. Furthermore, the application to the optimization of collimator design is studied.
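One standard way to transform Poisson noise into approximately white, unit-variance Gaussian noise is the Anscombe transform; whether the paper uses this particular transform is not stated, so it serves here as an illustrative sketch:

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing Anscombe transform for Poisson counts:
    for sufficiently large mean counts the transformed noise is
    approximately Gaussian with unit variance ("white")."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

# Poisson counts with mean 50: the raw variance equals the mean (50),
# but after the transform the noise standard deviation is close to 1.
rng = np.random.default_rng(4)
counts = rng.poisson(lam=50.0, size=100_000)
noise_std = float(anscombe(counts).std())
```
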