In this paper, we describe and compare methods for automatically identifying individual vertebrae in arbitrary
CT images. The identification is an essential precondition for a subsequent model-based segmentation, which is
used in a wide range of orthopedic, neurological, and oncological applications, e.g., spinal biopsies or the insertion
of pedicle screws. Since adjacent vertebrae show similar characteristics, automated labeling of the spinal
column is a very challenging task, especially if no surrounding reference structures can be taken into account.
Furthermore, vertebra identification is complicated by the fact that many images are restricted to a very
limited field of view and may contain only a few vertebrae. We propose and evaluate two methods for automatically
labeling the spinal column by evaluating similarities between given models and vertebral objects. In one method,
object boundary information is taken into account by applying a Generalized Hough Transform (GHT) for each
vertebral object. In the other method, appearance models containing mean gray value information are registered
to each vertebral object, using cross correlation and local correlation as similarity measures for the optimization function.
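As a minimal illustration of the boundary-based method, the following sketch implements GHT voting against a precomputed R-table. It is our own simplified 2-D reconstruction, not the authors' implementation; the R-table is assumed to have been built beforehand from the contour of a vertebra model.

```python
import numpy as np

def ght_vote(edge_points, gradient_dirs, r_table, accumulator_shape, n_bins=36):
    """Generalized Hough Transform voting (illustrative 2-D sketch).

    edge_points: (M, 2) array of (y, x) boundary-point coordinates.
    gradient_dirs: (M,) gradient orientation at each boundary point, in radians.
    r_table: dict mapping a quantized orientation bin to a list of (dy, dx)
        offsets from boundary points to the model's reference point.
    Returns an accumulator whose peak marks the most likely reference-point
    location; the peak height can serve as a boundary-based similarity score.
    """
    acc = np.zeros(accumulator_shape, dtype=np.int32)
    for (y, x), theta in zip(edge_points, gradient_dirs):
        # Quantize the gradient direction into one of n_bins orientation bins.
        bin_idx = int((theta % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        # Each stored offset casts one vote for a candidate reference point.
        for dy, dx in r_table.get(bin_idx, ()):
            ry, rx = int(y + dy), int(x + dx)
            if 0 <= ry < acc.shape[0] and 0 <= rx < acc.shape[1]:
                acc[ry, rx] += 1
    return acc
```

Comparing the accumulator peak heights obtained with different vertebra models is one way to score each candidate label.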
The GHT is advantageous in terms of computational performance but falls short with respect to the identification
rate. Using rigid image registration with local correlation as the similarity measure, the vertebral column was
correctly labeled in 93% of a test set of 63 disparate input images.
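For the registration-based method, local correlation was the better-performing similarity measure. The sketch below is a hedged simplification (2-D, non-overlapping windows, window size assumed): it averages windowed normalized cross-correlation scores, so locally varying contrast between the appearance model and the image does not dominate the global score.

```python
import numpy as np

def local_correlation(fixed, moving, window=5):
    """Local correlation between two equally sized image patches (sketch).

    Normalized cross-correlation is computed in small non-overlapping
    windows and averaged, tolerating locally varying contrast between
    an appearance model and a vertebral object.
    """
    assert fixed.shape == moving.shape
    half = window // 2
    scores = []
    for y in range(half, fixed.shape[0] - half, window):
        for x in range(half, fixed.shape[1] - half, window):
            f = fixed[y - half:y + half + 1, x - half:x + half + 1].ravel()
            m = moving[y - half:y + half + 1, x - half:x + half + 1].ravel()
            # Subtract the local means so only local structure is compared.
            f = f - f.mean()
            m = m - m.mean()
            denom = np.linalg.norm(f) * np.linalg.norm(m)
            if denom > 1e-12:
                scores.append(float(np.dot(f, m) / denom))
    return float(np.mean(scores)) if scores else 0.0
```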
Automatic segmentation is a prerequisite to efficiently analyze the large amount of image data produced by modern imaging
modalities, e.g., computed tomography (CT), magnetic resonance (MR) and rotational X-ray volume imaging. While many
segmentation approaches exist, most of them are developed for a single, specific imaging modality and a single organ. In
clinical practice, however, it is becoming increasingly important to handle multiple modalities: first, because the
most suitable imaging modality is chosen on a case-by-case basis (e.g., CT versus MR), and second, in order to integrate complementary data
from multiple modalities. In this paper, we present a single, integrated segmentation framework which can easily be
adapted to a range of imaging modalities and organs. Our algorithm is based on shape-constrained deformable models. Key
elements are (1) a shape model representing the geometry and variability of the target organ of interest, (2) spatially varying
boundary detection functions representing the gray value appearance of the organ boundaries for the specific imaging
modality or protocol, and (3) a multi-stage segmentation approach. Focusing on fully automatic heart segmentation, we
present evaluation results for CT, MR (contrast-enhanced and non-contrasted), and rotational X-ray angiography (3-D RA).
We achieved a mean segmentation error of about 0.8 mm for CT and (non-contrasted) MR, 1.0 mm for contrast-enhanced
MR, and 1.3 mm for 3-D RA, demonstrating the success of our segmentation framework across modalities.
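To make the interplay of the three key elements concrete, the following sketch shows one simplified iteration of a shape-constrained deformable model: detected boundary targets pull mesh vertices toward the image evidence while the shape model pulls them toward the prior. The linear blend and the parameter `alpha` are our illustrative assumptions, not the paper's actual energy formulation.

```python
import numpy as np

def deform_step(vertices, shape_model_vertices, target_points, alpha=0.5):
    """One iteration of a shape-constrained deformable model (sketch).

    vertices: (N, 3) current mesh vertex positions.
    shape_model_vertices: (N, 3) positions predicted by the shape model
        (e.g., mean mesh plus fitted modes of variation).
    target_points: (N, 3) boundary candidates returned by the spatially
        varying boundary detection functions, one per vertex.
    alpha: trade-off between external (boundary) and internal (shape) forces.
    """
    external = target_points - vertices          # pull toward detected boundaries
    internal = shape_model_vertices - vertices   # pull toward the shape prior
    return vertices + alpha * external + (1.0 - alpha) * internal
```

In a multi-stage scheme, early stages would keep `alpha` small so the shape prior dominates, relaxing it in later stages for a finer boundary fit; this staging is likewise an assumed simplification.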
To make accurate decisions based on imaging data, radiologists must associate the viewed imaging data with the corresponding anatomical structures. Furthermore, given a disease hypothesis, they must consider which image findings would verify the hypothesis and where and how those findings are expressed in the viewed images. If rare anatomical variants, rare pathologies, unfamiliar protocols, or ambiguous findings are present, external knowledge sources such as medical
encyclopedias are consulted. These sources are accessed using keywords, typically describing anatomical structures, image findings, or pathologies.
In this paper, we present our vision of how a patient's imaging data can be automatically enhanced with anatomical knowledge as well as knowledge about image findings. On the one hand, we propose the automatic annotation of the images with labels from a standard anatomical ontology. These labels are used as keywords for a medical encyclopedia such as STATdx to access anatomical descriptions and information about pathologies and image findings. On the other hand, we
envision encyclopedias containing links to region- and finding-specific image processing algorithms; a finding is
then evaluated on an image by applying the respective algorithm in the associated anatomical region, as in the sketch below.
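The following hypothetical sketch illustrates the envisioned linkage. The registry layout, the label and finding names, and the `measure_optic_nerve_diameter` routine are all illustrative assumptions, not an existing API.

```python
def measure_optic_nerve_diameter(image, region_mask):
    """Placeholder for a finding-specific algorithm (hypothetical)."""
    ...

# Hypothetical registry: an encyclopedia entry would link an
# (anatomical label, finding) pair to the algorithm that evaluates it.
FINDING_ALGORITHMS = {
    ("optic_nerve", "nerve_diameter"): measure_optic_nerve_diameter,
}

def evaluate_finding(image, label, finding, region_mask):
    """Run the encyclopedia-linked algorithm for a finding in its region.

    `label` comes from the automatic ontology-based annotation, and
    `region_mask` restricts the algorithm to the labeled anatomical region.
    """
    algorithm = FINDING_ALGORITHMS[(label, finding)]
    return algorithm(image, region_mask)
```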
Toward the realization of our vision, we present our method and results for the automatic annotation of anatomical structures in 3-D MRI brain images. To this end, we develop a complex surface mesh model incorporating major structures of the brain, together with a model-based segmentation method. We demonstrate the validity of our approach by analyzing the results of several training and segmentation experiments with clinical data, focusing particularly on the visual pathway.