Single Plane Illumination Microscopy (SPIM) is an emerging microscopic technique that enables live imaging of large
biological specimens in their entirety. By imaging the biological sample from multiple angles, SPIM has the potential to
achieve isotropic resolution throughout relatively large biological specimens. For every angle, however, only a shallow
section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. Existing
intensity-based registration techniques still struggle to robustly and accurately align images that are characterized by limited
overlap and/or heavy blurring. To be able to register such images, we add sub-resolution fluorescent beads to the rigid
agarose medium in which the imaged specimen is embedded. For each segmented bead, we store the relative location
of its n nearest neighbors in image space as rotation-invariant geometric local descriptors. Corresponding beads between
overlapping images are identified by matching these descriptors. The bead correspondences are used to simultaneously
estimate the globally optimal transformation for each individual image. The final output image is created by combining
all images in an angle-independent output space, using volume injection and local content-based weighting of contributing
images. We demonstrate the performance of our approach on data acquired from living embryos of Drosophila and fixed
adult C. elegans worms. Bead-based registration outperformed intensity-based registration in terms of computation speed by two orders of magnitude while producing bead registration errors below 1 μm (about 1 pixel). It therefore provides an ideal tool for processing long-term time-lapse recordings of embryonic development consisting of hundreds of time points.
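As a rough illustration of the descriptor-matching idea, the sketch below builds a rotation-invariant descriptor for each segmented bead from the sorted distances to its n nearest neighbours and matches descriptors between two views by nearest-neighbour search. This is a deliberate simplification, not the published implementation; the descriptor choice, function names, and matching threshold are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def bead_descriptors(beads, n_neighbors=3):
    """For each bead, a rotation-invariant local descriptor: the sorted
    distances to its n nearest neighbours (illustrative choice only)."""
    tree = cKDTree(beads)
    # query n_neighbors + 1 points because the closest hit is the bead itself
    dists, _ = tree.query(beads, k=n_neighbors + 1)
    return dists[:, 1:]                      # drop the zero self-distance

def match_beads(desc_a, desc_b, max_dist=0.5):
    """Greedy nearest-descriptor matching between two overlapping views."""
    tree = cKDTree(desc_b)
    d, j = tree.query(desc_a, k=1)
    return [(i, jj) for i, (dd, jj) in enumerate(zip(d, j)) if dd < max_dist]

# toy example: the same bead cloud seen from two angles (rotated copy)
rng = np.random.default_rng(0)
beads = rng.uniform(0, 100, size=(200, 3))
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)   # 90 deg about z
matches = match_beads(bead_descriptors(beads), bead_descriptors(beads @ R.T))
print(len(matches), "candidate correspondences")
```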
Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic
technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological
sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological
specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution,
whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high resolution image, we
propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with
content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For
the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as
well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is
substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method
on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
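The Gaussian-filter fusion idea can be sketched as follows: estimate local image content as the smoothed squared deviation from a low-pass version of each registered view, then blend the views with these weights so that sharp regions dominate the result. The weighting used here is a plausible but assumed surrogate for the paper's exact scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def content_weight(img, sigma=20.0):
    """Local information content approximated with Gaussian filters:
    squared deviation from a heavily smoothed copy, smoothed again."""
    low = gaussian_filter(img, sigma)
    return gaussian_filter((img - low) ** 2, sigma) + 1e-12

def fuse(images, sigma=20.0):
    """Weighted average of pre-registered views; blurred regions get low weight."""
    weights = np.stack([content_weight(im, sigma) for im in images])
    return (weights * np.stack(images)).sum(0) / weights.sum(0)
```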
We present a new standard atlas of the human brain based on magnetic resonance images. The atlas was generated using
unbiased population registration from high-resolution images obtained by multichannel-coil acquisition at 3T in a group
of 24 normal subjects. The final atlas comprises three anatomical channels (T1-weighted, early and late spin echo), three
diffusion-related channels (fractional anisotropy, mean diffusivity, diffusion-weighted image), and three tissue probability
maps (CSF, gray matter, white matter). The atlas is dynamic in that it is implicitly represented by nonrigid transformations
between the 24 subject images, as well as distortion-correction alignments between the image channels in each subject.
The atlas can, therefore, be generated at essentially arbitrary image resolutions and orientations (e.g., AC/PC aligned),
without compounding interpolation artifacts. We demonstrate in this paper two different applications of the atlas: (a)
region definition by label propagation in a fiber tracking study is enabled by the increased sharpness of our atlas compared
with other available atlases, and (b) spatial normalization is enabled by its average shape property. In summary, our atlas
has unique features and will be made available to the scientific community as a resource and reference system for future
imaging-based studies of the human brain.
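The property of generating the atlas at arbitrary resolution without compounding interpolation artifacts comes from concatenating all coordinate transformations before a single resampling pass. A minimal sketch of that general idea, with plain callable transforms standing in for the atlas's nonrigid and distortion-correction maps:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_through(image, transforms, out_shape):
    """Resample 'image' through a chain of coordinate transforms with a single
    interpolation pass. Each transform maps output-space coordinates to
    input-space coordinates (here: functions acting on (3, N) arrays)."""
    coords = np.indices(out_shape).reshape(3, -1).astype(float)
    for t in transforms:                # compose in coordinate space, not image space
        coords = t(coords)
    return map_coordinates(image, coords, order=1).reshape(out_shape)
```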
Active shape models (ASMs) have been studied extensively for the statistical analysis of three-dimensional shapes. These models can be used as prior information for segmentation and other image analysis tasks. In order to create an ASM, correspondence between surface points on the training shapes must be provided. Various groups have previously investigated methods that attempted to provide correspondences between points on pre-segmented shapes. This requires a time-consuming segmentation stage before the statistical analysis can be performed. This paper presents a method of ASM generation that requires as input only a single segmented template shape obtained from a mean grayscale image across the training set. The triangulated mesh representing this template shape is then propagated to the other shapes in the training set by a nonrigid transformation. The appropriate transformation is determined by intensity-based nonrigid registration of the corresponding grayscale images. Following the transformation of the template, the mesh is treated as an active surface, and evolves towards the image edges while preserving certain curvature constraints. This process results in automatic segmentation of each shape, but more importantly also provides an automatic correspondence between the points on each shape. The resulting meshes are aligned using Procrustes analysis, and a principal component analysis is performed to produce the statistical model. For demonstration, a model of the lower cervical vertebrae (C6 and C7) was created. The resulting model is evaluated for accuracy, compactness, and generalization ability.
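The last two steps of this pipeline, Procrustes alignment of the corresponded meshes and principal component analysis to obtain the statistical model, can be sketched roughly as follows. Rigid alignment without scaling and the simple mode count are illustrative simplifications of the full Procrustes analysis described above.

```python
import numpy as np

def procrustes_align(shapes):
    """Align each shape (N x 3 point set with fixed correspondence) to the
    first shape by translation and rotation (scaling omitted for brevity)."""
    ref = shapes[0] - shapes[0].mean(0)
    aligned = []
    for s in shapes:
        s = s - s.mean(0)
        u, _, vt = np.linalg.svd(s.T @ ref)
        r = u @ vt
        if np.linalg.det(r) < 0:             # avoid reflections
            u[:, -1] *= -1
            r = u @ vt
        aligned.append(s @ r)
    return np.array(aligned)

def build_asm(shapes, n_modes=5):
    """PCA on aligned, flattened shapes: mean shape plus main variation modes."""
    x = procrustes_align(shapes).reshape(len(shapes), -1)
    mean = x.mean(0)
    _, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    return mean, vt[:n_modes], (s ** 2) / (len(shapes) - 1)
```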
Confocal microscopy (CM) is a powerful image acquisition technique that is well established in many biological applications. It provides 3-D acquisition with high spatial resolution and can acquire several different channels of complementary image information. Due to the specimen extraction and preparation process, however, the shapes of imaged objects may differ considerably from their in vivo appearance. Magnetic resonance microscopy (MRM) is an evolving variant of magnetic resonance imaging, which achieves microscopic resolutions using a high magnetic field and strong magnetic gradients. Compared to CM imaging, MRM allows for in situ imaging and is virtually free of geometrical distortions. We propose to combine the advantages of both methods by unwarping CM images using an MRM reference image. Our method incorporates a sequence of image processing operators applied to the MRM image, followed by a two-stage intensity-based registration to compute a nonrigid coordinate transformation between the CM images and the MRM image. We present results obtained using CM images from the brains of 20 honey bees and an MRM image of an in situ bee brain.
This paper describes the application and validation of automatic segmentation of three-dimensional images by non-rigid registration to atlas images. The registration-based segmentation technique is applied to confocal microscopy images acquired from the brains of 20 bees. Each microscopy image is registered to an already segmented reference atlas image using an intensity-based non-rigid image registration algorithm. This paper evaluates and compares four different approaches: registration to an individual atlas image (IND), registration to an average shape atlas image (AVG), registration to the most similar image from a database of individual atlas images (SIM), and registration to all images from a database of individual atlas images with subsequent fuzzy segmentation (FUZ). For each strategy, the segmentation performance of the algorithm was quantified using both a global segmentation correctness measure and the similarity index. Manual segmentation of all microscopy images served as a gold standard. The best segmentation result (median correctness 91 percent of all voxels) was achieved using the FUZ paradigm. Robustness was also the best for this strategy (minimum correctness over all individuals 84 percent). The mean similarity index value of segmentations produced by the FUZ paradigm is 0.86
(IND, 0.81; AVG, 0.84; SIM, 0.82). The superiority of the FUZ paradigm is statistically significant (two-sided paired t-test, P<0.001).
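Both the evaluation measure and the FUZ combination rest on simple per-voxel computations; a rough sketch is shown below. The voting rule is illustrative and may differ from the exact fuzzy combination used in the paper.

```python
import numpy as np

def similarity_index(a, b):
    """Similarity (Dice) index between two binary label masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def fuzzy_segmentation(label_maps):
    """FUZ-style combination: each registered atlas contributes one warped
    label map; per-voxel label frequencies give a fuzzy segmentation and
    the most frequent label a crisp one (illustrative vote)."""
    stack = np.stack(label_maps)                     # (n_atlases, ...) integer labels
    labels = np.unique(stack)
    probs = np.stack([(stack == l).mean(0) for l in labels])
    return labels[probs.argmax(0)], probs
```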
Calculating digitally reconstructed radiographs (DRRs) is an important step in intensity-based fluoroscopy-to-CT image registration methods. Unfortunately, the standard techniques to generate DRRs involve ray casting and run in time O(n³), where we assume that n is approximately the size (in voxels) of one side of the DRR as well as one side of the CT volume. Because of this, generation of DRRs is typically the rate-limiting step in the execution time of intensity-based fluoroscopy-to-CT registration algorithms. We address this issue by extending light field rendering techniques from the computer graphics community to generate DRRs instead of conventional rendered images. Using light fields allows most of the computation to be performed in a preprocessing step; after this precomputation step, very accurate DRRs can be generated in time O(n²). Using a light field generated from 1,024 DRRs of resolution 256×256, we can create new DRRs that appear visually identical to ones generated by conventional ray casting. Importantly, the DRRs generated using the light field are computed over 300 times faster than DRRs generated using conventional ray casting (50 vs. 17,000 ms on a PC with a 2 GHz Intel Pentium 4 processor).
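For orientation, a naive O(n³) DRR and the precomputation flavour of the light-field idea might look like the following sketch. Parallel projection and a simple pose grid are simplifying assumptions; the actual method casts perspective rays and interpolates within a 4-D light field.

```python
import numpy as np
from scipy.ndimage import rotate

def drr_parallel(ct, angle_deg):
    """Naive O(n^3) DRR: rotate the CT volume and integrate attenuation along
    one axis (parallel projection for simplicity)."""
    rotated = rotate(ct, angle_deg, axes=(0, 2), reshape=False, order=1)
    return rotated.sum(axis=2)

def precompute_drrs(ct, angles):
    """Light-field-flavoured precomputation: render DRRs once on a grid of
    poses, so new pose queries can be answered from the stored set instead
    of re-casting every ray."""
    return {a: drr_parallel(ct, a) for a in angles}
```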
This paper describes an algorithm for clipping of m-dimensional objects that intersect a compact n-dimensional rectangular area. The new algorithm is an extension of a method for line clipping in three dimensions. Motivated by the need for efficient algorithms, for example when comparing three-dimensional (3-D) images to each other, our method allows for the incremental computation of the subset of voxels in a discretely sampled image that are located inside a second image. Limited fields of view (rectangular regions of interest) in either image are easily supported. Application of our algorithm does not require the generation of an explicit geometrical description of the image intersection. Besides its generality with respect to the dimensions of the objects under consideration, our clipping method solves the problem of discriminating between points inside the clipping region and points on its edge, which is important when problems such as voxel intensity interpolation are only well-defined within the clipping area.
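The underlying parametric clipping step can be illustrated with a Liang-Barsky-style segment/box test that works in any dimension; this is a simplified stand-in for the m-dimensional object clipping described above, not the published algorithm itself.

```python
import numpy as np

def clip_segment_to_box(p0, p1, box_min, box_max):
    """Clip the segment p0 + t*(p1 - p0), t in [0, 1], against an axis-aligned
    box in any dimension. Returns the parameter interval (t_in, t_out) of the
    part inside the box, or None if the segment misses the box entirely."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    t_in, t_out = 0.0, 1.0
    for lo, hi, o, delta in zip(box_min, box_max, p0, d):
        if delta == 0.0:
            if o < lo or o > hi:          # parallel to this slab and outside it
                return None
            continue
        t0, t1 = (lo - o) / delta, (hi - o) / delta
        if t0 > t1:
            t0, t1 = t1, t0
        t_in, t_out = max(t_in, t0), min(t_out, t1)
        if t_in > t_out:
            return None
    return t_in, t_out

# example: which part of a voxel row of image A lies inside image B's field of view
print(clip_segment_to_box([-1, 2, 3], [10, 2, 3], [0, 0, 0], [7, 7, 7]))
```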
Despite the growing popularity of frameless image-guided surgery systems, stereotactic head frame systems are widely accepted by neurosurgeons and are still commonly used to perform stereotactic biopsy, functional procedures, and stereotactic radiosurgery. In this study, we investigate the accuracy of the Cosman-Roberts-Wells (CRW) stereotactic frame system when the mechanical load on the frame changes between pre-operative imaging and the intervention due to a change in patient position (supine during imaging, prone during intervention). We analyze CT images acquired from 12 patients who underwent stereotactic biopsy or stereotactic radiosurgery. Two CT images were acquired for each patient, one with the patient in the supine position and one in the prone position. The prone images were registered to the respective supine images using an intensity-based registration algorithm, once using only the frame and once using only the head. The difference between the transformations produced by these two registrations describes the movement of the patient's head with respect to the frame due to mechanical distortion of the latter. The maximum frame-based registration error between supine and prone positions was 2.8 mm; it exceeded 2 mm in two patients and 1.5 mm in five patients. Anterior-posterior translation is the dominant component of the difference transformation for most of these patients. In general, the magnitude of the movement increased with brain volume, which is an index of head weight. We conclude that in order to minimize frame-based registration error due to a change in the mechanical load on the frame, frame-based stereotactic procedures should be performed with the patient in the identical position during imaging and intervention.
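The difference transformation is obtained by composing one registration with the inverse of the other; a minimal sketch with homogeneous 4x4 rigid transforms follows (the matrix convention and the reported translation magnitude are assumptions of this sketch).

```python
import numpy as np

def residual_motion(t_frame, t_head):
    """Difference between the frame-based and head-based registrations
    (4x4 homogeneous prone-to-supine transforms): how far the head moved
    relative to the frame between the two scan positions."""
    diff = np.linalg.inv(t_frame) @ t_head
    translation_mm = np.linalg.norm(diff[:3, 3])
    return diff, translation_mm
```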
Registration of 2-D projection images and 3-D volume images is still a largely unsolved problem. In order to register a pre-operative CT image to an intra-operative 2-D x-ray image, one typically computes simulated x-ray images from the attenuation coefficients in the CT image (Digital Reconstructed Radiograph, DRR). The simulated images are then compared to the actual image using intensity-based similarity measures to quantify the correctness of the current relative pose. However, the spatial information present in the CT is lost in the process of computing projections. This paper first introduces a probabilistic extension to the computation of DRRs that preserves much of the spatial separability of tissues along the simulated rays. In order to handle the resulting non-scalar data in intensity-based registration, we propose a way of computing entropy-based similarity measures such as mutual information (MI) from probabilistic images. We give an initial evaluation of the feasibility of our novel image similarity measure for 2-D to 3-D registration by registering a probabilistic DRR to a deterministic DRR computed from patient data used in frameless stereotactic radiosurgery.
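Conventional mutual information between two scalar images is computed from their joint histogram, as in the sketch below. The paper's contribution is extending such entropy-based measures to per-pixel probability vectors; this simplified scalar-image version does not show that extension.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information between two images, estimated from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0                               # avoid log(0) for empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```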
Segmentation of fluoroscopy images is useful for fluoroscopy-to-CT image registration. However, it is impossible to assign a unique tissue type to each pixel. Rather each pixel corresponds to an entire path of tissue types encountered along a ray from the X-ray source to the detector plate. Furthermore, there is an inherent many-to-one mapping between paths and pixel values. We address these issues by assigning to each pixel not a scalar value but a fuzzy vector of tissue probabilities. We perform this segmentation in a probabilistic way by first learning typical distributions of bone, air, and soft tissue that correspond to certain fluoroscopy image values and then assigning each value to a probability distribution over its most likely generating paths. We then evaluate this segmentation on ground truth patient data.
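A much-simplified version of the probabilistic assignment is sketched below, using Bayes' rule over a handful of tissue classes rather than full tissue paths; the class set, histogram binning, and priors are illustrative assumptions.

```python
import numpy as np

def learn_likelihoods(values, labels, bins=256):
    """Histogram-based likelihoods p(pixel value | tissue class) from training
    fluoroscopy pixels with known dominant class (assumed training setup)."""
    edges = np.linspace(values.min(), values.max(), bins + 1)
    classes = np.unique(labels)
    lik = np.stack([np.histogram(values[labels == c], edges, density=True)[0]
                    for c in classes])
    return classes, edges, lik + 1e-9

def tissue_probabilities(values, classes, edges, lik, prior=None):
    """Per-pixel fuzzy vector of tissue probabilities via Bayes' rule."""
    prior = np.ones(len(classes)) / len(classes) if prior is None else prior
    idx = np.clip(np.searchsorted(edges, values, side="right") - 1,
                  0, lik.shape[1] - 1)
    post = lik[:, idx] * prior[:, None]
    return (post / post.sum(0)).T              # shape (n_pixels, n_classes)
```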
Medical image data is usually represented by a uniformly spaced grid of voxels. However, CT scanners, for example, are capable of producing non-uniformly spaced slice images. This is desirable when, for a particular patient, some regions (lesions) need to be imaged at high resolution, while a lower resolution would be sufficient in other areas. Such an adaptive slice spacing can significantly reduce X-ray dose, thus directly benefiting the patient. Unfortunately, computational handling of the resulting volume data is far less efficient than that of uniformly spaced images. To deal with this problem, the present paper introduces a novel data structure for non-uniformly spaced image coordinates, the so-called virtual uniform axes. By a generalization of Euclid's greatest common divisor (GCD) algorithm, a table of virtual voxels on a uniform grid is produced. Each of the uniform voxels in the virtual grid holds a pointer to the corresponding voxel in the original, non-uniform grid. Finding a voxel in the virtual uniform image can be done in constant time as compared to logarithmic time for finding a voxel in a non-uniform image. This is achieved with significantly less additional storage than by resampling the image data itself to a uniform grid. Interpolation artifacts are also completely avoided.
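The virtual-uniform-axis construction can be sketched for a single non-uniform axis: quantise the slice spacings, take their greatest common divisor as the virtual spacing, and fill a lookup table of pointers to the original slices. The quantisation step and rounding tolerance are assumptions of this sketch.

```python
import numpy as np
from math import gcd
from functools import reduce

def virtual_uniform_axis(positions, quantum=1e-3):
    """Build a virtual uniform axis for non-uniform slice positions: quantise
    the spacings, take their GCD as the virtual spacing, and tabulate which
    original slice each virtual slot points to (constant-time lookup)."""
    q = np.round(np.diff(positions) / quantum).astype(int)
    step = reduce(gcd, map(int, q))              # virtual spacing in quanta
    lookup = np.repeat(np.arange(len(positions) - 1), q // step)
    lookup = np.append(lookup, len(positions) - 1)
    return step * quantum, lookup

# slices at 0, 1, 1.5, 3 mm -> virtual spacing 0.5 mm, table of 7 pointers
spacing, table = virtual_uniform_axis(np.array([0.0, 1.0, 1.5, 3.0]))
print(spacing, table)
```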
In this paper, we demonstrate a technique for modeling liver motion during the respiratory cycle using intensity-based free-form deformation registration of gated MR images. We acquired 3D MR image sets (multislice 2D) of the abdomen of four volunteers at end-inhalation, end-exhalation, and eight time points in between using respiratory gating. We computed the deformation field between the images using intensity-based rigid and non-rigid registration algorithms. The non-rigid transformation is a free-form deformation with B-spline interpolation between uniformly-spaced control points. The transformations between inhalation and exhalation were visually inspected. Much of the liver motion is cranial-caudal translation, and thus the rigid transformation captures much of the motion. However, there is still substantial residual deformation of up to 2 cm. The free-form deformation produces a motion field that appears on visual inspection to be accurate. This is true for the liver surface, internal liver structures such as the vascular tree, and the external skin surface. We conclude that abdominal organ motion due to respiration can be satisfactorily modeled using an intensity-based non-rigid 4D image registration approach. This allows for an easier and potentially more accurate and patient-specific deformation field computation than physics-based models using assumed tissue properties and acting forces.
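For reference, evaluating a cubic B-spline free-form deformation at a point from a uniform control-point grid looks roughly like the sketch below; the indexing convention and grid padding are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def bspline_weights(u):
    """Cubic B-spline basis values for a local coordinate u in [0, 1)."""
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0

def ffd_displacement(point, control_disp, spacing):
    """Displacement at a 3-D point from a uniform control-point grid.
    control_disp has shape (nx, ny, nz, 3) and is assumed padded/offset so the
    4x4x4 neighbourhood of every queried point has valid indices."""
    p = np.asarray(point, float) / spacing
    idx = np.floor(p).astype(int) - 1        # first of the four supporting control points
    u = p - np.floor(p)
    w = [bspline_weights(ui) for ui in u]
    disp = np.zeros(3)
    for a in range(4):
        for b in range(4):
            for c in range(4):
                disp += (w[0][a] * w[1][b] * w[2][c]
                         * control_disp[idx[0] + a, idx[1] + b, idx[2] + c])
    return disp
```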
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames, and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.