Ultrasound (US) imaging is an attractive modality for real-time image-guided interventions. Fusion of US imaging with a diagnostic modality such as CT shows great potential in minimally invasive applications such as liver biopsy and ablation. However, the markedly different appearance of the liver in US and CT makes this image fusion a challenging task, particularly when some CT scans are obtained without contrast agents. The liver surface, including the immediately adjacent diaphragm, typically appears as a hyper-echoic region in the ultrasound image if a proper imaging window and depth setting are used. The liver surface is also well visualized in both contrast and non-contrast CT scans, making the diaphragm or liver surface one of the few attractive common features for registration of US and non-contrast CT. We propose a fusion method based on point-to-volume registration of the liver surface segmented in CT to a processed, electromagnetically (EM) tracked US volume. In this approach, the US image is first pre-processed to enhance the liver surface features. In addition, non-imaging information from the EM tracking system is used to initialize and constrain the registration process. We tested our algorithm against a manually corrected vessel-based registration method using 8 pairs of tracked US and contrast CT volumes. The registration method achieved an average deviation of 12.8mm from the ground truth, measured as the root mean square Euclidean distance over control points distributed throughout the US volume. Our results show that if the US image acquisition is optimized for imaging of the diaphragm, high registration success rates are achievable.
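As a minimal illustration of the evaluation described above (not the authors' implementation), the following sketch computes the root mean square Euclidean distance of control points under a candidate rigid transform; the 4x4 transform and the point sets here are hypothetical.

```python
# Hedged sketch: RMS Euclidean control-point error for a rigid transform.
import numpy as np

def rms_control_point_error(T, points_us, points_gt):
    """RMS Euclidean distance between US control points mapped by the
    4x4 transform T and their ground-truth positions (both Nx3, mm)."""
    homog = np.hstack([points_us, np.ones((len(points_us), 1))])
    mapped = (T @ homog.T).T[:, :3]                  # apply rigid transform
    d = np.linalg.norm(mapped - points_gt, axis=1)   # per-point error
    return np.sqrt(np.mean(d ** 2))

# Hypothetical usage: identity transform against noisy ground truth.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(20, 3))
print(rms_control_point_error(np.eye(4), pts, pts + rng.normal(0.0, 2.0, pts.shape)))
```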
Microwave ablation (MWA) has become a recommended modality for interventional cancer treatment.
Compared with radiofrequency ablation (RFA), MWA provides more rapid and larger-volume tissue heating. It
allows simultaneous ablation from different entry points and allows users to change the ablation size by controlling
the power/time parameters. Ablation planning systems have been proposed in the past, mainly addressing the needs of RFA procedures. A planning system addressing MWA-specific parameters and workflows is therefore highly desirable to help physicians achieve better microwave ablation results. In this paper, we design and implement an automated MWA planning system that provides precise probe locations for complete coverage of the tumor and margin.
We model the thermal ablation lesion as an ellipsoidal object with three known radii varying with the duration of the
ablation and the power supplied to the probe. The search for the best ablation coverage can be seen as an iterative
optimization problem. The ablation centers are steered toward the locations that minimize both un-ablated tumor tissue and the collateral damage caused to healthy tissue. We assess the performance of our algorithm using
simulated lesions with known "ground truth" optimal coverage. The Mean Localization Error (MLE) between the computed 3D ablation center and the ground truth ablation center was 1.75mm (standard deviation (STD): 0.69mm). The Mean Radial Error (MRE), estimated by comparing the computed ablation radii with the ground truth radii, was 0.64mm (STD: 0.43mm). These preliminary results demonstrate the accuracy and robustness of the described planning algorithm.
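To make the coverage search described above concrete, here is a hedged sketch of a cost of the kind the optimization could minimize: a candidate ablation is an axis-aligned ellipsoid with radii set by the power/time parameters, and the cost trades un-ablated tumor against collateral damage. The voxel representation and the weights are assumptions, not the paper's implementation.

```python
# Hedged sketch: ellipsoidal ablation coverage cost (weights assumed).
import numpy as np

def coverage_cost(center, radii, tumor_mask, voxel_coords,
                  w_tumor=1.0, w_healthy=0.2):
    """Cost of placing one ellipsoidal ablation at `center`.

    tumor_mask:   (N,) bool, True for tumor-plus-margin voxels
    voxel_coords: (N, 3) voxel positions in mm
    radii:        (3,) ellipsoid radii in mm for the chosen power/time
    """
    # A voxel is ablated if sum(((p - c) / r)^2) <= 1 (inside the ellipsoid).
    inside = np.sum(((voxel_coords - center) / radii) ** 2, axis=1) <= 1.0
    unablated_tumor = np.count_nonzero(tumor_mask & ~inside)
    collateral = np.count_nonzero(~tumor_mask & inside)
    return w_tumor * unablated_tumor + w_healthy * collateral
```

The ablation centers could then be steered iteratively, e.g. by a local search that moves each center to the neighboring position with the lowest cost until no further improvement is found.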
The 3D fusion of tracked ultrasound with a diagnostic CT image has multiple benefits in a variety of interventional
applications for oncology. Still, manual registration is a considerable drawback to the clinical workflow and hinders the
widespread clinical adoption of this technique. In this paper, we propose a method to allow for an image-based
automated registration, aligning multimodal images of the liver. We adopt a model-based approach that rigidly matches
segmented liver shapes from ultrasound (U/S) and diagnostic CT imaging. Towards this end, a novel method which
combines a dynamic region-growing method with a graph-based segmentation framework is introduced to address the
challenging problem of liver segmentation from U/S. The method is able to extract the liver boundary from U/S images after
a partial surface is generated near the principal vector from an electromagnetically tracked U/S liver sweep. The liver
boundary is subsequently expanded by modeling the problem as a graph-cut minimization scheme, where cost functions
used to detect the optimal surface topology are determined from adaptive priors of neighboring surface points. This allows boundaries affected by shadow areas to be included by compensating for varying levels of contrast. The segmentation of the
liver surface is performed in 3D space for increased accuracy and robustness. The method was evaluated in a study
involving 8 patients undergoing biopsy or radiofrequency ablation of the liver, yielding promising surface segmentation
results based on ground-truth comparison. The proposed extended segmentation technique improved the fiducial
landmark registration error compared to a point-based registration (7.2mm vs. 10.2mm on average, respectively), while
achieving tumor target registration errors that are statistically equivalent (p > 0.05) to state-of-the-art methods.
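The rigid shape-matching step could look roughly like the following ICP-style sketch, which aligns two surface point clouds by alternating nearest-neighbor matching with a best rigid fit; the paper's actual method and its graph-based cost terms differ and are not reproduced here.

```python
# Hedged sketch: ICP-style rigid alignment of two liver surface clouds.
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def icp_rigid(src, dst, iters=30):
    """Align src (Nx3) to dst (Mx3); returns the moved copy of src."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                   # nearest-neighbor matches
        matched = dst[idx]
        mu_c, mu_m = cur.mean(axis=0), matched.mean(axis=0)
        # Best rotation for the current correspondences (Procrustes).
        rot, _ = Rotation.align_vectors(matched - mu_m, cur - mu_c)
        cur = (cur - mu_c) @ rot.as_matrix().T + mu_m
    return cur
```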
In an effort to improve the accuracy of transrectal ultrasound (TRUS)-guided needle biopsies of the prostate, it is
important to understand the non-rigid deformation of the prostate. To understand the deformation of the prostate when
an endorectal coil (ERC) is inserted, we develop an elastic registration framework to register prostate MR images with
and without the ERC. Our registration framework uses robust point matching (RPM) to establish correspondences between the
surface landmarks in the source and target volumes followed by elastic body spline (EBS) registration based on the
corresponding landmark pairs. Together with the manual rigid alignment, we compared our registration framework
based on pure surface landmarks to the registration based on both surface and internal landmarks in the center of the
prostate. In addition, we assessed the impact of constraining the warping in the central zone of the prostate using a
Gaussian weighting function. Our results show that elastic surface-driven prostate registration is feasible, and that
internal landmarks further improve the registration in the central zone while they have little impact on the registration in
the peripheral zone of the prostate. Results varied case by case depending on the accuracy of the prostate segmentation
and the amount of warping present in each image pair. The most accurate results were obtained when using a Gaussian
weighting in the central zone to limit the EBS warping driven by surface points. This suggests that a Gaussian constraint on the warping can effectively compensate for the limitations of the isotropic EBS deformation model, and for erroneous warping inside the prostate created by inaccurate surface landmarks driving the EBS.
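The Gaussian weighting idea can be sketched as follows: surface-driven EBS displacements are attenuated near the gland center so that the surface landmarks dominate only at the periphery. The exact weighting function and sigma value used in the paper are not restated here; this form is an assumption.

```python
# Hedged sketch: Gaussian attenuation of warping in the central zone.
import numpy as np

def gaussian_constrained_warp(points, displacements, center, sigma=15.0):
    """Scale each displacement by 1 - exp(-||p - c||^2 / (2 sigma^2)),
    so warping vanishes at the gland center and is unchanged far away.
    points, displacements: (N, 3) in mm; center: (3,); sigma assumed."""
    r2 = np.sum((points - center) ** 2, axis=1)
    weight = 1.0 - np.exp(-r2 / (2.0 * sigma ** 2))
    return points + weight[:, None] * displacements
```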
The multimodal fusion of spatially tracked real-time ultrasound (US) with a prior CT scan has demonstrated clinical
utility, accuracy, and positive impact upon clinical outcomes when used for guidance during biopsy and radiofrequency
ablation in the treatment of cancer. Additionally, the combination of CT-guided procedures with positron emission
tomography (PET) may not only enhance navigation, but add valuable information regarding the specific location and
volume of the targeted masses, which may be invisible on CT and US. The accuracy of this fusion depends on reliable, reproducible registration between PET and CT, which avoids extensive manual registration correction, a long and tedious effort in an interventional setting. In this paper, we present a registration workflow for
PET/CT/US fusion by analyzing various image metrics based on normalized mutual information and cross-correlation,
using both rigid and affine transformations to automatically align PET and CT. Registration is performed between the
CT component of the prior PET-CT and the intra-procedural CT scan used for navigation to maximize image
congruence. We evaluate the accuracy of the PET/CT registration by computing fiducial and target registration errors
using anatomical landmarks and lesion locations respectively. We also report differences to gold-standard manual
alignment as well as the root mean square errors for CT/US fusion. Ten patients with prior PET/CT who underwent
ablation or biopsy procedures were selected for this study. Optimal results were obtained using a cross-correlation-based rigid registration, with a landmark localization error of 1.1 ± 0.7 mm using a discrete graph-minimizing scheme. We demonstrate the feasibility of automated fusion of PET/CT and its suitability for multi-modality
ultrasound guided navigation procedures.
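One of the analyzed metrics, normalized mutual information, can be sketched from a joint histogram as below; the bin count and the NMI variant (the entropy-sum form) are assumptions, not the paper's exact choices.

```python
# Hedged sketch: normalized mutual information from a joint histogram.
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B) for two same-shape images;
    larger values indicate better alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    eps = 1e-12                      # guard against log(0)
    hx = -np.sum(px * np.log(px + eps))
    hy = -np.sum(py * np.log(py + eps))
    hxy = -np.sum(p * np.log(p + eps))
    return (hx + hy) / hxy
```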
Automatic segmentation of the prostate in transrectal ultrasound (TRUS) may improve the fusion of TRUS
with magnetic resonance imaging (MRI) for TRUS/MRI-guided prostate biopsy and local therapy. It is very
challenging to segment the prostate in TRUS images, especially for the base and apex of the prostate due to
the large shape variation and low signal-to-noise ratio. To successfully segment the whole prostate from 2D
TRUS video sequences, this paper presents a new model-based algorithm using both global population-based
and adaptive local shape statistics to guide segmentation. By adaptively learning shape statistics in a local
neighborhood during the segmentation process, the algorithm can effectively capture the patient-specific shape
statistics and the large shape variations in the base and apex areas. After incorporating the learned shape
statistics into a deformable model, the proposed method can accurately segment the entire gland of the prostate
with significantly improved performance in the base and apex. The proposed method segments TRUS video in
a fully automatic fashion. In our experiments, 19 video sequences with 3064 frames in total, acquired from 19 different patients undergoing prostate cancer biopsy, were used for validation. Segmenting one frame took about 200ms on a Core2 1.86 GHz PC. The average mean absolute distance (MAD) error was 1.65±0.47mm for the
proposed method, compared to 2.50±0.81mm and 2.01±0.63mm for independent frame segmentation and frame
segmentation result propagation, respectively. Furthermore, relative to these two baselines, the proposed method reduced the MAD errors by 49.4% and 18.9% in the base and by 55.6% and 17.7% in the apex, respectively.
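The MAD figures above can be illustrated with a short sketch: the mean nearest-neighbor distance from the segmented contour to the ground-truth contour (a symmetric average is also common; the exact definition used in the paper is not restated here).

```python
# Hedged sketch: mean absolute distance (MAD) between two contours.
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_distance(contour_a, contour_b):
    """Mean nearest-neighbor distance from contour_a to contour_b
    (both (N, 2) point arrays in mm)."""
    d, _ = cKDTree(contour_b).query(contour_a)
    return d.mean()

def symmetric_mad(a, b):
    return 0.5 * (mean_absolute_distance(a, b) + mean_absolute_distance(b, a))
```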
MRI is currently the most promising imaging modality for prostate cancer diagnosis due to its high resolution and multiparametric
nature. However, currently there is no standard for integration of diagnostic information from different MRI
sequences. We propose a method to increase the diagnostic accuracy of MRI by correlating biopsy specimens with four
MRI sequences including T2 weighted MRI, Diffusion Weight Imaging, Dynamic Contrast Enhanced MRI and MRI
spectroscopy. This method uses device tracking and image fusion to determine the specimen's position on MRI images.
The proposed method is unbiased and cost effective. It does not substantially interfere with the standard biopsy
workflow, allowing it to be easily accepted by physicians. A study of 41 patients was carried out to validate the
approach. The performance of all four MRI sequences in various combinations is reported. Guidelines are given for
multi-parametric imaging and tracked biopsy of prostate cancer.
Automatic prostate segmentation in transrectal ultrasound (TRUS) can be used to register TRUS with magnetic
resonance (MR) images for TRUS/MR-guided prostate interventions. However, robust and automated prostate
segmentation is challenging due not only to the low signal-to-noise ratio in TRUS but also to the missing boundaries in shadow areas caused by calcifications or hyper-dense prostate tissue. The lack of image information in those areas is a barrier for most existing segmentation methods and normally necessitates user interaction for manual
correction. This paper presents a novel method to utilize prior shapes estimated from partial contours to guide
an optimal search for prostate segmentation. The proposed method is able to automatically extract prostate
boundary from 2D TRUS images without user interaction for correcting shapes in shadow areas. In our approach,
the point distribution model is first used to learn shape priors of the prostate from manual segmentation results. During segmentation, the missing boundaries in shadow areas are estimated using a new partial active shape model, which takes a partial contour as input but returns a complete estimated shape. The prostate boundary is then
obtained by using a discrete deformable model with optimal search, which is implemented efficiently by using
dynamic programming to produce robust segmentation results. The segmentation of each frame is performed in
multi-scale for robustness and computational efficiency. In our experiments segmenting 162 images acquired from ultrasound video sequences of 10 patients, the average mean absolute distance was 1.79±0.95mm. The
proposed method was implemented in C++ based on ITK and took about 0.3 seconds to segment the prostate
from a 640x480 image on a Core2 1.86 GHz PC.
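A hedged sketch of the dynamic-programming optimal search (in Python here, rather than the authors' C++/ITK implementation): with boundary costs sampled on a polar grid around the shape-model center, one radius per angle is chosen to minimize the summed boundary cost plus a smoothness penalty between neighboring angles. The cost grid and penalty weight are hypothetical, and the contour is treated as open for brevity.

```python
# Hedged sketch: DP boundary search over a (n_angles, n_radii) cost grid.
import numpy as np

def dp_boundary_search(cost, smooth=1.0):
    """Return the optimal radius index per angle under an
    |r_i - r_{i-1}| smoothness penalty (open contour)."""
    n_a, n_r = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((n_a, n_r), dtype=int)
    radii = np.arange(n_r)
    for i in range(1, n_a):
        # trans[cur, prev]: accumulated cost of stepping prev -> cur.
        trans = acc[i - 1][None, :] + smooth * np.abs(radii[:, None] - radii[None, :])
        back[i] = np.argmin(trans, axis=1)
        acc[i] += trans[radii, back[i]]
    path = np.empty(n_a, dtype=int)
    path[-1] = np.argmin(acc[-1])
    for i in range(n_a - 1, 0, -1):        # backtrack the optimal path
        path[i - 1] = back[i, path[i]]
    return path
```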
Organ motion compensation strategies for soft-tissue navigation were evaluated in a porcine model, quantifying both organ motion due to patient repositioning and respiratory motion during ventilated breathing.
Imaging was performed on a 16-slice CT scanner. Organ motion due to repositioning was studied by attaching 7
external skin fiducials and inserting 7 point fiducials in the livers of ventilated pigs. The pigs were imaged repeatedly in
supine and decubitus positions. Registrations between the images were obtained using either all external fiducials or 6
of the 7 internal fiducials. Target registration errors (TRE) were computed by using the leave-one-out technique.
Respiratory organ motion was studied by inserting 7 electromagnetically (EM) tracked needles in the livers of 2 pigs.
One needle served as primary target, the remaining six served as reference needles. In addition, 6 EM tracked skin
fiducials, 5 passive skin fiducials, and one dynamic reference tracker were attached. Registrations were obtained using
three different methods: continuous registration with the tracking data from the internal fiducials, continuous registration with the external tracked fiducials, and one-time registration using the passive skin fiducials and a tracked pointer with dynamic reference tracking. The TRE
for registering images obtained in supine position after an intermittent decubitus position ranged from 3.3 mm to 24.6
mm. Higher accuracy was achieved with internal fiducials (mean TRE = 6.4 mm) than with external fiducials (mean
TRE = 16.7 mm). During respiratory motion, the fiducial registration error (FRE) and TRE were shown to be correlated and were used to
demonstrate automatic FRE-based gating. Tracking of target motion relative to a reference time point was achieved by
registering nearby reference trackers with rigid and affine transformations. Linear motion models based on external and
internal reference trackers were shown to reduce the target motion by up to 63% and 90%, respectively.
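A linear motion model of the kind evaluated above might be fit by least squares, predicting target displacement from reference-tracker displacements; the residual then measures how much target motion the model removes. The feature layout and names are hypothetical.

```python
# Hedged sketch: least-squares linear motion model with affine offset.
import numpy as np

def fit_linear_motion_model(ref_disp, target_disp):
    """ref_disp: (T, k) reference-tracker displacements,
    target_disp: (T, 3) target displacements; returns weights W."""
    X = np.hstack([ref_disp, np.ones((len(ref_disp), 1))])  # affine term
    W, *_ = np.linalg.lstsq(X, target_disp, rcond=None)
    return W

def residual_motion(ref_disp, target_disp, W):
    """Per-sample residual target motion (mm) after compensation."""
    X = np.hstack([ref_disp, np.ones((len(ref_disp), 1))])
    return np.linalg.norm(target_disp - X @ W, axis=1)
```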
This paper presents an ultrasound guidance system for needle placement procedures. The system integrates a real-time
3D ultrasound transducer with a 3D localizer and a tracked needle to enable real-time visualization of the needle in
ultrasound. The system uses data streaming to transfer real-time ultrasound volumetric images to a separate workstation
for visualization. Multi-planar reconstructions of the ultrasound volume are computed at the workstation using the
tracking information, allowing for real-time visualization of the needle in ultrasound without aligning the needle with the
transducer. The system may simplify the needle placement procedure and potentially reduce the levels of skill and
training needed to perform accurate needle placements. The physician can therefore focus on the needle placement
procedure without paying extra attention to perfect mid-plane alignment of the needle with the ultrasound image plane.
In addition, the physician has real-time visual feedback of the needle and the target, even before the needle enters the
patient's skin, allowing the procedure to be easily, safely and accurately planned. The superimposed needle can also
greatly improve the sometimes poor visualization of the needle in an ultrasound image (e.g. in between ribs). Since the
free-hand needle is not inserted through a fixed needle channel, the physician has full freedom to select the
needle's orientation or position. No cumbersome accessories are attached to the ultrasound transducer, allowing the
physician to use his or her previous experience with regular ultrasound transducers. 3D display of the target in relation
to the treatment volume can help verify adequacy of tumor ablation as well.
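The tracking-driven multi-planar reconstruction could be sketched as follows: given the tracked needle pose as a 4x4 transform expressed in volume voxel coordinates, the ultrasound volume is resampled on a plane containing the needle axis. The grid size, spacing, axis conventions, and trilinear sampling are all assumptions.

```python
# Hedged sketch: resample an oblique plane through the needle axis.
import numpy as np
from scipy.ndimage import map_coordinates

def needle_plane_mpr(volume, needle_pose, size=128, spacing=1.0):
    """volume: 3D array indexed (z, y, x); needle_pose: 4x4 in voxel
    coordinates, with column 2 taken as the needle axis (assumed)."""
    u = needle_pose[:3, 0]                       # in-plane direction
    v = needle_pose[:3, 2]                       # needle axis
    origin = needle_pose[:3, 3]
    r = (np.arange(size) - size / 2) * spacing
    gu, gv = np.meshgrid(r, r, indexing="ij")
    pts = origin + gu[..., None] * u + gv[..., None] * v   # (size, size, 3) xyz
    coords = pts[..., ::-1].transpose(2, 0, 1)             # to (z, y, x) order
    return map_coordinates(volume, coords, order=1, mode="nearest")
```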
The purpose of this study was to examine the effects of different sensor orientations on the positional accuracy of an AC
electromagnetic tracking system, the second generation NDI Aurora, within a CT scanner environment. A three-axis
positioning robot was used to move three electromagnetically tracked needles above the CT table throughout a 30cm by
30cm by 30cm volume sampled in 2.5cm steps. All three needle tips were held within 2mm of each other, with the
needle axes oriented orthogonally along the +x, +y, and +z directions of the Aurora coordinate system. The corresponding
position data was captured from the Aurora for each needle and was registered to the positioning system data using a
rigid body transformation minimizing the least squares L2-norm. For all three needle orientations the largest errors were
observed farthest from the field generator and closest to the CT table. However, the 3D distortion error patterns were
different for each needle, demonstrating that the sensor orientation has an effect on the positional measurement of the
sensor. This suggests that the effectiveness of using arrays of reference sensors to model and correct for metal distortions
may depend strongly on the orientation of the reference sensors in relation to the orientation of the tracked device. In an
ideal situation, the reference sensors should be oriented in the same direction as the tracked needle.
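The rigid-body least-squares fit mentioned above has a classical closed-form solution via the SVD (the Kabsch/Procrustes method); the paper does not specify its solver, so this is a plausible sketch rather than the actual implementation.

```python
# Hedged sketch: closed-form rigid fit minimizing sum ||R p_i + t - q_i||^2.
import numpy as np

def rigid_fit(p, q):
    """p, q: (N, 3) corresponding points; returns rotation R, translation t."""
    mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
    H = (p - mu_p).T @ (q - mu_q)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```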
A system for fusion of realtime transrectal ultrasound (TRUS) with pre-acquired 3D images of the prostate was
designed and demonstrated in phantoms and volunteer patients. Biopsy guides for endocavity ultrasound transducers
were equipped with customized 6 degree-of-freedom (DoF) electromagnetic (EM) tracking sensors, compatible with the
Aurora EM tracking system (Northern Digital Inc, NDI, Waterloo, ON, Canada). The biopsy guides were attached to an
ultrasound probe and calibrated to map tracking coordinates with ultrasound image coordinates. Six cylindrical gold
seeds were placed in a prostate phantom to serve as fiducial markers. The fiducials were first identified manually in 3T
magnetic resonance (MR) images collected with an endorectal coil. The phantom was then imaged with tracked realtime
TRUS and the fiducial markers were identified in the live image using custom software. Rigid registrations between
MR and ultrasound image space were computed and evaluated using subsets of the fiducial markers. Twelve patients
were scanned with 3T MRI and TRUS for biopsy and seed placement. In ten patients, volumetric ultrasound images
were reconstructed from 2D sweeps of the prostate and were manually registered with the MR. The rigid registrations
were used to display live TRUS images fused with spatially corresponding realtime multiplanar reconstructions (MPRs)
of the MR image volume. Registration accuracy was evaluated by segmenting the prostate in the MR and volumetric
ultrasound and computing distance measures between the two segmentations. In the phantom experiments, registration
accuracies of 2.2 to 2.3 mm were achieved. In the patient studies, the average root mean square distance between the MR and TRUS segmentations was 3.1 mm, and the average Hausdorff distance was 9.8 mm. Deformation of the prostate between the MR and TRUS scans was identified as the primary source of error. Realtime MR/TRUS image fusion is feasible
and is a promising approach to improved target visualization during TRUS-guided biopsy or therapy procedures.
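The two surface-distance measures reported above can be sketched for segmentations represented as surface point clouds; extraction of the surfaces from the label maps is not shown.

```python
# Hedged sketch: RMS and symmetric Hausdorff distance between surfaces.
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(a, b):
    """a: (N, 3), b: (M, 3) surface points in mm; returns (rms, hausdorff)."""
    d_ab, _ = cKDTree(b).query(a)      # a -> b nearest distances
    d_ba, _ = cKDTree(a).query(b)      # b -> a nearest distances
    d = np.concatenate([d_ab, d_ba])
    return np.sqrt(np.mean(d ** 2)), max(d_ab.max(), d_ba.max())
```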
Three-dimensional (3D) ultrasound is ideally suited to monitor internal organ motion since it offers real-time volumetric
imaging without exposing the patient to radiation. We extend a two-dimensional (2D) region-tracking algorithm, which
was originally used in computer vision, to monitor internal organ motion in 3D. A volume of interest is first selected in
an ultrasound volume as a reference. The sum of squared differences is used as the similarity measure to register the
reference to each successive volume frame. A transformation model is used to describe the motion and geometric
deformation of the reference. The Gauss-Newton method is used to solve the optimization problem. In order to improve
the algorithm's efficiency, the Jacobian matrix is decomposed as a product of a time-varying matrix and a constant
matrix. The constant matrix is pre-computed to reduce the load of online computation. The algorithm was tested on
targets under respiratory motion and cardiac motion. The experimental results show that the transformation model of the
algorithm can approximate the geometric distortion of the reference template. With a properly selected reference with
rich texture information, the algorithm is sufficiently accurate and robust to follow target motion, and fast enough to be
used in real time.
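A toy version of the SSD / Gauss-Newton machinery, for a pure-translation model in 2D, is sketched below; the paper's transformation model also captures geometric deformation and works in 3D, and its Jacobian factorization is only mirrored here by precomputing the constant reference-gradient part.

```python
# Hedged sketch: 2D translation tracking with SSD and Gauss-Newton.
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def track_translation(ref, frame, p0=(0.0, 0.0), iters=20):
    """Find translation p with frame(x + p) ~= ref(x) over the ref window."""
    p = np.asarray(p0, dtype=float)
    ys, xs = np.mgrid[0:ref.shape[0], 0:ref.shape[1]].astype(float)
    # Constant Jacobian part: reference gradients, precomputed once.
    gy, gx = sobel(ref, axis=0) / 8.0, sobel(ref, axis=1) / 8.0
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)
    H = J.T @ J                                    # Gauss-Newton normal matrix
    for _ in range(iters):
        warped = map_coordinates(frame, [ys + p[0], xs + p[1]], order=1)
        r = (warped - ref).ravel()                 # SSD residual
        dp = np.linalg.solve(H, J.T @ r)
        p = p - dp                                 # Gauss-Newton update
        if np.linalg.norm(dp) < 1e-3:
            break
    return p
```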
The purpose of this study was to quantify the effects of a computed tomography (CT) scanner environment on the
positional accuracy of an AC electromagnetic tracking system, the second generation NDI Aurora. A three-axis
positioning robot was used to move an electromagnetically tracked needle above the CT table throughout a 30cm by
30cm axial plane sampled in 2.5cm steps. The corresponding position data was captured from the Aurora and was
registered to the positioning system data using a rigid body transformation minimizing the least squares L2-norm. Data
was sampled at varying distances from the CT gantry (three feet, two feet, and one foot) and with the CT table in a
nominal position and lowered by 10cm. A coordinate system was defined with its origin at the center of the CT table, the x axis normal to the table, and the z axis spanning the table in the lateral direction. In this coordinate system, the positional relationships of each sampled point, the CT table, and the
Aurora field generator are clearly defined. This allows error maps to be displayed in accurate spatial relationship to the
CT scanner as well as to a representative patient anatomy. By quantifying the distortions in relation to the position of CT
scanner components and the Aurora field generator, the optimal working field of view and recommended guidelines for
operation can be determined such that targeting inside human anatomy can be done with reasonable expectations of
desired performance.