The target pose (position and orientation) of a spinal lesion can be determined using image registration of a pair of two-dimensional
(2D) x-ray projection images and a pre-treatment three-dimensional (3D) CT image. This is useful for
detecting, tracking and correcting for patient movement during image-guided spinal radiotherapy and radiosurgery. We
recently developed a fiducial-less 2D-3D spine image registration method that localizes spinal targets by directly tracking
adjacent skeletal structures, thereby eliminating the need for implanted fiducials. Experience has shown this method
to be robust under a wide range of clinical circumstances. However, image artifacts in digitally reconstructed
radiographs (DRRs), which can be introduced by breathing motion during CT scanning or by surrounding structures such as
the ribs, degrade image registration performance. Therefore, we present an approach that eliminates these
image artifacts in DRRs for more robust registration. The spinal structures in the CT volume are approximately
segmented in a semi-automatic way and saved as a volume of interest (VOI). The DRRs are then generated within the
spine VOI for two orthogonal projections. During radiation treatment delivery, two X-ray images are acquired
simultaneously in near real time. Each X-ray image is then registered with the corresponding DRR to obtain the 2D local
displacements of skeletal structures. The 3D tumor position is calculated from these 2D displacements by 2D-to-3D back-projection
and geometric transformation. Experiments on clinical data were conducted to evaluate the performance of
the improved registration. The results showed that spine segmentation substantially improves image registration
performance.
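The 2D-to-3D back-projection step can be illustrated with a minimal sketch. It assumes an idealized geometry of two orthogonal parallel projections (the actual imaging geometry and calibration are not specified here), and all names and matrices below are hypothetical placeholders:

```python
import numpy as np

# Assumed idealized geometry (hypothetical, not the actual system
# calibration): camera A projects onto the x-z plane, camera B onto
# the y-z plane. Each 2x3 matrix maps a 3D displacement to the 2D
# displacement observed in that projection.
P_A = np.array([[1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0]])   # camera A sees (x, z)
P_B = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])   # camera B sees (y, z)

def back_project(d_a, d_b):
    """Recover the 3D displacement from two 2D displacements by stacking
    both projection equations and solving the overdetermined 4x3 system
    in a least-squares sense (the shared z component is averaged)."""
    A = np.vstack([P_A, P_B])           # 4x3 system matrix
    b = np.concatenate([d_a, d_b])      # 4 measured components
    d3, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d3

# A 3D displacement of (2, -1, 3) mm appears as (2, 3) in camera A
# and (-1, 3) in camera B; back-projection recovers it.
print(back_project(np.array([2.0, 3.0]), np.array([-1.0, 3.0])))
```

With truly orthogonal projections the system is well conditioned; the least-squares solve simply reconciles the component that both imagers observe.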
Fiducial tracking is a widely used method in image guided procedures such as image guided radiosurgery and
radiotherapy. Our group has developed a new fiducial identification algorithm, the concurrent Viterbi with association
(CVA) algorithm, based on a modified hidden Markov model (HMM), and reported our initial results previously. In this
paper, we present an extensive performance evaluation of this novel algorithm using phantom testing and clinical images
acquired during patient treatment. For a common three-fiducial case, the algorithm execution time is less than two
seconds. Testing with a collection of images from more than 35 patient treatments, comprising a total of more than 10,000
image pairs, we find that the success rate of the new algorithm is better than 99%. In the tracking test using a phantom,
the phantom is moved to a variety of positions with translations up to 8 mm and rotations up to 4 degrees. The new
algorithm correctly tracks the phantom motion, with an average translation error of less than 0.5 mm and rotation error
less than 0.5 degrees. These results demonstrate that the new algorithm is very efficient, robust, easy to use, and capable
of tracking fiducials in a large region of interest (ROI) at a very high success rate with high accuracy.
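The dynamic-programming core of such an HMM-based identification can be sketched as plain max-sum Viterbi over per-fiducial candidate detections. This is not the CVA algorithm itself (its association handling and scoring are not detailed here); all scores and names are hypothetical:

```python
import numpy as np

def viterbi(unary, pairwise):
    """Plain max-sum Viterbi decode.
    unary[t][i]: score of assigning candidate blob i to fiducial t.
    pairwise[t][i][j]: geometric compatibility of candidate i (fiducial t)
    with candidate j (fiducial t+1), e.g. from known inter-fiducial
    spacing. Returns the best candidate index for each fiducial."""
    T = len(unary)
    score = np.asarray(unary[0], dtype=float)
    back = []
    for t in range(1, T):
        # total[i, j]: best score ending at candidate j of fiducial t
        # when fiducial t-1 used candidate i.
        total = (score[:, None]
                 + np.asarray(pairwise[t - 1], dtype=float)
                 + np.asarray(unary[t], dtype=float)[None, :])
        back.append(np.argmax(total, axis=0))
        score = np.max(total, axis=0)
    path = [int(np.argmax(score))]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy example: three fiducials, two candidate detections each; the
# pairwise terms favor geometrically consistent (same-index) pairs.
unary = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
pairwise = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]]
print(viterbi(unary, pairwise))   # → [0, 0, 0]
```

Note how the geometric prior overrides the weak unary preference of the middle fiducial, which is the point of decoding all fiducials jointly rather than picking the brightest blob per fiducial.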
Generation of digitally reconstructed radiographs (DRRs) is a critical part of the 2D-3D image registration used for patient position alignment in image-guided radiotherapy and radiosurgery. The DRRs are generated from a pre-operative CT scan and used as references to match the X-ray images for determining the change in patient position. Skeletal structures are the primary image features that facilitate the registration between the DRR and X-ray images. In this paper, we present a method to enhance skeletal features of spinal regions in DRRs. The attenuation coefficient at each voxel is first calculated by applying an exponential transformation to the original attenuation coefficient in the CT scan; this preprocessing step is performed prior to DRR generation. The DRR is then generated by integrating the newly calculated attenuation coefficients along the ray that connects the X-ray source and the pixel in the DRR. Finally, the DRR is further enhanced using a weighted top-hat filter. Because no original CT information is lost during this process, even small skeletal features contributed by the low-intensity portions of the CT data are preserved in the enhanced DRRs. Experiments on clinical data were conducted to compare the image quality of DRRs with and without enhancement. The results showed that the image contrast of skeletal features in the enhanced DRRs is significantly improved. This method has the potential to support more accurate and robust 2D-3D image registration.
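The three stages of this pipeline can be sketched roughly as below. The exact exponential mapping, its parameters, and the top-hat weighting used in the paper are not given here, so every constant is a hypothetical placeholder, and the parallel-ray integration is a toy stand-in for true source-to-pixel ray casting:

```python
import numpy as np
from scipy import ndimage

ALPHA = 0.02   # hypothetical transform strength; the paper's exact
               # exponential mapping and parameters are not stated here

def enhance_attenuation(ct_hu):
    """Exponential transformation of CT attenuation values (a stand-in
    for the preprocessing step): high (bony) values are amplified
    relative to soft tissue, and the mapping is monotonic, so no
    original CT information is discarded."""
    mu = np.clip(ct_hu + 1000.0, 0.0, None)   # crude HU -> attenuation proxy
    return np.exp(ALPHA * mu) - 1.0

def drr_along_axis(volume, axis=0):
    """Toy DRR: integrate the (transformed) attenuation along parallel
    rays down one axis. A real DRR traces diverging rays from the X-ray
    source to each detector pixel."""
    return volume.sum(axis=axis)

def weighted_top_hat(img, size=5, weight=0.5):
    """Add back a weighted white top-hat (image minus grey opening) to
    boost small, bright skeletal features."""
    opened = ndimage.grey_opening(img, size=(size, size))
    return img + weight * (img - opened)
```

Because the transform is monotonic and applied before ray integration, high-attenuation bone dominates the line integrals without thresholding away soft-tissue detail, matching the paper's claim that low-intensity features are preserved.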
We have developed an automated skull tracking method to perform near real-time patient alignment and position correction during CyberKnife image-guided intracranial radiosurgery. Digitally reconstructed radiographs (DRRs) are first generated offline from a CT study before treatment and are used as reference images of the patient position. Two orthogonal projection X-ray images are then acquired at the time of patient alignment or treatment. Multi-phase registration is used to register the DRRs with the X-ray images. The registration in each projection is carried out independently; the results are then combined and converted to a 3D rigid transformation. The in-plane transformation and the out-of-plane rotations are estimated using different search methods, including multi-resolution matching, steepest descent minimization, and one-dimensional search. Two similarity measures, optimized pattern intensity and sum of squared differences (SSD), are applied at different search phases to balance accuracy and computation speed. Experiments on an anthropomorphic skull phantom showed that the tracking accuracy (RMS error) is better than 0.3 mm for each translation and better than 0.3 degrees for each rotation, and the targeting accuracy (the clinically relevant accuracy) tested with the CyberKnife system is better than 1 mm. The computation time required for the tracking algorithm is a few seconds.
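The SSD similarity measure and an in-plane translation search can be sketched as follows. This is a minimal single-resolution, integer-shift illustration, not the multi-phase scheme described above (pattern intensity, multi-resolution pyramids, and out-of-plane estimation are omitted); all names are hypothetical:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two images; lower is better."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.sum(d * d))

def best_in_plane_shift(drr, xray, search=3):
    """Exhaustive search over integer in-plane translations within
    +/- search pixels, comparing the cyclically shifted DRR against the
    X-ray image via SSD. (A toy stand-in for the matching phase; real
    images are cropped at the borders, not wrapped.)"""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(drr, (dy, dx), axis=(0, 1))
            s = ssd(shifted, xray)
            if s < best:
                best, best_shift = s, (dy, dx)
    return best_shift

# Synthetic check: an "X-ray" that is the DRR shifted by (1, 2) pixels
# should be recovered exactly (SSD reaches zero there).
rng = np.random.default_rng(0)
drr = rng.random((16, 16))
xray = np.roll(drr, (1, 2), axis=(0, 1))
print(best_in_plane_shift(drr, xray))   # → (1, 2)
```

In practice a cheap measure like SSD is useful in early coarse phases, while a more discriminative measure such as pattern intensity is reserved for the fine phases, which is the accuracy/speed trade-off the abstract describes.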