Abdominal aortic aneurysms are a common disease of the aorta and are treated minimally invasively in about 33 % of cases. Treatment is performed by placing a stent graft in the aorta to prevent the aneurysm from growing. Guidance during the procedure is facilitated by fluoroscopic imaging. Unfortunately, due to the low soft-tissue contrast of X-ray images, the aorta itself is not visible without the application of contrast agent. To overcome this issue, advanced techniques allow the aorta to be segmented from pre-operative data, such as CT or MRI. Overlay images are subsequently rendered from a mesh representation of the segmentation and fused with the live fluoroscopic images with the aim of improving the visibility of the aorta during the procedure. Current overlay images typically use forward projections of the mesh representation. This fusion technique shows deficiencies in both the 3-D information conveyed by the overlay and the visibility of the fluoroscopic image underneath. We present a novel approach to improve the visualization of the overlay images using non-photorealistic rendering techniques. Our method preserves the visibility of the devices in the fluoroscopic images while, at the same time, providing 3-D information about the fused volume. An evaluation by clinical experts shows that our method is preferred over current state-of-the-art overlay techniques. We compared three visualization techniques to the standard visualization; our silhouette approach was chosen by the clinical experts in 67 % of the cases, clearly showing the superiority of the new approach.
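The abstract gives no implementation details; as a rough illustration of the silhouette-style non-photorealistic rendering referred to above, the following Python sketch extracts silhouette edges from a triangle mesh of the segmented aorta under an assumed orthographic view direction. The function name, the orthographic simplification, and the winding convention are ours, not the authors'.

```python
import numpy as np

def silhouette_edges(vertices, faces, view_dir):
    """Return mesh edges shared by one front-facing and one back-facing
    triangle with respect to the view direction (classic silhouette test).

    vertices : (V, 3) float array of vertex positions
    faces    : (F, 3) int array of vertex indices (consistent winding assumed)
    view_dir : (3,) viewing direction, orthographic approximation
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)

    # Per-face normals and front/back classification (sign depends on winding).
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    front = n @ np.asarray(view_dir, dtype=float) < 0.0

    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for fi, (a, b, c) in enumerate(f):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)

    # Silhouette edges separate a front-facing from a back-facing triangle.
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and front[fs[0]] != front[fs[1]]]
```

Projecting only such edges, rather than the full forward projection of the mesh, is the general idea behind keeping the underlying fluoroscopic image visible while still conveying 3-D shape.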
Two-dimensional roadmapping is considered state-of-the-art for guidewire navigation during endovascular interventions. This paper presents a methodology for extracting the guidewire from a sequence of 2-D roadmap images in near real time. The detected guidewire can be used to improve its visibility on noisy fluoroscopic images or to back-project the guidewire into a registered 3-D vessel tree. A lineness filter based on the Hessian matrix is used to detect only those line structures in the image that lie within the vessel tree. Loose wire fragments are properly linked by a novel connection method fulfilling clinical processing requirements. We show that Dijkstra's algorithm can be applied to efficiently compute the optimal connection path. The entire guidewire is finally approximated by a B-spline curve in a least-squares manner. The proposed method is both integrated into a commercial clinical prototype and evaluated on five different patient data sets containing up to 249 frames per image series.
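The abstract above names three building blocks: a Hessian-based lineness filter, Dijkstra linking of wire fragments, and a least-squares B-spline fit. As a hedged illustration of the first block only, here is a minimal Frangi-style 2-D lineness measure; the exact filter, scales, and parameters used in the paper are not specified here, and the values below (sigma, beta, c) are placeholders.

```python
import numpy as np
from scipy import ndimage

def lineness(image, sigma=1.5, beta=0.5, c=15.0):
    """Frangi-style lineness from the eigenvalues of the Gaussian-smoothed
    Hessian. Bright ridges on a dark background are assumed; for dark
    guidewires on bright fluoroscopy, negate the image first.
    """
    img = image.astype(float)

    # Second-order Gaussian derivatives (entries of the Hessian).
    hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))

    # Eigenvalues of the 2x2 Hessian, ordered so that |l1| <= |l2|.
    disc = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    mean = (hxx + hyy) / 2.0
    l1, l2 = mean + disc, mean - disc
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)

    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob-vs-line ratio
    s2 = l1 ** 2 + l2 ** 2                   # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                          # keep only bright ridges
    return v
```

Restricting the response to pixels inside the projected vessel tree, linking the surviving fragments with Dijkstra's algorithm, and fitting the B-spline are further steps that this sketch deliberately leaves out.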
Nowadays, hepatic artery catheterizations are performed under live 2D X-ray fluoroscopy guidance, where the visualization of blood vessels requires the injection of contrast agent. The projection of a 3D static roadmap of the complex branches of the liver artery system onto 2D fluoroscopy images can aid catheter navigation and minimize the use of contrast agent. However, the presence of significant hepatic motion due to the patient's respiration necessitates real-time motion correction in order to align the projected vessels. The objective of our work is to introduce dynamic roadmaps into the clinical workflow for hepatic artery catheterizations and to allow continuous visualization of the vessels in 2D fluoroscopy images without additional contrast injection. To this end, we propose a method for real-time estimation of the apparent displacement of the hepatic arteries in 2D fluoroscopy images. Our approach approximates the respiratory motion of the hepatic arteries from the catheter motion in 2D fluoroscopy images. The proposed method consists of two main steps. First, a filter is applied to the 2D fluoroscopy images in order to enhance the catheter and reduce the noise level. Then, a part of the catheter is tracked in the filtered images using template matching. A dynamic template update strategy makes our method robust to deformations. The accuracy and robustness of the algorithm are demonstrated by experimental studies on 22 simulated and 4 clinical sequences containing 330 and 571 image frames, respectively.
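The template-matching step with a dynamic template update can be sketched as follows. This is an illustrative OpenCV-based version under our own assumptions (normalized cross-correlation, exponential blending of the template with weight alpha), not the exact scheme of the paper, and it presumes the frames have already passed the catheter-enhancement filter mentioned above.

```python
import numpy as np
import cv2

def track_catheter(frames, init_box, alpha=0.1):
    """Track a catheter segment across filtered fluoroscopy frames by
    template matching with a running (dynamic) template update.

    frames   : iterable of 2-D images, already enhanced/denoised
    init_box : (x, y, w, h) of the catheter segment in the first frame
    alpha    : blending weight of the template update (0 = static template)
    """
    frames = iter(frames)
    x, y, w, h = init_box
    template = next(frames)[y:y + h, x:x + w].astype(np.float32)
    positions = [(x, y)]

    for frame in frames:
        img = frame.astype(np.float32)
        # Normalized cross-correlation is robust to global intensity changes.
        score = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(score)
        positions.append((bx, by))

        # Dynamic update: blend the current appearance into the template so
        # slow deformations of the catheter do not make the tracker drift.
        patch = img[by:by + h, bx:bx + w]
        if patch.shape == template.shape:
            template = (1.0 - alpha) * template + alpha * patch
    return positions
```

The displacement of the tracked segment relative to the first frame can then serve as the apparent respiratory motion applied to the projected 3D roadmap.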
In this paper, we propose a multi-modal non-rigid 2D-3D registration technique. This method allows a non-rigid alignment of a patient's pre-operative computed tomography (CT) volume to a few intra-operatively acquired fluoroscopic X-ray images obtained with a C-arm system. This multi-modal approach focuses in particular on the 3D alignment of high-contrast reconstructed volumes with intra-interventional low-contrast X-ray images in order to make use of up-to-date information for surgical guidance and other interventions. The key issue of non-rigid 2D-3D registration is how to define the distance measure between high-contrast 3D data and low-contrast 2D projections. In this work, we use algebraic reconstruction theory to handle this problem. We modify the Euler-Lagrange equation by introducing a new 3D force. This external force term is computed from the residual of the algebraic reconstruction procedure. In the multi-modal case we replace the residual between the digitally reconstructed radiographs (DRRs) and observed X-ray images with a statistics-based distance measure. We integrate the algebraic reconstruction technique into a variational registration framework, so that the 3D displacement field is driven to minimize the reconstruction distance between the volumetric data and its 2D projections using mutual information (MI). The benefits of this 2D-3D registration approach are its scalability in the number of X-ray reference images used and the proposed distance measure, which can also handle low-contrast fluoroscopic images. Experimental results are presented on both artificial phantom and 3D C-arm CT images.
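As a small, self-contained illustration of the statistical distance mentioned above (not the full variational scheme with the algebraic-reconstruction force), the following sketch computes a histogram-based mutual information value between a DRR and an observed X-ray image; the bin count is an arbitrary choice of ours.

```python
import numpy as np

def mutual_information(drr, xray, bins=64):
    """Histogram-based mutual information between a digitally reconstructed
    radiograph (DRR) and an observed X-ray image. In the multi-modal case a
    statistical measure of this kind replaces the plain intensity residual.
    """
    joint, _, _ = np.histogram2d(drr.ravel(), xray.ravel(), bins=bins)
    pxy = joint / joint.sum()                     # joint probability
    px = pxy.sum(axis=1, keepdims=True)           # marginal of the DRR
    py = pxy.sum(axis=0, keepdims=True)           # marginal of the X-ray
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a registration loop, a quantity of this kind would be evaluated per reference view; how it is turned into the 3D external force via the algebraic reconstruction residual is the core of the paper and is not reproduced here.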
Breast cancer diagnosis may be improved by optical fluorescence imaging techniques in the near-infrared wavelength range. We have shown that the recently proposed space-space MUSIC (multiple signal classification) algorithm allows the 3-D localization of focal fluorophore-tagged lesions in a turbid medium from 2-D fluorescence data obtained from laser excitations at different positions. The data are assumed to be measured with two parallel planar sensor arrays on the top and bottom of the medium. The laser sources are integrated at different positions in one of the planes. The space-space data are arranged into an M×N matrix (M, number of sensors; N, number of excitation sources). A singular-value decomposition (SVD) of this matrix yields the detectable number of spot regions with linearly independent behavior with respect to the laser excitation positions and thus allows definition of a signal subspace. Matches between this signal subspace and data from model spots are tested at scanned points in a model medium viewed as the breast region under study. The locations of best matches are then considered the centers of gravity of focal lesions. The optical model used was unbounded and optically homogeneous. Nevertheless, simulated spots in bounded, inhomogeneous media modeling the breast could be localized accurately.
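A minimal numerical sketch of the subspace test described above is given below. It assumes that the M×N space-space matrix and the model responses ("model spots") at the scan points are already available; the lead-field model, the normalization, and the choice of the signal-subspace dimension from the singular values are simplifications on our part.

```python
import numpy as np

def music_pseudospectrum(data, model_spots, n_signal):
    """Space-space MUSIC sketch.

    data        : (M, N) matrix, M sensors x N laser excitation positions
    model_spots : (P, M) array; row p is the modeled sensor response of a
                  unit fluorophore at scan point p (e.g. from an unbounded,
                  homogeneous optical model)
    n_signal    : signal-subspace dimension, read off from the singular values

    Returns a pseudospectrum of length P; its peaks mark candidate lesion
    centers of gravity.
    """
    # Signal subspace spanned by the leading left singular vectors.
    u, s, _ = np.linalg.svd(data, full_matrices=False)
    signal = u[:, :n_signal]                              # (M, n_signal)

    g = model_spots / np.linalg.norm(model_spots, axis=1, keepdims=True)
    # Fraction of each normalized model vector lying in the signal subspace;
    # the pseudospectrum peaks where the model matches the subspace.
    energy = np.sum((g @ signal) ** 2, axis=1)
    return 1.0 / np.clip(1.0 - energy, 1e-12, None)
```

The number of significant singular values in s corresponds to the detectable number of linearly independent spot regions mentioned in the abstract.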
We present a novel method, space-space MUSIC (MUltiple SIgnal Classification), to three-dimensionally localize focal fluorophore-tagged lesions, activated sequentially by different laser source positions, from multi-sensor fluorescence data obtained from a single measurement plane. Matches between a signal subspace derived from the measured data and data from model spots allow 3D determination of the centers of gravity of fluorescence regions. Simulated spots in bounded, inhomogeneous media could be localized accurately. The algorithm has been shown to be robust against patient-dependent parameters, such as optical background parameters, and it does not need to take the medium boundaries into account.