Purpose: 3D transesophageal echocardiography (TEE) has become an important modality for pre- and peri-operative imaging of valvular heart disease. TEE gives excellent visualization of valve morphology in 3D renderings. By convention, 3D TEE images are reformatted into three standard views. We describe a method for automatically calculating the parameters that define the standard views from 3D TEE images, using no manual input.
Approach: An algorithm was designed to find the center of the mitral valve and the left ventricular outflow tract (OT); these parameters define the three-chamber view. The problem was modeled as a state-estimation problem in which a 3D model was deformed based on shape priors and edge detection using a Kalman filter. The algorithm is capable of running in real time after initialization.
Results: The algorithm was validated by comparing the automatic alignments of 106 TEE images against manually placed landmarks. The median error for determining the mitral valve center was 7.1 mm, and the median error for determining the left ventricular OT orientation was 13.5 deg.
Conclusion: The algorithm is an accurate tool for automating the process of finding standard views in TEE images of the mitral valve.
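The Kalman-filter-based state estimation described above can be illustrated with a standard measurement update step. The sketch below is a minimal, generic Kalman update in Python; the toy state vector, the identity measurement model, and the noise levels are illustrative placeholders, not the parameters used in the paper:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update: fuse edge-detection
    measurements z into the model state x with covariance P."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy state: [mv_center_x, mv_center_y, mv_center_z, lvot_angle]
x = np.zeros(4)
P = np.eye(4)
H = np.eye(4)                          # placeholder: direct measurement model
R = 0.1 * np.eye(4)
z = np.array([7.0, 1.0, -2.0, 0.3])    # simulated edge-detection measurement
x, P = kalman_update(x, P, z, H, R)
```

In the actual method, `H` would map the deformable-model parameters to the detected edge positions, and the update would run once per frame, which is what makes real-time operation feasible.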
KEYWORDS: 3D modeling, Ultrasonography, Data modeling, Image segmentation, 3D image processing, Heart, Solid modeling, Image processing, Filtering (signal processing), OSLO
Purpose: In recent years, there has been increased clinical interest in the right ventricle (RV) of the heart. RV dysfunction is an important prognostic marker for several cardiac diseases, and accurate modeling of the RV shape is important for estimating its performance. We have created computationally efficient models that allow for accurate estimation of the RV shape.
Approach: Previous approaches to cardiac shape modeling, including modeling of the RV geometry, have used Doo–Sabin surfaces. Doo–Sabin surfaces allow efficient computation and adapt well to smooth, organic surfaces. However, they struggle to model sharp corners or ridges without many control nodes. We modified the Doo–Sabin surface to allow for sharpness by weighting vertices and edges instead. This was done in two different ways. For validation, we compared the standard Doo–Sabin and the sharp Doo–Sabin models in modeling the RV shape in 16 cardiac ultrasound images, against a ground truth manually drawn by a cardiologist. A Kalman filter fitted the models to the ultrasound images, and the difference between the volume of the model and the ground truth was measured.
Results: The two modified Doo–Sabin models both outperformed the standard Doo–Sabin model in modeling the RV. On average, the regular Doo–Sabin had an 8-ml error in volume, whereas the sharp models had 7- and 6-ml error, respectively.
Conclusions: Compared with the standard Doo–Sabin, the modified Doo–Sabin models can adapt to a larger variety of surfaces while remaining compact. They were more accurate in modeling the RV shape and may have applications elsewhere.
Accurate modeling of the right ventricle (RV) of the human heart is important for both diagnosis and treatment planning. The RV has a compound convex-concave shape with several sharp edges. While the RV has previously been modeled using the Doo-Sabin method, these models require several extra control nodes to accurately reproduce the relatively sharp edges. The current paper proposes a modified Doo-Sabin method that introduces weighting of vertices and edges, rather than extra nodes, to control sharpness. This work compares standard versus sharp Doo-Sabin models in modeling the RV from 16 3D ultrasound scans, against a ground truth mesh model manually drawn by a cardiologist. The modified, sharp Doo-Sabin method came closer to the ground truth RV model in 11 out of 16 cases and on average showed an 11.54% improvement.
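The idea of controlling sharpness by weighting rather than by adding control nodes can be illustrated in one dimension. The sketch below is a hypothetical Chaikin-style corner-cutting analogue, not the paper's actual Doo-Sabin weighting scheme: each vertex carries a sharpness weight in [0, 1] that pulls its subdivided points back toward the original position, so a weight of 1 preserves a sharp corner while 0 fully smooths it:

```python
import numpy as np

def sharp_subdivide(points, sharpness, steps=3):
    """One-dimensional analogue of sharpness-weighted subdivision:
    Chaikin corner-cutting where each vertex's new positions are
    blended toward its original location by a per-vertex sharpness
    weight in [0, 1] (1 = fully sharp, 0 = fully smooth)."""
    pts = np.asarray(points, float)
    w = np.asarray(sharpness, float)
    for _ in range(steps):
        new_pts, new_w = [], []
        for i in range(len(pts) - 1):
            a, b = pts[i], pts[i + 1]
            # Chaikin's smooth positions at 1/4 and 3/4 of each edge,
            # pulled back toward the parent vertex by its sharpness.
            q = 0.75 * a + 0.25 * b
            r = 0.25 * a + 0.75 * b
            new_pts += [w[i] * a + (1 - w[i]) * q,
                        w[i + 1] * b + (1 - w[i + 1]) * r]
            new_w += [w[i], w[i + 1]]
        pts, w = np.array(new_pts), np.array(new_w)
    return pts
```

Subdividing a V-shaped polyline with sharpness 1 at the apex keeps the corner at its original height, whereas an all-zero sharpness rounds it off, mirroring how the modified surface scheme keeps the RV's ridges without extra nodes.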
Treatment decision for coronary artery disease (CAD) is based on both morphological and functional information. Image fusion of coronary computed tomography angiography (CCTA) and three-dimensional echocardiography (3DE) could combine morphology and function into a single image to facilitate diagnosis. Three semiautomatic feature-based methods for CCTA/3DE registration were implemented and applied on CAD patients. Methods were verified and compared using landmarks manually identified by a cardiologist. All methods were found feasible for CCTA/3DE fusion.
With the advancement of three-dimensional (3-D) real-time echocardiography in recent years, automatic creation of patient-specific geometric models is becoming feasible and important in clinical decision making. However, the vast majority of echocardiographic segmentation methods presented in the literature focus on the left ventricle (LV) endocardial border, leaving segmentation of the right ventricle (RV) a largely unexplored problem, despite the increasing recognition of the RV's role in cardiovascular disease. We present a method for coupled segmentation of the endo- and epicardial borders of both the LV and RV in 3-D ultrasound images. To solve the segmentation problem, we propose an extension of a successful state-estimation segmentation framework with a geometrical representation of coupled surfaces, as well as the introduction of myocardial incompressibility to regularize the segmentation. The method was validated against manual measurements and segmentations in images of 16 patients. Mean absolute distances of 2.8±0.4 mm, 3.2±0.7 mm, and 3.1±0.5 mm between the proposed and reference segmentations were observed for the LV endocardium, RV endocardium, and LV epicardium surfaces, respectively. The method was computationally efficient, with a computation time of 2.1±0.4 s.
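The incompressibility regularization relies on keeping the myocardial volume, enclosed between the endo- and epicardial surfaces, approximately constant over the cycle. A minimal sketch of the underlying volume computation for a closed, outward-oriented triangle mesh, using the divergence theorem (the cube test mesh is only a correctness check, not a cardiac model):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Signed volume of a closed triangle mesh via the divergence
    theorem: sum of signed tetrahedron volumes against the origin.
    Faces must be consistently oriented (outward normals)."""
    v = np.asarray(vertices, float)
    vol = 0.0
    for i, j, k in faces:
        vol += np.dot(v[i], np.cross(v[j], v[k])) / 6.0
    return vol

# Unit cube (8 vertices, 12 outward-oriented triangles) as a check.
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = [(0, 1, 3), (0, 3, 2), (4, 7, 5), (4, 6, 7),   # x = 0 / x = 1
         (0, 5, 1), (0, 4, 5), (2, 3, 7), (2, 7, 6),   # y = 0 / y = 1
         (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]   # z = 0 / z = 1
```

The myocardial volume would then be the epicardial mesh volume minus the endocardial mesh volume, and an incompressibility term penalizes changes in that difference between frames.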
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method, Delay and Standard Deviation (DASD) beamforming, to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue image and the DASD beamformed image simultaneously on a clinical ultrasound scanner. A frame rate of 15 FPS was achieved.
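Frangi filtering classifies voxels as tubular from the eigenvalues of the Gaussian-scale Hessian: a tube has one near-zero and one strongly negative eigenvalue (for bright structures). The sketch below is a simplified 2-D, single-scale vesselness using scipy, not the 3-D multiscale filter run on the scanner; `beta` and `c` are generic tuning constants:

```python
import numpy as np
from scipy import ndimage

def vesselness_2d(img, sigma=2.0, beta=0.5, c=5.0):
    """Simplified 2-D Frangi-style vesselness: eigenvalues of the
    Gaussian-smoothed Hessian separate tubular structures (one small,
    one large-magnitude eigenvalue) from blobs and background."""
    # Second-order Gaussian derivatives (Hessian entries).
    Hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the symmetric 2x2 Hessian, ordered |l1| <= |l2|.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1[swap], l2[swap] = l2[swap], l1[swap]
    Rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob-vs-tube ratio
    S = np.sqrt(l1 ** 2 + l2 ** 2)           # overall structure strength
    v = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                          # bright tubes on dark background only
    return v
```

On an image containing a bright line, the response peaks along the line and vanishes in flat background, which is what makes the filter suitable for isolating a catheter in the beamformed volume.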
KEYWORDS: Image registration, 3D image processing, In vivo imaging, Ultrasonography, Echocardiography, Heart, Transform theory, Data processing, MATLAB, Distance measurement
The use of three-dimensional (3-D) echocardiography is limited by signal dropouts and a narrow field of view. Data compounding is proposed as a solution to overcome these limitations by combining multiple 3-D recordings to form a wide field of view. The first step of the solution requires registration between the recordings in both the spatial and temporal dimensions for dynamic organs such as the heart. Accurate registration between the individual echo recordings is crucial for the quality of compounded volumes. A temporal registration method based on a piecewise one-dimensional cubic B-spline, in combination with a multiscale iterative Farnebäck optical flow method for spatial registration, is described. The temporal registration method was validated on in vivo data sets with annotated timing of mitral valve opening. The spatial registration method was validated using in vivo data and compared to registration with Procrustes analysis using manual contouring as a benchmark. The spatial accuracy was assessed in terms of the mean absolute distance and Hausdorff distance between the left ventricular contours. The results showed that the temporal registration accuracy is in the range of half the time resolution of the echo recordings and that the achieved spatial accuracy of the proposed method is comparable to manual registration.
Registration of multiple 3D ultrasound sectors in order to provide an extended field of view is important for the appreciation of larger anatomical structures at high spatial and temporal resolution. In this paper, we present a method for fully automatic spatio-temporal registration between two partially overlapping 3D ultrasound sequences. The temporal alignment is solved by aligning the normalized cross correlation-over-time curves of the sequences. For the spatial alignment, corresponding 3D Scale Invariant Feature Transform (SIFT) features are extracted from all frames of both sequences independently of the temporal alignment. A rigid transform is then calculated by least squares minimization in combination with random sample consensus. The method is applied to 16 echocardiographic sequences of the left and right ventricles and evaluated against manually annotated temporal events and spatial anatomical landmarks. The mean distances between manually identified landmarks in the left and right ventricles after automatic registration were (mean±SD) 4.3±1.2 mm compared to a reference error of 2.8 ± 0.6 mm with manual registration. For the temporal alignment, the absolute errors in valvular event times were 14.4 ± 11.6 ms for Aortic Valve (AV) opening, 18.6 ± 16.0 ms for AV closing, and 34.6 ± 26.4 ms for mitral valve opening, compared to a mean inter-frame time of 29 ms.
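The temporal part of the method, aligning normalized cross-correlation-over-time curves, can be sketched as follows. This is a simplified illustration assuming an integer-frame offset and using each sequence's NCC against its own first frame as the characteristic curve; the actual method's details may differ:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def ncc_over_time(frames):
    """Characteristic curve: NCC of every frame against frame 0."""
    return np.array([ncc(frames[0], f) for f in frames])

def best_lag(curve_a, curve_b, max_lag=10):
    """Integer temporal offset that best aligns two curves."""
    best, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = curve_a[lag:], curve_b[:len(curve_b) - lag]
        else:
            a, b = curve_a[:lag], curve_b[-lag:]
        n = min(len(a), len(b))
        score = ncc(a[:n], b[:n])
        if score > best_score:
            best, best_score = lag, score
    return best
```

Because the NCC curve is periodic with the cardiac cycle, aligning the curves rather than raw frames makes the temporal offset independent of the spatial overlap between the two sectors.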
In this paper, we present an automatic solution for segmentation and quantification of the left atrium (LA) from 3D cardiac ultrasound. A model-based framework is applied, making use of (deformable) active surfaces to model the endocardial surfaces of cardiac chambers, allowing incorporation of a priori anatomical information in a simple fashion. A dual-chamber model (LA and left ventricle) is used to detect and track the atrio-ventricular (AV) plane, without any user input. Both chambers are represented by parametric surfaces and a Kalman filter is used to fit the model to the position of the endocardial walls detected in the image, providing accurate detection and tracking during the whole cardiac cycle. This framework was tested in 20 transthoracic cardiac ultrasound volumetric recordings of healthy volunteers, and evaluated using manual traces of a clinical expert as a reference. The 3D meshes obtained with the automatic method were close to the reference contours at all cardiac phases (mean distance of 0.03±0.6 mm). The AV plane was detected with an accuracy of −0.6±1.0 mm. The LA volumes assessed automatically were also in agreement with the reference (mean ±1.96 SD): 0.4±5.3 ml, 2.1±12.6 ml, and 1.5±7.8 ml at end-diastolic, end-systolic and pre-atrial-contraction frames, respectively. This study shows that the proposed method can be used for automatic volumetric assessment of the LA, considerably reducing the analysis time and effort when compared to manual analysis.
KEYWORDS: Image registration, Echocardiography, In vivo imaging, Ultrasonography, Image analysis, Automatic alignment, Rigid registration, Time series analysis, Error analysis, 3D image processing, Video, Cardiovascular magnetic resonance imaging, Electroluminescent displays, Heart
Temporal alignment of echocardiographic sequences enables fair comparisons of multiple cardiac sequences by showing corresponding frames at given time points in the cardiac cycle. It is also essential for spatial registration of echo volumes, where several acquisitions are combined to enhance image quality or form a larger field of view. In this study, three different image-based temporal alignment methods were investigated: first, a method based on dynamic time warping (DTW); second, a spline-based method that optimized the similarity between temporal characteristic curves of the cardiac cycle using 1D cubic B-spline interpolation; and third, a piecewise modification of the spline-based method. These methods were tested on in vivo data sets of 19 echo sequences. For each sequence, the mitral valve opening (MVO) time was manually annotated. The results showed that the average MVO timing error for all methods is well under the time resolution of the sequences.
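Of the three methods, DTW is the most self-contained to illustrate. A minimal sketch of the standard dynamic program for two 1-D characteristic curves, using absolute difference as the local cost and no windowing or slope constraints:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping cost between two 1-D sequences, using the
    standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A useful property for cardiac sequences is that DTW absorbs non-uniform speed differences: a sequence and a frame-doubled version of itself have zero warping cost, whereas a plain frame-by-frame comparison would not align them.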
Recent studies show that the response rate to cardiac resynchronization therapy (CRT) could be improved if the left ventricle (LV) is paced at the site of the latest mechanical activation, but away from the myocardial scar. A prototype system for CRT lead placement guidance that combines LV functional information from ultrasound with live x-ray fluoroscopy was developed. Two mean anatomical models, each containing LV epi-, LV endo- and right ventricle endocardial surfaces, were computed from a database of 33 heart failure patients as a substitute for a patient-specific model. The sphericity index was used to divide the observed population into two groups. The distance between the mean and the patient-specific models was determined using a signed distance field metric (reported in mm). The average error values for LV epicardium were −0.4±4.6 and for LV endocardium were −0.3±4.4. The validity of using average LV models for a CRT procedure was tested by simulating coronary vein selection in a group of 15 CRT candidates. The probability of selecting the same coronary branch, when basing the selection on the average model compared to a patient-specific model, was estimated to be 95.3±2.9%. This was found to be clinically acceptable.
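The signed distance field metric used to compare the mean and patient-specific models can be sketched on binary masks. This is a hedged illustration using scipy's Euclidean distance transform; the surface extraction and sign convention here (negative inside, positive outside) are one common choice, not necessarily the paper's exact implementation:

```python
import numpy as np
from scipy import ndimage

def signed_distance(mask, spacing=(1.0, 1.0, 1.0)):
    """Signed distance field of a binary mask: negative inside,
    positive outside (in mm if spacing is mm). spacing must have
    one entry per mask dimension."""
    outside = ndimage.distance_transform_edt(~mask, sampling=spacing)
    inside = ndimage.distance_transform_edt(mask, sampling=spacing)
    return outside - inside

def mean_surface_distance(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Mean of mask_b's signed distance field sampled on mask_a's
    surface voxels: the sign reveals systematic over-/under-estimation."""
    sdf_b = signed_distance(mask_b, spacing)
    surface_a = mask_a & ~ndimage.binary_erosion(mask_a)
    return float(sdf_b[surface_a].mean())
```

For two concentric balls, the metric recovers (up to voxelization) the negative radius difference, matching the intuition that a signed mean near zero with small spread indicates a good average model.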
We present a 3D extension and validation of an intra-operative registration framework that accommodates tissue resection. The framework is based on the bijective Demons method, but instead of regularizing with the traditional Gaussian smoother, we apply an anisotropic diffusion filter with the resection modeled as a diffusion sink. The diffusion sink prevents unwanted Demon forces that originate from the resected area from diffusing into the surrounding area. Another attractive property of the diffusion sink is the resulting continuous deformation field across the diffusion sink boundary, which allows us to move the boundary of the diffusion sink without changing values in the deformation field. The area of resection is estimated by a level-set method evolving in the space of image-intensity disagreements in the intra-operative image domain. A by-product of using the bijective Demons method is that we can also provide an accurate estimate of the resected tissue in the pre-operative image space. Validation of the proposed method was performed on a set of 25 synthetic images. Our experiments show a significant improvement in accommodating resection with the proposed method compared to two other Demons-based methods.
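The effect of a diffusion sink can be sketched with normalized convolution: displacement contributions inside the sink (resection) mask are removed before smoothing, so they cannot diffuse into the surrounding tissue. This is a simplified isotropic-Gaussian stand-in for the paper's anisotropic diffusion filter, intended only to show the masking idea:

```python
import numpy as np
from scipy import ndimage

def sink_regularize(field, sink_mask, sigma=2.0):
    """Gaussian regularization of a displacement field with a
    'diffusion sink': contributions from the sink region are removed
    before smoothing (normalized convolution), so forces originating
    there cannot diffuse into the surrounding area."""
    valid = (~sink_mask).astype(float)
    out = np.empty_like(field)
    for c in range(field.shape[-1]):
        num = ndimage.gaussian_filter(field[..., c] * valid, sigma)
        den = ndimage.gaussian_filter(valid, sigma)
        out[..., c] = np.where(den > 1e-6, num / np.maximum(den, 1e-6), 0.0)
    return out
```

With a spurious displacement placed only inside the sink, the masked smoother leaves the surrounding field untouched, whereas a plain Gaussian smoother would leak the sink forces outward.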
The European research network "Augmented Reality in Surgery" (ARIS*ER) developed a system that supports minimally invasive cardiac surgery based on augmented reality (AR) technology. The system supports the surgical team during aortic endoclamping, where a balloon catheter has to be positioned and kept in place within the aorta. The presented system addresses the two biggest difficulties of the task: lack of visualization and difficulty in maneuvering the catheter.

The system was developed using a user-centered design methodology, with medical doctors, engineers, and human-factors specialists equally involved in all development steps. The system was implemented using the AR framework "Studierstube" developed at TU Graz and can be used to visualize the position of the balloon catheter inside the aorta in real time. The spatial position of the catheter is measured by a magnetic tracking system and superimposed on a 3D model of the patient's thorax. The alignment is made with a rigid registration algorithm. Together with a user-defined target, the spatial position data drives an actuator that adjusts the position of the catheter during initial placement and corrects migrations during the surgery.

Two user studies with a silicone phantom show promising results regarding the usefulness of the system: users performed the placement tasks faster and more accurately than with the current restricted visual support. Animal studies also provided a first indication that the system brings additional value in the real clinical setting. This work represents a major step towards safer and simpler minimally invasive cardiac surgery.
KEYWORDS: Image registration, Finite element methods, Tumors, Tissues, Brain, Chemical elements, Neuroimaging, Medical imaging, Magnetic resonance imaging, Surgery
Intra-operative imaging during neurosurgical procedures facilitates aggressive resections and potentially an increased surgical success rate compared to the traditional approach of relying purely on pre-operative data. However, acquisition of functional images such as fMRI and DTI still has to be performed pre-operatively, which necessitates registration to map them to the intra-operative image space. We present an elastic FEM-based registration algorithm tailored to register pre-operative to intra-operative images where a superficial tumor has been resected. To restrict matching of the cortical brain surface of the pre-operative image with the resected cavity in the intra-operative image, we define a weight function based on the "concavity" of the deformation field. These weights are applied to the load vector, which effectively restricts the unwanted image forces around the resected area from matching the brain surface in the pre-operative image with the surface of the resected cavity. Another novelty of the proposed method is an adaptive multi-level FEM grid. After convergence of the algorithm on one level, the FEM grid is subdivided to add more degrees of freedom to the deformation around areas with a bad match. We present results from applying the algorithm to both 2D synthetic and medical image data and show that the adaptivity of the grid improves both registration accuracy and registration speed, while the inclusion of the weighting function improves the results in the presence of resected tissue.
The European research network "Augmented Reality in Surgery" (ARIS*ER) developed a system that supports percutaneous radio-frequency ablation of liver tumors. The system provides interventionists, during placement and insertion of the RFA needle, with information from pre-operative CT images and real-time tracking data. A visualization tool has been designed that aims to support (1) exploration of the abdomen, (2) planning of the needle trajectory, and (3) insertion of the needle in the most efficient way. This work describes a first evaluation of the system, in which user performance and feedback for two visualization concepts of the tool, needle view and user view, are compared. After being introduced to the system, ten subjects performed three needle placements with both concepts. Task fulfillment rate, time for completion of the task, special incidences, and accuracy of needle placement were recorded and analyzed. The results were mixed, with both beneficial and less favorable effects on user performance and workload for each concept. The effects depended on the characteristics of the intra-operative tasks as well as on task complexity, which varied with tumor location. The results give valuable input for the next design steps.
Minimally invasive therapy (MIT) is one of the most important trends in modern medicine. It includes a wide range of therapies in videoscopic surgery and interventional radiology and is performed through small incisions. It shortens hospital stays by allowing faster recovery and offers substantially improved cost-effectiveness for the hospital and society. However, the introduction of MIT has also led to new problems. The manipulation of structures within the body through small incisions reduces dexterity and tactile feedback. It requires a different approach than conventional surgical procedures, since eye-hand coordination is not based on direct vision, but more predominantly on image guidance via endoscopes or radiological imaging modalities. ARIS*ER is a multidisciplinary consortium developing a new generation of decision support tools for MIT by augmenting visual and sensorial feedback. We present tools based on novel concepts in visualization, robotics, and haptics, providing tailored solutions for a range of clinical applications. Examples from radio-frequency ablation of liver tumors, laparoscopic liver surgery, and minimally invasive cardiac surgery are presented. Demonstrators were developed with the aim of providing a seamless workflow for the clinical user conducting image-guided therapy.
KEYWORDS: Data modeling, Virtual reality, C++, Computer programming, Computing systems, Ultrasonography, Data storage, Augmented reality, Medical imaging, Systems modeling
Applications in the fields of virtual and augmented reality, as well as image-guided medical applications, make use of a wide variety of hardware devices. Existing frameworks for interconnecting low-level devices and high-level application programs do not exploit the full potential for processing events coming from arbitrary sources and are not easily generalizable. In this paper, we introduce a new multi-modal event processing methodology using dynamically typed event attributes for event passing between multiple devices and systems. The existing OpenTracker framework was modified to incorporate a highly flexible and extensible event model, which can store data that is dynamically created and arbitrarily typed at runtime. The main factors impacting the library's throughput were determined, and the performance was shown to be sufficient for most typical applications. Several sample applications were developed to take advantage of the new dynamic event model provided by the library, demonstrating its flexibility and expressive power.
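The flavor of such a dynamically typed event model can be sketched in a few lines. The class and method names below are illustrative, not OpenTracker's actual C++ API: attributes are created and typed at runtime, and a simple bus routes events from arbitrary sources to subscribed handlers:

```python
from typing import Any, Callable, Dict, List

class Event:
    """Multi-modal event whose attributes are created and typed at
    runtime, in the spirit of the dynamic event model described above
    (names and API here are hypothetical)."""
    def __init__(self, source: str):
        self.source = source
        self._attrs: Dict[str, Any] = {}

    def set(self, name: str, value: Any) -> None:
        self._attrs[name] = value          # attribute type fixed at runtime

    def get(self, name: str, default: Any = None) -> Any:
        return self._attrs.get(name, default)

class EventBus:
    """Routes events from arbitrary sources to subscribed handlers."""
    def __init__(self):
        self._handlers: Dict[str, List[Callable[[Event], None]]] = {}

    def subscribe(self, source: str, handler: Callable[[Event], None]) -> None:
        self._handlers.setdefault(source, []).append(handler)

    def publish(self, event: Event) -> None:
        for h in self._handlers.get(event.source, []):
            h(event)
```

A tracker event can then carry a position tuple while a button event carries a boolean, without either type being declared ahead of time, which is the generality the modified framework aims for.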