Brain tissue shift during surgery degrades the accuracy of image-guided neurosurgery (IGNS). To improve the accuracy of the alignment between the patient and the images, finite element model-based non-rigid registration methods have been investigated. The best prior estimate (BPE), the forced displacement method (FDM), the weighted basis solutions (WBS), and the adjoint equations method (AEM) are versions of this approach that have appeared in the literature. In this paper, we present a quantitative comparison study on a set of three patient cases. Three-dimensional displacement data from the brain surface and subsurface were extracted using intraoperative ultrasound (iUS) and intraoperative stereovision (iSV). These data were then used as the "ground truth" in a quantitative study to evaluate the accuracy of the estimates produced by the finite element models. Different types of clinical cases are presented, including distension and a combination of sagging and distension. In each case, the performance of the four methods is compared. The AEM, which recovered 26-62% of the surface brain motion and 20-43% of the subsurface deformation, produced the best fit between the measured data and the model estimates.
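As a rough illustration of the data-model comparison described above (not the authors' implementation), the fraction of measured motion recovered by a model can be computed from measured and model-predicted displacement vectors at the tracked feature points; the function name and the example values below are assumed for illustration only.

```python
# Sketch: quantify how much of the measured brain motion a model estimate
# recovers at tracked feature points (illustrative, not the published code).
import numpy as np

def percent_motion_recovered(measured_disp, predicted_disp):
    """Return the mean percentage of measured displacement recovered.

    measured_disp, predicted_disp : (N, 3) arrays of displacement vectors (mm)
    at the same N feature points, e.g. from iUS/iSV tracking and an FE model.
    """
    residual = measured_disp - predicted_disp          # remaining misfit per point
    measured_mag = np.linalg.norm(measured_disp, axis=1)
    residual_mag = np.linalg.norm(residual, axis=1)
    recovered = 1.0 - residual_mag / np.maximum(measured_mag, 1e-9)
    return 100.0 * recovered.mean()

# Synthetic example values, purely for illustration.
measured = np.array([[4.0, 1.0, 0.5], [6.0, -2.0, 1.0]])
predicted = np.array([[3.0, 0.8, 0.4], [4.5, -1.5, 0.8]])
print(f"{percent_motion_recovered(measured, predicted):.1f}% of motion recovered")
```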
Brain shift during neurosurgery currently limits the effectiveness of stereotactic guidance systems that rely on preoperative image modalities such as magnetic resonance (MR). The authors propose a process for quantifying intraoperative brain shift using spatially tracked freehand intraoperative ultrasound (iUS). First, a distinct feature (tumor, ventricle, cyst, or falx) is segmented from the preoperative MR and a faceted surface is extracted using the marching cubes algorithm. Planar contours are then semi-automatically segmented from two sets of iUS b-planes obtained (a) prior to the dural opening and (b) after the dural opening. These two sets of contours are reconstructed in the reference frame of the MR, forming two sparsely sampled surface descriptions of the same feature segmented from the MR. Using the Iterative Closest Point (ICP) algorithm, discrete estimates of the feature deformation are obtained by point-to-surface matching. Vector subtraction of the matched points then provides sparse deformation data as input to inverse biomechanical brain tissue models. The results of these simulations are used to modify the preoperative MR to account for intraoperative changes. The proposed process has undergone preliminary evaluation in a phantom study and was applied to data from two clinical cases. In the phantom study, the process recovered controlled deformations with an RMS error of 1.1 mm. These results also suggest that clinical accuracy would be on the order of 1-2 mm, which is consistent with prior work by the Dartmouth Image-Guided Neurosurgery (IGNS) group. In the clinical cases, the deformations obtained were used to produce qualitatively reasonable updated guidance volumes.
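A minimal sketch of the matching and vector-subtraction step is given below. It substitutes a single nearest-neighbour query (via a KD-tree) for the full ICP iteration, and the point arrays are assumed inputs already reconstructed in the MR reference frame.

```python
# Sketch only: closest-point matching and vector subtraction to produce sparse
# deformation vectors from two contour point sets (KD-tree stand-in for ICP).
import numpy as np
from scipy.spatial import cKDTree

def sparse_displacements(pre_points, post_points):
    """Estimate sparse deformation vectors between two contour point sets.

    pre_points  : (M, 3) contour points reconstructed before dural opening (mm, MR frame)
    post_points : (N, 3) contour points reconstructed after dural opening (mm, MR frame)
    Returns the pre-opening points and displacement vectors to their nearest
    post-opening counterparts.
    """
    tree = cKDTree(post_points)
    _, idx = tree.query(pre_points)               # nearest-neighbour matching
    matched_post = post_points[idx]
    displacements = matched_post - pre_points     # vector subtraction of matched points
    return pre_points, displacements

# The resulting sparse vectors would then drive an inverse biomechanical model.
```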
KEYWORDS: Data modeling, Brain, Magnetic resonance imaging, Motion models, Ultrasonography, Tissues, Tumors, Error analysis, 3D modeling, Systems modeling
Model-based approaches to correct for brain shift in image-guided neurosurgery systems have shown promising results. Despite the initial success of such methods, the complex mechanical behavior of the brain under surgical loads makes it likely that model predictions could be improved with the incorporation of real-time measurements of tissue shift in the OR. To this end, an inverse method has been developed using sparse data and model constraints to generate estimates of brain motion. Based on methodology from ocean circulation modeling, this computational scheme combines estimates of statistical error in forcing conditions with a least squares minimization of the model-data misfit to directly estimate the full displacement solution. The method is tested on a 2D simulation based on clinical data in which ultrasound images were co-registered to the preoperative MR stack. Calculations from the 2D forward model are used as the 'gold standard' to which the inverse scheme is compared. Initial results are promising, though further study is needed to ascertain its value in 3D shift estimates.
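The following sketch illustrates one simple way such a least-squares estimate could be formed. It assumes a linearized model in which the displacement field responds linearly to forcing parameters through a matrix A, an observation operator H that samples the field at the measurement locations, and scalar noise and prior variances; it is not the published adjoint/representer scheme.

```python
# Illustrative weighted least-squares estimate of forcing parameters p from
# sparse displacement measurements d, assuming u = A @ p and observations H @ u.
import numpy as np

def estimate_forcing(A, H, d, noise_var, prior_var):
    """Solve min_p ||H A p - d||^2 / noise_var + ||p||^2 / prior_var.

    A : (n_dof, n_p) linearized response of displacements to forcing parameters
    H : (n_meas, n_dof) observation operator sampling the displacement field
    d : (n_meas,) sparse displacement measurements
    """
    G = H @ A                                     # maps forcing params to measurements
    lhs = G.T @ G / noise_var + np.eye(G.shape[1]) / prior_var
    rhs = G.T @ d / noise_var
    p = np.linalg.solve(lhs, rhs)
    return p, A @ p                               # forcing estimate and full displacement field
```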
KEYWORDS: Brain, Surgery, Ultrasonography, Neuroimaging, Magnetic resonance imaging, Data modeling, Tissues, Human-machine interfaces, Finite element methods, Head
Image-guided neurosurgery typically relies on preoperative imaging information that is subject to errors resulting from brain shift and deformation in the OR. A graphical user interface (GUI) has been developed to facilitate the flow of data from OR to image volume in order to provide the neurosurgeon with updated views concurrent with surgery. Upon acquisition of registration data for patient position in the OR (using fiducial markers), the Matlab GUI displays ultrasound image overlays on patient specific, preoperative MR images. Registration matrices are also applied to patient-specific anatomical models used for image updating. After displaying the re-oriented brain model in OR coordinates and digitizing the edge of the craniotomy, gravitational sagging of the brain is simulated using the finite element method. Based on this model, interpolation to the resolution of the preoperative images is performed and re-displayed to the surgeon during the procedure. These steps were completed within reasonable time limits and the interface was relatively easy to use after a brief training period. The techniques described have been developed and used retrospectively prior to this study. Based on the work described here, these steps can now be accomplished in the operating room and provide near real-time feedback to the surgeon.
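The coordinate mapping underlying these overlays amounts to applying a 4x4 homogeneous registration transform to points from the anatomical model or ultrasound plane; a minimal sketch, with the transform and point array as assumed inputs, is shown below.

```python
# Sketch: map points from OR (patient) coordinates into preoperative MR
# coordinates using a fiducial-derived 4x4 registration matrix.
import numpy as np

def apply_registration(T_or_to_mr, points_or):
    """Map (N, 3) points from OR coordinates into preoperative MR coordinates."""
    n = points_or.shape[0]
    homogeneous = np.hstack([points_or, np.ones((n, 1))])   # append w = 1
    mapped = (T_or_to_mr @ homogeneous.T).T
    return mapped[:, :3]
```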
Image-guided neurosurgery systems rely on rigid registration of the brain to preoperative images and do not account for the displacement of brain tissue during surgery. Co-registered ultrasound appears to be a promising means of detecting tissue shift in the operating room. Although ultrasound images alone may be insufficient to adequately describe intraoperative brain deformation, they could be used in conjunction with a computational model to predict full-volume deformation. We rigorously test the assumption that co-registered ultrasound is an accurate source of sparse displacement data. Our co-registered ultrasound system is studied in clinical applications as well as in a series of porcine experiments. Qualitative analysis of patient data indicates that ultrasound correctly depicts displaced tissue. The porcine studies demonstrate that features from co-registered ultrasound and CT or MR images are aligned to within approximately 1.7 mm. Tissue tracking in pigs suggests that the magnitude of tissue displacement may be more accurately predicted than the actual location of features. We conclude that co-registered ultrasound is capable of detecting brain tissue shift and that incorporating the displacement data into a computational model appears feasible.
Microscope-based image-guided neurosurgery can be divided into three steps: calibration of the microscope optics; registration of the preoperative images to the operating space; and tracking of the patient and microscope over time. Critical to this overall system is the retention of an accurate camera calibration over time. Classic calibration algorithms are routinely employed to find both intrinsic and extrinsic camera parameters. The accuracy of this calibration, however, is quickly compromised by the complexity of the operating room, the long duration of a surgical procedure, and inaccuracies in the tracking system. To compensate for the changing conditions, we have developed an adaptive procedure that responds to accruing registration error. The approach uses miniature fiducial markers implanted on the bony rim of the craniotomy site, which remain in the field of view of the operating microscope. A simple error function that enforces the registration of the known fiducial markers is used to update the extrinsic camera parameters, and is minimized using gradient descent. This correction procedure reduces RMS registration errors for cortical features on the surface of the brain by an average of 72%, or 1.5 mm, and keeps them below 0.6 mm after each correction throughout the surgical procedure.
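A schematic version of this correction is sketched below. The Euler-angle pose parameterization, step size, and numerical gradients are illustrative choices rather than the authors' implementation; only the idea of descending on a fiducial reprojection error to update the extrinsic parameters is taken from the text.

```python
# Sketch: nudge extrinsic camera parameters so tracked 3D fiducials reproject
# onto their observed image locations (plain numerical gradient descent).
import numpy as np

def rotation_from_euler(rx, ry, rz):
    """Rotation matrix from XYZ Euler angles (radians); illustrative helper."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def reprojection_error(pose, K, fiducials_3d, fiducials_2d):
    """Sum of squared distances between projected and observed fiducials."""
    R, t = rotation_from_euler(*pose[:3]), pose[3:]
    cam = (R @ fiducials_3d.T).T + t              # world -> camera coordinates
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]             # perspective division
    return np.sum((proj - fiducials_2d) ** 2)

def refine_extrinsics(pose0, K, pts3d, pts2d, lr=1e-6, iters=200, eps=1e-6):
    """Numerical gradient descent on the six extrinsic parameters (assumed step size)."""
    pose = np.asarray(pose0, dtype=float).copy()
    for _ in range(iters):
        grad = np.zeros(6)
        for i in range(6):
            d = np.zeros(6)
            d[i] = eps
            grad[i] = (reprojection_error(pose + d, K, pts3d, pts2d)
                       - reprojection_error(pose - d, K, pts3d, pts2d)) / (2 * eps)
        pose -= lr * grad
    return pose
```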
During neurosurgery, intraoperative brain shift compromises the accuracy of image-guided techniques. We are investigating the use of ultrasound as an inexpensive means of obtaining 3D data on subsurface tissue deformation. Measured displacements of easily recognizable features can then be used to drive a computational model that describes the full-volume deformation. Subsurface features identified in the ultrasound image plane are located in world space using a 3D optical tracking system mounted on the ultrasound scanhead. This tracking system is also co-registered with the model space derived from the preoperative MR, allowing the ultrasound image plane to be reconstructed in MR space and the corresponding oblique MR slice to be obtained. The ultrasound image tracker has been calibrated with a novel strategy involving multiple scans of N-shaped wires positioned at several depths. Mean calibration error ranges from 0.43 mm to 0.76 mm in plane and from 0.86 mm to 1.51 mm out of plane for the two ultrasound image scales calibrated. Improved ultrasound calibration and co-registration facilitates subsurface feature tracking as a first step in obtaining model constraints for intraoperative image compensation. Estimating and compensating for brain shift through the low-cost, efficient technology of ultrasound, combined with computational modeling, is feasible and appears to be a promising means of improving intraoperative image guidance.
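The transform chain implied by this co-registration can be sketched as follows; the transform names, pixel spacings, and function signature are illustrative assumptions rather than the calibrated system's actual interface.

```python
# Sketch: map an ultrasound pixel through the probe calibration, the tracked
# probe pose, and the patient registration into preoperative MR coordinates.
import numpy as np

def us_pixel_to_mr(u, v, sx, sy, T_probe_from_image, T_world_from_probe, T_mr_from_world):
    """Map an ultrasound pixel (u, v) to 3D MR coordinates.

    sx, sy : pixel spacing in mm (set by the calibrated image scale)
    T_*    : 4x4 homogeneous transforms from calibration, tracking, and registration
    """
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])   # point on the b-plane (z = 0)
    p_mr = T_mr_from_world @ T_world_from_probe @ T_probe_from_image @ p_image
    return p_mr[:3]
```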
Distortion between the operating field and preoperative images increases as image-guided surgery progresses. Retraction is a typical early-stage event that causes significant tissue deformation and can be modeled as part of an intraoperative compensation strategy. This study compares the predictive power of incremental versus single-step retraction models in the porcine brain. In vivo porcine experiments were conducted in which markers implanted in the brain were tracked in CT scans following known incremental deformations induced by a retractor blade placed interhemispherically. Studies were performed using a 3D consolidation model of brain deformation to investigate the relative predictive benefits of incremental versus single-step retraction simulations. The results show that both models capture greater than 75% of the tissue loading due to retraction. The incremental approach outperforms the single-step method with an average improvement of 1.5%-3%; more importantly, it also preferentially recovers the directionality of movement, providing better correspondence to intraoperative surgical events. A new incremental approach to modeling tissue retraction has been developed and shown to improve the data-model match in retraction experiments in the porcine brain. Incremental retraction modeling is an improvement over previous single-step models and does not incur additional computational overhead. Results in the porcine brain show that even when the overall displacement magnitudes of the two models are similar, directional trends of the displacement field are often significantly improved with the incremental method.
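The distinction between the two loading strategies can be sketched schematically as below, where solve_model is a hypothetical stand-in for a single finite element solve with prescribed retractor-surface displacements and is not part of the published code.

```python
# Schematic comparison of single-step versus incremental retraction loading.
# solve_model(displacement, previous_state=None) is an assumed solver interface.

def single_step(solve_model, total_retraction):
    """Apply the full retractor displacement in one solve."""
    return solve_model(total_retraction)

def incremental(solve_model, total_retraction, n_steps=4):
    """Apply the same displacement as a sequence of smaller increments,
    letting the deformed state from each step condition the next solve."""
    state = None
    increment = total_retraction / n_steps
    for _ in range(n_steps):
        state = solve_model(increment, previous_state=state)
    return state
```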
Compensation for intraoperative tissue motion in the registration of preoperative image volumes to the OR is important for improving the utility of image guidance in the neurosurgical setting. Model-based strategies for neuroimage compensation are appealing because they offer the prospect of retaining the high-resolution preoperative information without the expense and complexity associated with full-volume intraoperative scanning. Further, they present opportunities to integrate incomplete or sparse, partial-volume sampling of the surgical field as a guide for full-volume estimation and subsequent compensation of the preoperative images. While potentially promising, there are a number of unresolved difficulties associated with deploying computational models for this purpose. For example, to date they have only been successful in representing the tissue motion that occurs during the earliest stages of neurosurgical intervention and have not addressed the later, more complex events of tissue retraction and resection. In this paper, we develop a mathematical framework for implementing retraction and resection within the context of finite element modeling of brain deformation using the equations of linear consolidation. Specifically, we discuss the critical boundary conditions applied at the new tissue surfaces created by these respective interventions and demonstrate the ability to model compound events in which updated image volumes are generated in succession to represent the significant occurrences of tissue deformation that take place during the course of surgery. In this regard, we show image compensation for an actual OR case involving the implantation of a subdural electrode array for recording neural activity.