Paper
28 May 2001
Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom
Calvin R. Maurer Jr., Frank Sauer, Bo Hu, Benedicte Bascle, Bernhard Geiger, Fabian Wenzel, Filippo Recchi, Torsten Rohlfing, Christopher R. Brown, Robert J. Bakos, Robert J. Maciunas, Ali R. Bani-Hashemi
Abstract
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears an HMD that presents the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. At this point we are concentrating specifically on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating the overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized the brain anatomy as being internal and could generally appreciate the spatial relationships among the various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
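The overlay step described in the abstract, using the tracked HMD pose to determine where segmented preoperative structures appear in the live video, amounts to chaining a model-to-tracker registration, the tracked camera pose, and a pinhole camera projection. The following Python sketch illustrates that chain under stated assumptions; the function, transform names, and intrinsic values are illustrative and are not taken from the paper.

# Minimal sketch (not the authors' implementation) of the pose-based overlay
# step: projecting segmented preoperative structures into one video camera's
# image.  All names, matrices, and parameter values are illustrative assumptions.

import numpy as np

def project_structure(points_model, T_world_from_model, T_camera_from_world, K):
    """Project 3D model points (N x 3, in preoperative model coordinates) into
    2D pixel coordinates of one video camera.

    T_world_from_model  : 4x4 rigid transform from model space to tracker
                          (world) space, e.g. from patient registration.
    T_camera_from_world : 4x4 rigid transform from world space to the camera,
                          derived from the tracked HMD pose and a fixed
                          camera-to-tracker calibration.
    K                   : 3x3 camera intrinsic matrix.
    """
    # Homogeneous model points, shape (N, 4)
    pts_h = np.hstack([points_model, np.ones((points_model.shape[0], 1))])
    # Chain the transforms: model -> world -> camera
    pts_cam = (T_camera_from_world @ T_world_from_model @ pts_h.T)[:3, :]
    # Pinhole projection and perspective divide
    uvw = K @ pts_cam
    return (uvw[:2, :] / uvw[2, :]).T  # (N, 2) pixel coordinates

if __name__ == "__main__":
    # Toy check: a point 0.5 m in front of the camera, with identity poses and
    # a simple intrinsic matrix, projects to the image center.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    identity = np.eye(4)
    pt = np.array([[0.0, 0.0, 0.5]])
    print(project_structure(pt, identity, identity, K))  # -> [[320. 240.]]

In a video see-through design, zero relative lag can be obtained by rendering the virtual structures onto the same buffered video frame whose pose was used for the overlay, so the real and virtual content share the same pipeline delay (here about three frames at 30 frames/sec, i.e., roughly 100 ms).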
© (2001) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Calvin R. Maurer Jr., Frank Sauer, Bo Hu, Benedicte Bascle, Bernhard Geiger, Fabian Wenzel, Filippo Recchi, Torsten Rohlfing, Christopher R. Brown, Robert J. Bakos, Robert J. Maciunas, and Ali R. Bani-Hashemi "Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom", Proc. SPIE 4319, Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures, (28 May 2001); https://doi.org/10.1117/12.428086
KEYWORDS
Cameras
Head
Video
Visualization
Computed tomography
Head-mounted displays
Brain
