For those experiencing severe-to-profound sensorineural hearing loss, the cochlear implant (CI) is the preferred treatment. Augmented reality (AR)-aided surgery can potentially improve CI procedures and hearing outcomes. Typically, AR solutions for image-guided surgery rely on optical tracking systems to register pre-operative planning information to the display so that hidden anatomy or other important information can be overlaid and co-registered with the view of the surgical scene. In this paper, our goal is to develop a method that permits direct 2D-to-3D registration of the microscope video to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment. Our proposed solution uses surface mapping of a portion of the incus in surgical recordings and estimates the pose of this structure relative to the surgical microscope via the perspective-n-point (PnP) algorithm. This registration can then be applied to pre-operative segmentations of other anatomy-of-interest, as well as to the planned electrode insertion trajectory, to co-register this information for the AR display. Our results show an average rotation error of less than 25 degrees and translation errors of less than 2 mm, 3 mm, and 0.55% along the x, y, and z axes, respectively. Our proposed method has the potential to generalize to other surgical procedures while requiring only a monocular microscope intraoperatively.
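As a rough illustration of the registration step described in this abstract, the sketch below shows how a PnP pose estimate can be obtained with OpenCV once 2D-3D correspondences between the microscope frame and the CT-space incus surface are available. The point correspondences, camera intrinsics, and solver choice here are placeholder assumptions, not the authors' actual pipeline.

```python
# Hypothetical PnP pose-estimation sketch (not the paper's implementation).
import numpy as np
import cv2

# 3D points on the incus surface in pre-operative CT coordinates, in mm (made-up values).
object_points = np.array([
    [10.2, 34.1, 22.7],
    [11.0, 33.8, 23.1],
    [ 9.8, 34.6, 22.2],
    [10.7, 35.0, 22.9],
    [11.4, 34.3, 21.8],
    [ 9.5, 33.5, 23.4],
], dtype=np.float64)

# Corresponding 2D pixel locations in the microscope video frame (made-up values).
image_points = np.array([
    [512.3, 410.8],
    [540.1, 405.2],
    [498.7, 432.4],
    [530.9, 441.0],
    [555.6, 420.3],
    [487.2, 398.5],
], dtype=np.float64)

# Intrinsics of the calibrated surgical microscope (assumed known from calibration).
camera_matrix = np.array([
    [2500.0,    0.0, 960.0],
    [   0.0, 2500.0, 540.0],
    [   0.0,    0.0,   1.0],
], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume lens distortion is negligible or pre-corrected

# Estimate the rigid pose (rotation, translation) of the incus relative to the microscope.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)

# The recovered transform could then be used to project other pre-operative
# segmentations (e.g. the planned electrode trajectory) into the microscope view.
R, _ = cv2.Rodrigues(rvec)
print("Rotation matrix:\n", R)
print("Translation (mm):", tvec.ravel())
```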
Selective amygdalohippocampectomy (SelAH) for mesial temporal lobe epilepsy (mTLE) involves the resection of the anterior hippocampus and the amygdala. A recent study related to SelAH reports that, among 168 patients for whom two-year Engel outcomes data were available, 73% had Engel I outcomes (free of disabling seizures); 16.6% had Engel II outcomes (rare disabling seizures); 4.7% had Engel III outcomes (worthwhile improvement); and 5.3% had Engel IV outcomes (no worthwhile improvement). Success rates also vary greatly among sites. Possible explanations for the variability in outcomes are the resected volume and/or the subregions of the hippocampus and amygdala that are resected. To explore this hypothesis, accurate segmentation of the resection cavity needs to be performed on a large scale. This is, however, a difficult and time-consuming task that requires expertise. Here we explore using nnUNET to perform this task. Inspired by Youngeun, a level-set loss is used in addition to nnUNET's original Dice and cross-entropy losses to better capture the cavity boundaries. We show that, even with a modest-sized training set (25 volumes), the median Dice value between automated and manual segmentations is 0.88, which suggests that automatic and accurate segmentation of the resection cavity is achievable.
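To make the loss construction concrete, the sketch below shows one common way to add a region-based level-set (Mumford-Shah style) term to a Dice plus cross-entropy segmentation loss in PyTorch. The specific formulation, weighting, and tensor layout are assumptions for illustration and may differ from the loss actually used with nnUNET in this work.

```python
# Hypothetical combined segmentation loss: Dice + cross-entropy + level-set term.
import torch
import torch.nn.functional as F

def soft_dice_loss(probs, target_onehot, eps=1e-6):
    # probs, target_onehot: (B, C, D, H, W)
    dims = tuple(range(2, probs.ndim))
    inter = (probs * target_onehot).sum(dims)
    union = probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def level_set_loss(probs, image, tv_weight=1e-3):
    # Region term: penalize intensity variance within each soft class region.
    # image: (B, 1, D, H, W); probs: (B, C, D, H, W) softmax output.
    region = 0.0
    for k in range(probs.shape[1]):
        p_k = probs[:, k:k + 1]
        c_k = (image * p_k).sum() / (p_k.sum() + 1e-6)  # mean intensity of class k
        region = region + ((image - c_k) ** 2 * p_k).mean()
    # Length term: total variation of the soft masks, encouraging smooth boundaries.
    tv = (probs[:, :, 1:].sub(probs[:, :, :-1]).abs().mean()
          + probs[:, :, :, 1:].sub(probs[:, :, :, :-1]).abs().mean()
          + probs[:, :, :, :, 1:].sub(probs[:, :, :, :, :-1]).abs().mean())
    return region + tv_weight * tv

def combined_loss(logits, target, image, w_ls=0.1):
    # logits: (B, C, D, H, W); target: (B, D, H, W) integer labels; image: (B, 1, D, H, W)
    probs = torch.softmax(logits, dim=1)
    target_onehot = F.one_hot(target, num_classes=logits.shape[1]) \
                        .permute(0, 4, 1, 2, 3).float()
    ce = F.cross_entropy(logits, target)
    dice = soft_dice_loss(probs, target_onehot)
    return ce + dice + w_ls * level_set_loss(probs, image)
```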