The dentate nucleus (DN) is a gray matter structure deep in the cerebellum involved in motor coordination, sensory input integration, executive planning, language, and visuospatial function. The DN is an emerging biomarker of disease, informing studies that advance pathophysiologic understanding of neurodegenerative and related disorders. The main challenge in defining the DN radiologically is that, like many deep gray matter structures, it has poor contrast on T1-weighted magnetic resonance (MR) images and therefore requires specialized MR acquisitions for visualization. Manual tracing of the DN across multiple acquisitions is resource-intensive and does not scale to large datasets. We describe a technique that automatically segments the DN using deep learning (DL) on common imaging sequences, such as T1-weighted, T2-weighted, and diffusion MR imaging. We trained a DL algorithm that automatically delineates the DN and provides an estimate of its volume. The automatic segmentation achieved higher agreement with the manual labels than template registration or multi-atlas segmentation of manual labels, the current common practices in DN segmentation. Across all sequences, fractional anisotropy (FA) maps achieved the highest mean Dice similarity coefficient (DSC) of 0.83, compared with T1-weighted imaging (DSC = 0.76), T2-weighted imaging (DSC = 0.79), and a multisequence approach (DSC = 0.80). A single-atlas registration approach using the spatially unbiased atlas template of the cerebellum and brainstem (SUIT) achieved a DSC of 0.23, and multi-atlas segmentation achieved a DSC of 0.33. Overall, we propose a method of delineating the DN on clinical imaging that reproduces manual labels with higher accuracy than current atlas-based tools.
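The Dice similarity coefficient reported above quantifies voxelwise overlap between an automatic segmentation and its manual reference, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch in Python/NumPy; the function name and the toy 2D masks are illustrative, not taken from the study:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# hypothetical toy masks: 4 "voxels" vs. 6 "voxels", 4 shared
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```

In practice the same computation would be applied to the 3D DN masks after binarizing the network output and the manual label.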
Conventional optical tracking systems use cameras sensitive to near-infrared (NIR) light and NIR-illuminated or actively illuminating markers to localize instrumentation and the patient in operating room (OR) physical space. This technology is widely used within the neurosurgical theater and is a staple in the standard of care for craniotomy planning. To accomplish this, planning is largely conducted at the time of the procedure in the OR with the patient in a fixed head orientation. We propose a framework to achieve this in the OR without conventional tracking technology, i.e., a “trackerless” approach. Briefly, we investigate an extension of 3D Slicer that combines surgical planning and craniotomy designation. While taking advantage of the well-developed 3D Slicer platform, we implement advanced features to aid the neurosurgeon in planning the location of the anticipated craniotomy relative to the preoperatively imaged tumor in a physical-to-virtual setup, and then subsequently aid the true physical procedure by correlating that physical-to-virtual plan with an intraoperative magnetic resonance imaging-to-physical registered field-of-view display. These steps are performed such that the craniotomy can be designated without conventional optical tracking technology. To test this approach, four experienced neurosurgeons performed experiments on five surgical cases using our 3D Slicer module as well as the conventional procedure for comparison. The results suggest that our planning system provides a simple, cost-efficient, and reliable solution for surgical planning and delivery without conventional tracking technologies. We hypothesize that combining this craniotomy planning approach with our past developments in cortical surface registration and deformation tracking using stereo-pair data from the surgical microscope may provide the foundation of an integrated trackerless surgical guidance platform.
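The image-to-physical registration underlying such a field-of-view display is commonly posed as a least-squares rigid alignment of corresponding fiducial points. The sketch below is a generic Kabsch/Procrustes solution via SVD, offered only as an illustration of the registration concept; it is not the authors' implementation, and the point sets are hypothetical:

```python
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Least-squares rigid transform (R, t) mapping N x 3 source points
    onto corresponding target points (Kabsch algorithm via SVD)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflection solutions
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# hypothetical fiducials in image space, rotated and translated into
# "physical" space, then recovered
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
tgt = src @ Rz.T + np.array([2.0, -1.0, 0.5])
R, t = rigid_register(src, tgt)
```

With noisy fiducial localizations the same formulation yields the minimum fiducial-registration-error rigid transform rather than an exact recovery.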