Rib fractures occur in 10% of all trauma patients. Surgical fixation of fractured ribs is usually performed to improve respiratory mechanics and reduce pain. Rib fractures can be observed on X-ray and CT scans, allowing for better surgical planning. However, translating the surgical plan to the operating table through mental mapping remains challenging because of the lack of visual and tactile feedback when identifying the fractured ribs, especially when the patient is obese or the fractures are subtle. Using augmented reality (AR), a preoperative plan can be visualized intraoperatively in the surgeon's field of view, allowing for a more accurate determination of the size and location of the incision for optimal access to the fractured ribs. This study aims to evaluate the use of AR for guidance in rib fracture procedures. To that end, an AR system based on the HoloLens 2 was developed to visualize surgical incisions overlaid directly on the patient. The system tracks ArUco markers and aligns the preoperative model using landmark-based registration. To evaluate the feasibility of the system, a torso phantom with registration landmarks was 3D scanned for preoperative planning of the incision lines. A user study with 13 participants was conducted to align the preoperative torso model and delineate the visualized incisions on the physical phantom. An independent optical tracking system was then used to measure the accuracy of the delineated incisions against the planned incisions. Over a total of 39 delineated incisions, a mean distance error of 3.6±1.7 mm was achieved. The study shows the potential of AR as an alternative to the traditional palpation approach for locating rib fractures, which has an error of up to 5 cm. Further assessment of the system in clinical settings is needed to demonstrate its clinical applicability.
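The abstract gives no implementation details for the landmark-based registration step. As a hedged illustration only, the following minimal Python sketch computes the rigid transform between paired landmarks with the Kabsch (SVD) method, a standard choice for paired-point registration; the function name, example points, and millimeter scale are illustrative assumptions, not the paper's code.

```python
import numpy as np

def register_landmarks(source, target):
    """Rigid (rotation + translation) registration of paired 3D landmarks
    via SVD (Kabsch algorithm). source, target: (N, 3) arrays of
    corresponding points, e.g., preoperative-model and patient landmarks.
    Illustrative sketch; not the paper's implementation."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    src_centered = source - src_centroid
    tgt_centered = target - tgt_centroid

    # Cross-covariance between the centered point sets; its SVD yields
    # the rotation that best aligns source to target in the least-squares sense.
    H = src_centered.T @ tgt_centered
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Hypothetical usage: align a preoperative model to tracked patient landmarks.
model_pts = np.array([[0, 0, 0], [100, 0, 0], [0, 150, 0], [0, 0, 80]], float)
rotation_z = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])  # synthetic 90° turn
patient_pts = model_pts @ rotation_z.T + [10, 20, 5]
R, t = register_landmarks(model_pts, patient_pts)
fre = np.linalg.norm((model_pts @ R.T + t) - patient_pts, axis=1).mean()
print(f"Mean fiducial registration error: {fre:.3f} mm")
```

In a real system the target landmarks would come from the tracked ArUco markers rather than a synthetic transform, and the residual (fiducial registration error) would be checked before trusting the overlay.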
Alzheimer’s disease (AD) is the most common cause of dementia. It is characterized by irreversible memory loss and degradation of cognitive skills. Amyloid PET imaging is used in the diagnosis of AD to measure the amyloid burden in the brain, quantified by the standardized uptake value ratio (SUVR). However, SUVR measurements vary considerably across scanner models. Standardization and harmonization are therefore required for quantitative assessment of amyloid PET scans in multi-center or longitudinal studies. Conventionally, PET image harmonization has been tackled either by standardization protocols at the time of image reconstruction, or by applying a smoothing function derived from phantom data to bring PET images to a common resolution. In this work, we propose an automatic approach that matches the data distribution of PET images through unsupervised learning. To that end, we propose Smoothing-CycleGAN, a modified CycleGAN that uses a 3D smoothing kernel to learn the optimal point spread function (PSF) for bringing PET images to a common spatial resolution. We validate our approach on two datasets, and we analyze the SUVR agreement before and after PET image harmonization. Our results show that the PSF of PET images with different spatial resolutions can be estimated automatically by Smoothing-CycleGAN, resulting in better SUVR agreement after image translation.
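The abstract does not specify how the learnable 3D smoothing kernel is parameterized. The sketch below shows one plausible form under that caveat: a separable 3D Gaussian layer in PyTorch whose width (sigma) is a trainable parameter, so the GAN losses can drive it toward the PSF that maps a sharper PET volume to a lower common resolution. The class name, kernel size, and initialization are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableGaussianSmoothing3D(nn.Module):
    """3D Gaussian smoothing with a learnable width (sigma).
    Illustrative stand-in for a learnable smoothing kernel of the kind
    Smoothing-CycleGAN uses to estimate a PSF; assumed, not the paper's code."""

    def __init__(self, kernel_size=11, init_sigma=2.0):
        super().__init__()
        assert kernel_size % 2 == 1, "use an odd kernel size"
        self.kernel_size = kernel_size
        # Optimize log(sigma) so sigma stays positive during training.
        self.log_sigma = nn.Parameter(torch.log(torch.tensor(init_sigma)))

    def forward(self, x):  # x: (B, 1, D, H, W) PET volume
        sigma = self.log_sigma.exp()
        half = self.kernel_size // 2
        coords = torch.arange(-half, half + 1, dtype=x.dtype, device=x.device)
        g = torch.exp(-coords**2 / (2 * sigma**2))
        g = g / g.sum()
        # Separable 3D Gaussian built as the outer product of 1D kernels,
        # then applied as a single normalized conv3d filter.
        kernel = g[:, None, None] * g[None, :, None] * g[None, None, :]
        kernel = kernel.view(1, 1, *kernel.shape)
        return F.conv3d(x, kernel, padding=half)

# Usage on a dummy volume; in training, sigma is updated by the GAN losses.
vol = torch.randn(1, 1, 64, 64, 64)
smoother = LearnableGaussianSmoothing3D()
out = smoother(vol)
print(out.shape, float(smoother.log_sigma.exp()))
```

Once trained, the learned sigma can be read off directly, which is what makes the estimated PSF interpretable and reusable for harmonizing further scans.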
Optical navigation systems are widely used in surgical interventions. However, despite their great utility and accuracy, they are expensive and require time and effort to set up for surgery. Moreover, traditional navigation systems display instrument positions on 2D screens, causing surgeons to look away from the operative field. Head-mounted displays such as the Microsoft HoloLens may provide an attractive alternative for surgical navigation that also permits augmented reality visualization. The HoloLens is equipped with multiple sensors for tracking and scene understanding. Both mono- and stereo-vision tracking of markers with the HoloLens have been reported, but no extensive evaluation has compared the accuracy of the two approaches. The objective of our work is to investigate the tracking performance of various camera setups on the HoloLens, and to study the effect of marker size, marker distance from the camera, and camera resolution on marker-locating accuracy. We also investigate the speed and stability of marker pose estimation for each camera setup. The tracking approaches are evaluated using ArUco markers. Our results show that mono-vision locates markers more accurately than stereo-vision when a high resolution is used, at the expense of higher frame processing time. Alternatively, we propose a combined low-resolution mono-stereo tracking setup that outperforms each tracking approach individually and is comparable to high-resolution mono tracking, with a mean translational error of 1.8±0.6 mm for a 10 cm marker at a 50 cm distance. We further discuss our findings and their implications for navigation in surgical interventions.
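As a rough illustration of the kind of ArUco-based mono-vision pose estimation evaluated here, the following Python sketch uses OpenCV's ArUco module (the cv2.aruco.ArucoDetector API of OpenCV 4.7+) with made-up camera intrinsics and a placeholder image file; it is not the HoloLens pipeline itself.

```python
import cv2
import numpy as np

# Illustrative intrinsics; real values come from camera calibration
# (the HoloLens exposes its own calibrated sensor streams).
camera_matrix = np.array([[1460.0, 0.0, 960.0],
                          [0.0, 1460.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
marker_length = 0.10  # 10 cm marker, matching the reported experiment

# Marker corner coordinates in the marker's own frame (meters),
# in the top-left, top-right, bottom-right, bottom-left order ArUco uses.
half = marker_length / 2
object_pts = np.array([[-half,  half, 0],
                       [ half,  half, 0],
                       [ half, -half, 0],
                       [-half, -half, 0]], dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")  # placeholder for a live camera frame
corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    for marker_corners in corners:
        # solvePnP recovers the marker pose from its four detected corners;
        # IPPE_SQUARE is the planar-square-specific solver.
        ok, rvec, tvec = cv2.solvePnP(object_pts,
                                      marker_corners.reshape(4, 2),
                                      camera_matrix, dist_coeffs,
                                      flags=cv2.SOLVEPNP_IPPE_SQUARE)
        if ok:
            print("marker at", tvec.ravel(), "m from the camera")
```

A stereo setup would instead triangulate corresponding corners across two calibrated cameras, and a combined mono-stereo scheme of the kind proposed here would fuse the two pose estimates.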