A challenge in arthroscopic hip surgery is visualizing patient anatomy beneath the skin's surface, particularly in femoroacetabular impingement (FAI) surgeries. FAI is characterized by a bony deformity of the hip that causes pain and can lead to osteoarthritis. Because of poor visualization and consequent under-resection of the bony deformity in FAI surgeries, patients often require revision surgery and/or experience ongoing pain. During surgery, the patient's preoperative medical images are displayed on a monitor alongside the arthroscope's view. Rather than requiring the surgeon to mentally fuse these images of the hip anatomy onto the patient, augmented reality (AR) could integrate surgical visualization through a head-mounted display that overlays the patient's virtual anatomy onto the real world. This work presents the results of a preliminary user study that assessed the functionality and accuracy of our AR live-resection tracking model on a Microsoft HoloLens 2 together with a motion capture (MoCap) system. Our primary objective was to assess the initial accuracy of the live-resection tracking model in a simplified simulation with a physical object, compared with our ability to track the resection in a virtual object. Our secondary objective was to obtain user feedback on the current AR system for resection tracking.
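The abstract does not state how tracking accuracy was quantified. A common approach, sketched below purely as an assumption, is to compare corresponding points reported by the AR system against the MoCap ground truth and report the mean ± standard deviation of their Euclidean distances; all names and values are illustrative, not the authors' actual pipeline.

    import numpy as np

    def tracking_error(ar_points, mocap_points):
        """Mean and standard deviation of the Euclidean distances between
        corresponding AR-tracked and MoCap ground-truth points (N x 3, mm)."""
        d = np.linalg.norm(np.asarray(ar_points) - np.asarray(mocap_points), axis=1)
        return d.mean(), d.std()

    # Hypothetical resection points reported by the HoloLens 2 and the
    # corresponding MoCap measurements (illustrative values only).
    rng = np.random.default_rng(0)
    mocap = rng.uniform(0.0, 50.0, size=(5, 3))           # ground truth, mm
    ar = mocap + rng.normal(scale=1.5, size=mocap.shape)  # tracked points
    mean_mm, std_mm = tracking_error(ar, mocap)
    print(f"tracking error: {mean_mm:.2f} ± {std_mm:.2f} mm")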
Purpose: Repetitive transcranial magnetic stimulation (rTMS) is an important treatment option for medication-resistant depression. It uses an electromagnetic coil that must be positioned accurately at a specific location and angle next to the head so that specific brain areas are stimulated. Existing image-guided neuronavigation systems allow accurate targeting but add cost, training, and setup time, preventing their widespread use in the clinic. Mixed-reality neuronavigation can mitigate these issues, and thereby enable broader use of image-based neuronavigation, by providing a much more intuitive and streamlined visualization of the target. A mixed-reality neuronavigation system requires two core functionalities: 1) tracking of the patient's head and 2) visualization of targeting-related information. Here we focus on head tracking and compare three head-tracking methods for a mixed-reality neuronavigation system. Methods: We integrated three head-tracking methods into the mixed-reality neuronavigation framework and measured their accuracy. Specifically, we experimented with (a) marker-based tracking with the camera of a mixed-reality headset (an optical see-through head-mounted display, OST-HMD), (b) marker-based tracking with a world-anchored camera, and (c) markerless RGB-depth (RGB-D) tracking with a world-anchored camera. To measure the accuracy of each approach, we measured the distance between real-world and virtual target points on a mannequin head. Results: The mean tracking errors for the initial head pose and for the head rotated by 10° and 30° were, for the three methods respectively: (a) 3.54±1.10 mm, 3.79±1.78 mm, and 4.08±1.88 mm; (b) 3.97±1.41 mm, 6.01±2.51 mm, and 6.84±3.48 mm; (c) 3.16±2.26 mm, 4.46±2.30 mm, and 5.83±3.70 mm. Conclusion: For the initial head pose, all three methods achieved the required accuracy of <5 mm for TMS treatment. For smaller head rotations of 10°, only the marker-based method (a) and the markerless method (c) delivered sufficient accuracy for TMS treatment. For larger head rotations of 30°, only the marker-based method (a) achieved sufficient accuracy. While the markerless method (c) did not provide sufficient accuracy for TMS at the larger head rotations, it offers significant advantages, such as occlusion handling and stability, and could potentially meet the accuracy requirements with further methodological refinement.
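The abstract does not describe the marker-tracking implementation in detail. As a minimal sketch in the spirit of the marker-based methods (a) and (b), the following assumes a fiducial ArUco marker rigidly attached to the head and a calibrated camera, using OpenCV ≥ 4.7; the marker size, dictionary, and function names are illustrative assumptions, not the paper's actual pipeline.

    import cv2
    import numpy as np

    MARKER_SIZE = 0.05  # marker edge length in metres (assumed)
    # Marker corners in the marker's own frame, in ArUco corner order
    # (top-left, top-right, bottom-right, bottom-left).
    OBJ_PTS = np.array([[-1,  1, 0], [ 1,  1, 0],
                        [ 1, -1, 0], [-1, -1, 0]],
                       dtype=np.float32) * MARKER_SIZE / 2

    DETECTOR = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

    def head_pose(frame, camera_matrix, dist_coeffs):
        """Estimate the marker-to-camera pose from one camera frame.
        Returns (rvec, tvec) or None if no marker is visible."""
        corners, ids, _ = DETECTOR.detectMarkers(frame)
        if ids is None:
            return None
        img_pts = corners[0].reshape(-1, 2)
        ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, img_pts, camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None

Given such a pose per frame, the tracking error reported in the abstract can be obtained by transforming virtual target points into the camera frame and measuring their distance to the corresponding real-world points.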
Purpose: Rapid dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) enables the tracking of rapid contrast accumulation, an important indicator of cancer angiogenesis. Conventional pharmacokinetic models, which focus on evaluating microvascular perfusion, have limited ability to detect these features. In this work, we explore the performance of a novel dispersion pharmacokinetic model in discriminating benign from malignant tumor tissues and compare it with conventional non-dispersion methods. Methods: According to the convective-dispersion equation, changes in microvascular architecture can be described using dispersion parameters. The dispersion maps are estimated by fitting a modified local density random walk (mLDRW) dispersion model to the concentration-time curves (CTCs) on a voxel-by-voxel basis; measurement of an arterial input function is no longer required. We compare the fitting performance of this model with three classic non-dispersion pharmacokinetic models commonly used for tumor characterization (the Tofts model, the extended Tofts model, and the comprehensive two-compartment exchange model, 2CXM). The performance of the dispersion and non-dispersion parameter maps in discriminating benign from malignant tumors is compared using receiver operating characteristic (ROC) curves. The evaluation is performed on 60 tumors acquired from 37 patients. Results: The goodness-of-fit is significantly improved with the mLDRW model. Compared with the non-dispersion parameter maps, the dispersion-related parameter map provides the highest area under the ROC curve (AUC) of 0.96, with a sensitivity of 84.7% and a specificity of 90.5%. Conclusion: This work provides a new window onto the physiology of breast tumor microcirculation through the estimation of intravascular dispersion properties. The dispersion-related parameter demonstrates superior performance in discriminating benign from malignant tumors.
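The abstract does not give the mLDRW functional form. In the dispersion-imaging literature from which the model family derives, a local density random walk curve is often written as C(t) = α · sqrt(κ / (2π(t − t0))) · exp(−κ(t − t0 − μ)² / (2(t − t0))), where κ is the dispersion-related parameter, μ the mean transit time, and t0 the contrast arrival time. A minimal voxel-wise fitting sketch under that assumption follows; the functional form, names, and synthetic data are assumptions for illustration, not the paper's exact model.

    import numpy as np
    from scipy.optimize import curve_fit

    def ldrw(t, alpha, kappa, mu, t0):
        """LDRW-type indicator-dilution curve; kappa is the dispersion-related
        parameter. This form is assumed from the literature, not the paper."""
        dt = np.clip(t - t0, 1e-6, None)  # avoid division by zero before arrival
        return alpha * np.sqrt(kappa / (2.0 * np.pi * dt)) * \
               np.exp(-kappa * (dt - mu) ** 2 / (2.0 * dt))

    def fit_voxel(t, ctc):
        """Fit one voxel's concentration-time curve.
        Returns (alpha, kappa, mu, t0)."""
        p0 = (ctc.max(), 1.0, t[np.argmax(ctc)], 0.0)
        popt, _ = curve_fit(ldrw, t, ctc, p0=p0,
                            bounds=([0.0, 1e-3, 1e-3, 0.0],
                                    [np.inf, np.inf, np.inf, t.max()]))
        return popt

    # Illustrative synthetic curve (not real patient data).
    t = np.linspace(0.1, 300.0, 120)  # seconds
    ctc = ldrw(t, 5.0, 2.0, 60.0, 10.0) \
        + np.random.default_rng(1).normal(0.0, 0.005, t.size)
    print(fit_voxel(t, ctc))

Because the model is fitted directly to each voxel's CTC, no arterial input function is needed, consistent with the abstract's claim; the fitted κ map would then feed the ROC analysis.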