Surgery is a crucial treatment for malignant brain tumors, where gross total resection improves the prognosis. Tissue samples taken during surgery are either subjected to a preliminary intraoperative histological analysis, or sent for a full pathological evaluation, which can take days or weeks. Whereas a complete pathological analysis involves an array of techniques, a preliminary analysis on frozen tissue is performed as quickly as possible (30-45 minutes on average) to provide fast feedback to the surgeon during the surgery. The surgeon uses this information to confirm that the resected tissue is indeed tumor and may, at least in theory, initiate repeated biopsies to help achieve gross total resection. However, due to the total turn-around time of the tissue inspection, repeated analyses may not be feasible during a single surgery. In this context, intraoperative image-guided techniques can improve the clinical workflow for tumor resection and improve outcome by aiding in the identification and removal of the malignant lesion. Hyperspectral imaging (HSI) is an optical imaging technique with the potential to extract combined spectral-spatial information. We develop a brain-tissue classifier that exploits HSI, using 13 in-vivo hyperspectral images from 9 patients. The framework consists of a hybrid 3D-2D CNN-based approach and a band-selection step to enhance the extraction of both spectral and spatial information from the hyperspectral images. An overall accuracy of 77% is achieved when classifying tumor, normal, and hyper-vascularized tissue, which clearly outperforms the state-of-the-art approaches (SVM, 2D-CNN). These results open an attractive future perspective for intraoperative brain-tumor classification using HSI.
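As a rough illustration of such a hybrid 3D-2D architecture, the sketch below (assuming PyTorch; the patch size, number of selected bands, and layer widths are illustrative assumptions, not the paper's values) applies 3D convolutions over spectral-spatial patches and then folds the spectral axis into 2D channels before a 3-class head:

```python
# Minimal sketch of a hybrid 3D-2D CNN for spectral-spatial patch
# classification. Layer sizes, patch shape and the number of selected
# bands are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class Hybrid3D2DCNN(nn.Module):
    def __init__(self, n_bands=25, patch=9, n_classes=3):
        super().__init__()
        # 3D convolutions mix neighboring bands and pixels jointly.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(),
        )
        reduced_bands = n_bands - 6 - 4  # kernel depths shrink the band axis
        # Collapse (channels, bands) into 2D channels for spatial 2D convs.
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * reduced_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Linear(64 * patch * patch, n_classes)

    def forward(self, x):                # x: (B, 1, bands, H, W)
        x = self.conv3d(x)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)    # fold spectral axis into channels
        x = self.conv2d(x)
        return self.head(x.flatten(1))

model = Hybrid3D2DCNN()
logits = model(torch.randn(2, 1, 25, 9, 9))  # tumor / normal / hyper-vascularized
```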
Exposure to pain or discomfort during hospitalization of preterm infants has an adverse effect on brain development. Contactless monitoring is considered a promising approach for continuously detecting infant pain and discomfort. In this study, our main objective is to develop an automated discomfort detection system based on video monitoring, allowing caregivers to provide timely and appropriate treatment. The system first employs optical flow to estimate infant body-motion trajectories across video frames. Following the motion estimation, log Mel-spectrogram, Mel-Frequency Cepstral Coefficients (MFCCs), and Spectral Subband Centroid Frequency (SSCF) features are computed from the one-dimensional (1D) motion signal. These features represent the 1D motion signal by two-dimensional (2D) time-frequency representations of the distribution of signal energy. Finally, deep Convolutional Neural Networks (CNNs) are applied to the 2D images for binary comfort/discomfort classification. The performance of the model is assessed using leave-one-infant-out cross-validation. Our algorithm was evaluated on a dataset containing 183 video segments recorded from 11 infants during 17 heel-prick events, a pain stimulus associated with a routine care procedure. Experimental results show an area under the receiver operating characteristic curve of 0.985 and an accuracy of 94.2%, which offers a promising possibility to deploy the proposed system in clinical practice.
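A minimal sketch of such a motion-to-spectrogram front end, assuming OpenCV and librosa; the frame rate, FFT size, Mel parameters, and file name are illustrative assumptions, not the paper's settings:

```python
# Illustrative sketch: a 1D body-motion signal from dense optical flow,
# turned into a log Mel-spectrogram and MFCCs for a downstream CNN.
import cv2
import numpy as np
import librosa

def motion_signal(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Mean flow magnitude per frame gives one sample of the 1D signal.
        signal.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    cap.release()
    return np.asarray(signal, dtype=np.float32)

sig = motion_signal("segment.mp4")            # hypothetical file name
mel = librosa.feature.melspectrogram(y=sig, sr=30, n_fft=64,
                                     hop_length=16, n_mels=16, fmax=15)
log_mel = librosa.power_to_db(mel)            # 2D time-frequency image
mfcc = librosa.feature.mfcc(S=log_mel, n_mfcc=13)
```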
Nowadays, 3D ultrasound (US) is rapidly being adopted in medical interventions, such as cardiac catheterization. Efficiently interpreting 3D US images and localizing the catheter during surgery requires an experienced sonographer. Consequently, image-based catheter detection can help the sonographer localize the instrument in the 3D US images in a timely manner. Conventionally, 3D imaging methods are based on the Cartesian domain, which is limited by bandwidth and information loss when converted from the original acquisition space, the Frustum domain. Exploring catheter segmentation in Frustum space helps reduce the computational cost and improve efficiency. In this paper, we present a catheter segmentation method in 3D Frustum images via a deep convolutional neural network (DCNN). To better describe the 3D information and reduce the complexity of the DCNN, cross-planes with spatial gaps are extracted for each voxel. Then, the cross-planes of the voxel are processed by the DCNN to classify whether it is a catheter voxel or not. To accelerate prediction on the whole US Frustum volume, a filter-based pre-selection is applied to reduce the computational cost of the DCNN. In experiments on an ex-vivo dataset, the proposed method segments the catheter in Frustum images with a 0.67 Dice score within 3 seconds, which indicates the possibility of real-time application.
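The cross-plane extraction could look roughly like the following NumPy-only sketch; the plane size and gap are assumed values, and border voxels are simply clipped:

```python
# Minimal sketch of cross-plane extraction: three orthogonal slices
# around a voxel, sampled with a spatial gap (stride) so a small plane
# still covers a larger spatial context.
import numpy as np

def cross_planes(volume, center, size=15, gap=2):
    """Return three (size x size) orthogonal planes centered on a voxel."""
    z, y, x = center
    half = size // 2 * gap
    zi = np.arange(z - half, z + half + 1, gap)
    yi = np.arange(y - half, y + half + 1, gap)
    xi = np.arange(x - half, x + half + 1, gap)
    # Clip indices so sampling stays inside the volume near the borders.
    zi, yi, xi = (np.clip(i, 0, s - 1) for i, s in zip((zi, yi, xi), volume.shape))
    xy = volume[z, yi][:, xi]        # axial plane
    xz = volume[zi][:, y, :][:, xi]  # coronal-like plane
    yz = volume[zi][:, yi, x]        # sagittal-like plane
    return np.stack([xy, xz, yz])    # (3, size, size), input to the DCNN

planes = cross_planes(np.random.rand(64, 64, 64), center=(32, 32, 32))
```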
In neurosurgery, technical solutions for visualizing the border between healthy brain and tumor tissue are of great value, since they enable the surgeon to achieve gross total resection while minimizing the risk of damage to eloquent areas. By using real-time non-ionizing imaging techniques, such as hyperspectral imaging (HSI), the spectral signature of the tissue can be analyzed for tissue classification, thereby improving tumor-boundary discrimination during surgery. More particularly, since infrared light penetrates deeper into tissue than visible light, an imaging sensor sensitive to the near-infrared wavelength range would also allow the visualization of structures slightly beneath the tissue surface. This enables the visualization of tumor and vessel boundaries prior to resecting the tissue, thereby preventing damage to these structures. In this study, we investigate the use of Diffuse Reflectance Spectroscopy (DRS) and HSI for brain-tissue classification, by extracting spectral features from the near-infrared range. The applied classification method is the linear Support Vector Machine (SVM). The study is conducted on ex-vivo porcine brain tissue, which is analyzed and classified as either white or gray matter. DRS combined with the proposed classification reaches a sensitivity and specificity of 96%, while HSI reaches a sensitivity of 95% and a specificity of 93%. This feasibility study shows the potential of DRS and HSI for automated tissue classification, and serves as a first step towards clinical use for tumor detection deeper inside the tissue.
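A hedged sketch of the classification step, assuming scikit-learn; the wavelength grid and NIR band limits are illustrative assumptions, and the spectra below are placeholders:

```python
# Sketch: spectra restricted to a near-infrared window, standardized,
# and fed to a linear SVM labeling each spectrum as white or gray matter.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

wavelengths = np.linspace(400, 1700, 256)           # nm, assumed grid
nir = (wavelengths >= 900) & (wavelengths <= 1600)  # keep the NIR window

X = np.random.rand(200, 256)[:, nir]  # placeholder spectra (samples, bands)
y = np.random.randint(0, 2, 200)      # 0 = gray matter, 1 = white matter

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X, y)
labels = clf.predict(X)
```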
Head and neck cancer (HNC) includes cancers in the oral/nasal cavity, pharynx, larynx, etc., and is the sixth most common cancer worldwide. The principal treatment is surgical removal, where complete tumor resection is crucial to reduce the recurrence and mortality rates. Intraoperative tumor imaging enables surgeons to objectively visualize the malignant lesion and maximize tumor removal with healthy safe margins. Hyperspectral imaging (HSI) is an emerging imaging modality for cancer detection that can augment surgical tumor inspection, currently limited to subjective visual inspection. In this paper, we investigate HSI for automated cancer detection during image-guided surgery, because it provides quantitative information about light interaction with biological tissues and exploits the potential for malignant-tissue discrimination. The proposed solution forms a novel framework for automated tongue-cancer detection that explicitly exploits HSI, particularly the spectral variations in specific bands describing the cancerous tissue properties. The method follows a machine-learning-based classification, employing a linear support vector machine (SVM), and offers a superior sensitivity and a significant decrease in computation time. The model is evaluated on 7 ex-vivo specimens of squamous cell carcinoma of the tongue with known histology. HSI combined with the proposed classification reaches a sensitivity of 94%, a specificity of 68%, and an area under the curve (AUC) of 92%. This feasibility study paves the way for introducing HSI as a non-invasive imaging aid for cancer detection and for increasing the effectiveness of surgical oncology.
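For reference, the reported metrics can be computed as in this small sketch, assuming binary labels (1 = tumor) and continuous SVM decision scores; all values below are placeholders:

```python
# Sketch: sensitivity, specificity and AUC from predictions and scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0])             # placeholder ground truth
score = np.array([0.9, 0.4, 0.2, 0.1, 0.8, 0.6])  # SVM decision values
y_pred = (score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # tumor samples correctly flagged
specificity = tn / (tn + fp)   # healthy samples correctly spared
auc = roc_auc_score(y_true, score)
```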
The use of pre-operative CT and MR images for navigation during endo-nasal skull-base endoscopic surgery is a well-established procedure in clinical practice. Fusion of CT and MR images onto the endoscopic view offers an additional advantage by directly overlaying surgical-planning information in the surgical view. Fusion of intra-operative images, such as cone-beam computed tomography (CBCT), represents a step forward, since these images can also account for intra-operative anatomical changes. In this work, we present a method for intra-operative CBCT image fusion on the endoscopic view for endo-nasal skull-base surgery, implemented on the Philips surgical navigation system. This is the first study that utilizes an optical tracking system (OTS) embedded in the flat-panel detector of the C-arm for endoscopic-image augmentation. In our method, the OTS, co-registered in the same CBCT coordinate system, is used for tracking the endoscope. The accuracy of CBCT image registration in the endoscopic view is studied using a calibration board. Image fusion is tested in a realistic surgical scenario using a skull phantom and inserts that mimic critical structures at the skull base. Overall performance tested on the skull phantom shows a high accuracy in tracking the endoscope and registering CBCT on the endoscopic view. It can be concluded that the implemented system shows potential for use in endo-nasal skull-base surgery.
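The core fusion geometry can be sketched with a pinhole camera model, assuming the endoscope pose comes from the OTS in the co-registered CBCT frame; the intrinsics and pose below are illustrative placeholders, not the calibrated system values:

```python
# Sketch: a point defined in CBCT coordinates is mapped through the
# tracked endoscope pose and camera intrinsics onto the endoscopic image.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # intrinsics from endoscope calibration
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

T_cam_from_cbct = np.eye(4)           # endoscope pose from the OTS,
T_cam_from_cbct[:3, 3] = [0, 0, -50]  # co-registered with the CBCT frame

def project(p_cbct):
    """Map a 3D CBCT point (mm) to endoscopic pixel coordinates."""
    p = T_cam_from_cbct @ np.append(p_cbct, 1.0)  # into the camera frame
    uvw = K @ p[:3]
    return uvw[:2] / uvw[2]                       # perspective divide

pixel = project(np.array([10.0, -5.0, 120.0]))
```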
Ultrasound (US) has been increasingly used during interventions, such as cardiac catheterization. To accurately identify the catheter inside US images, extra training for physicians and sonographers is needed. Consequently, automated segmentation of the catheter in US images, with an optimized view presented to the physician, can be beneficial to accelerate the efficiency and safety of interventions and improve their outcome. For cardiac catheterization, three-dimensional (3-D) US imaging is potentially attractive because it is radiation-free and provides richer spatial information. However, due to the limited spatial resolution of 3-D cardiac US and complex anatomical structures inside the heart, image-based catheter segmentation is challenging. We propose a cardiac catheter segmentation method in 3-D US data based on image-processing techniques. Our method first applies a voxel-based classification through newly designed multiscale and multidefinition features, which provide a robust catheter-voxel segmentation in 3-D US. Second, a modified catheter model fitting is applied to segment the curved catheter in 3-D US images. The proposed method is validated with extensive experiments using different in-vitro, ex-vivo, and in-vivo datasets. The proposed method segments the catheter with an average tip-point error smaller than the catheter diameter (1.9 mm) in the volumetric images. Based on automated catheter segmentation combined with optimal viewing, physicians do not have to interpret US images and can focus on the procedure itself, improving the quality of cardiac intervention.
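As an illustration of the model-fitting stage, the following sketch fits a smoothing spline through candidate catheter voxels; this generic spline fit stands in for the paper's modified catheter model fitting, and the points below are synthetic:

```python
# Sketch: fit a smooth 3D curve through classified catheter voxels and
# read the tip estimate from one end of the fitted centerline.
import numpy as np
from scipy.interpolate import splprep, splev

# Placeholder candidate voxels (z, y, x), e.g. the classifier output.
t = np.linspace(0, 1, 60)
pts = np.c_[40 * t, 30 * t**2, 20 * np.sin(2 * t)] + np.random.randn(60, 3) * 0.3

# Order points along the dominant axis before fitting.
pts = pts[np.argsort(pts[:, 0])]
tck, _ = splprep(pts.T, s=len(pts))        # smoothing spline in 3D
curve = np.array(splev(np.linspace(0, 1, 200), tck)).T  # (200, 3) centerline

tip = curve[-1]   # catheter-tip estimate at one end of the fitted curve
```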
The usage of three-dimensional ultrasound (3D US) during image-guided interventions, e.g. cardiac catheterization, has increased recently. To accurately and consistently detect and track catheters or guidewires in the US image during the intervention, additional training of the sonographer or physician is needed. As a result, image-based catheter detection can help the sonographer interpret the position and orientation of a catheter in the 3D US volume. However, due to the limited spatial resolution of 3D cardiac US and complex anatomical structures inside the heart, image-based catheter detection is challenging. In this paper, we study 3D image features for image-based catheter detection using supervised learning methods. To better describe the catheter in 3D US, we extend the Frangi vesselness feature into a multi-scale Objectness feature and a Hessian element feature, which extract more discriminative information about catheter voxels in a 3D US volume. In addition, we introduce a multi-scale statistical 3D feature to enrich and enhance the information for voxel-based classification. Extensive experiments on several in-vitro and ex-vivo datasets show that our proposed features improve the precision to at least 69%, compared to traditional multi-scale Frangi features (from 45% to 76% at a high recall rate of 75%). For clinical application, the high accuracy of the voxel-based classification enables more robust catheter detection in complex anatomical structures.
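The raw material behind such Hessian-based features can be sketched as follows: per-scale Gaussian second derivatives assembled into a 3x3 Hessian per voxel, with sorted eigenvalues kept as feature channels (the same quantities that Frangi-style vesselness and Hessian element features are built from); the scales are assumed values:

```python
# Sketch: multi-scale Hessian eigenvalue features for voxel classification.
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigen_features(vol, scales=(1.0, 2.0, 4.0)):
    feats = []
    for s in scales:
        # Second derivatives at scale s via Gaussian derivative filters,
        # scale-normalized by s^2.
        H = np.empty(vol.shape + (3, 3))
        for i in range(3):
            for j in range(i, 3):
                order = [0, 0, 0]
                order[i] += 1
                order[j] += 1
                d = gaussian_filter(vol, sigma=s, order=order) * s**2
                H[..., i, j] = H[..., j, i] = d
        eig = np.linalg.eigvalsh(H)   # sorted eigenvalues per voxel
        feats.append(eig)
    return np.concatenate(feats, axis=-1)   # (Z, Y, X, 3 * n_scales)

features = hessian_eigen_features(np.random.rand(32, 32, 32))
```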