Tissue classification in surgical workflows between healthy and tumoral regions remains challenging both during and after surgery. The current standard practice consists of taking small biopsies directly after tumor resection and sending them to pathologists for an intraoperative margin assessment, which is time-consuming and error-prone due to the necessarily limited size and number of samples. Then, after the surgery is completed, the resected tumor is sent to the pathology lab, where its type and grading are further confirmed. The present workflow is prone to inaccuracies and particularly difficult when the sample is resected in several pieces. Therefore, an intraoperative tissue classification technology is highly sought after to simplify the surgical workflow and improve patient outcomes. Our work aims to use hyperspectral images (HSI) for contact- and tracer-free tissue differentiation. We introduce a deep-learning-based algorithm for tissue-type classification that relies on spectral information and can be applied simultaneously to the whole sample. We illustrate the performance of our method on ex vivo head and neck squamous cell cancer samples. The proposed algorithm differentiates between three main classes: background, tumor, and healthy tissue. Our experiments first assess the generalization of the neural network on data from unseen cases. We then determine the minimal number of training examples needed to cover the variety of tissue spectral appearances seen in the clinical dataset. We also evaluate the influence of the delay between resection and the start of image acquisition on the quality of the recorded HSI and the prediction. Qualitative and quantitative evaluations support the applicability of hyperspectral imaging for tissue classification and demonstrate agreement between surgeon annotations and neural network predictions in most test cases.
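The per-pixel, whole-sample inference pattern described above can be sketched as follows. This is a minimal illustration, not the authors' model: the network architecture, layer sizes, and (random) weights here are hypothetical stand-ins for a trained classifier, and the cube dimensions are invented for the example.

```python
import numpy as np

# Hypothetical dimensions: a 64x64 HSI cube with 104 spectral channels,
# classified per pixel into 3 classes (background, tumor, healthy tissue).
H, W, C, N_CLASSES, HIDDEN = 64, 64, 104, 3, 32

rng = np.random.default_rng(0)
cube = rng.random((H, W, C))  # stand-in for a recorded HSI cube

# Toy one-hidden-layer MLP weights; in practice these would be learned.
W1 = rng.normal(0.0, 0.1, (C, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def classify_pixels(cube):
    """Run the per-pixel spectral classifier over the whole cube at once."""
    x = cube.reshape(-1, C)                     # (H*W, C): one spectrum per row
    h = np.maximum(x @ W1 + b1, 0.0)            # ReLU hidden layer
    logits = h @ W2 + b2
    return logits.argmax(axis=1).reshape(H, W)  # class index per pixel

label_map = classify_pixels(cube)
print(label_map.shape)  # one class label per pixel of the sample
```

Because each pixel carries a full spectrum, classification can be vectorized over the entire image, which is what makes whole-sample prediction possible in a single pass.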
KEYWORDS: Deep learning, Tumors, Surgery, Neural networks, Hyperspectral imaging, RGB color model, Tissues, Cameras, Brain, Real time optical diagnostics
Surgery for gliomas (intrinsic brain tumors), especially when low-grade, is challenging due to the infiltrative nature of the lesion. Currently, no real-time, intraoperative, label-free, wide-field tool is available to assist and guide the surgeon in finding the relevant demarcations for these tumors. While marker-based methods exist for high-grade gliomas, there is no convenient solution for the low-grade case; marker-free optical techniques therefore represent an attractive option. Although RGB imaging is a standard tool in surgical microscopes, it does not contain sufficient information for tissue differentiation. We leverage the richer information from hyperspectral imaging (HSI), acquired with a snapscan camera in the 468–787 nm range coupled to a surgical microscope, to build a deep-learning-based diagnostic tool for cancer resection with potential for intraoperative guidance. However, the main limitation of the HSI snapscan camera is its image acquisition time, which limits widespread deployment in the operating theater. Here, we investigate the effect of HSI channel reduction and pre-selection to scope the design space for the development of cheaper and faster sensors. Neural networks are used to identify the most important spectral channels for tumor tissue differentiation, optimizing the trade-off between the number of channels and precision to enable real-time intrasurgical application. We evaluate the performance of our method on a clinical dataset acquired during surgery on five patients. By demonstrating the possibility of efficiently detecting low-grade glioma, these results can lead to better cancer resection demarcations, potentially improving treatment effectiveness and patient outcome.
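One simple way to rank spectral channels by importance, sketched below, is to score each input channel by the magnitude of its learned first-layer weights and keep the top k. This heuristic is an assumption for illustration only; the abstract does not specify the selection method, and the weights here are random stand-ins for a trained network.

```python
import numpy as np

# Stand-in for a trained network's first-layer weights:
# one row per spectral channel, one column per hidden unit.
rng = np.random.default_rng(1)
n_channels, n_hidden = 104, 32
W1 = rng.normal(size=(n_channels, n_hidden))

# Score each channel by the L2 norm of its outgoing weights; channels the
# network relies on heavily tend to get larger weights.
importance = np.linalg.norm(W1, axis=1)

k = 10  # hypothetical reduced channel budget for a faster/cheaper sensor
top_k = np.argsort(importance)[::-1][:k]  # indices of the k highest-scoring bands
print(sorted(top_k.tolist()))
```

A reduced sensor would then sample only the selected bands, trading a small loss in precision for a large gain in acquisition speed.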
Label-free tissue identification is the new frontier of image-guided surgery. One of the most promising modalities is hyperspectral imaging (HSI). Until now, however, the use of HSI has been limited due to the challenges of integration into the existing clinical workflow. Research to reduce the implementation effort and simplify the clinical approval procedure is ongoing, especially for the acquisition of feasibility datasets to evaluate HSI methods for specific clinical applications. Here, we demonstrate how an HSI system can interface with a clinically approved surgical microscope, making use of the microscope's existing optics. We outline the HSI system adaptations and data pre-processing methods, perform a spectral and functional system-level validation, and describe the integration into the clinical workflow. Data were acquired using an imec snapscan VNIR 150 camera enabling hyperspectral measurement in 150 channels in the 470–900 nm range, assembled on a ZEISS OPMI Pentero 900 surgical microscope. The spectral range of the camera was adapted to match the intrinsic illumination of the microscope, resulting in 104 channels in the range of 470–787 nm. The system's spectral performance was validated using reflectance wavelength calibration standards. We integrated the HSI system into the clinical workflow of brain surgery, specifically for resections of low-grade gliomas (LGG). During the study, but beyond the scope of this paper, the acquired dataset was used to train an AI algorithm that successfully detects LGG in unseen data. Furthermore, dominant spectral channels were identified, enabling the future development of a real-time surgical guidance system.
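A standard pre-processing step for HSI systems validated against reflectance standards is white/dark reference calibration, which converts raw sensor counts to reflectance. The sketch below shows the common formula (raw minus dark, divided by white minus dark); the array shapes and values are hypothetical, and this is offered as a typical step rather than the paper's exact pipeline.

```python
import numpy as np

def to_reflectance(raw, white, dark, eps=1e-8):
    """Convert a raw HSI cube to reflectance using reference images.

    raw, white, dark: arrays of shape (H, W, C). `white` is an image of a
    diffuse reflectance standard under the same illumination; `dark` is the
    sensor signal with the light path blocked (dark current + offset).
    """
    return np.clip((raw - dark) / (white - dark + eps), 0.0, 1.0)

# Toy data with hypothetical shapes (8x8 pixels, 104 spectral channels).
rng = np.random.default_rng(2)
H, W, C = 8, 8, 104
dark = np.full((H, W, C), 0.05)
white = np.full((H, W, C), 0.95)
raw = dark + rng.random((H, W, C)) * (white - dark)

refl = to_reflectance(raw, white, dark)
print(refl.shape)  # reflectance cube in [0, 1], same shape as the input
```

Calibrating to reflectance makes spectra comparable across acquisitions and illumination conditions, which matters when a classifier trained on one session must generalize to another.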