Cancer is one of the leading causes of death; therefore, contributing to its rapid diagnosis and treatment is of the utmost importance. Nowadays, tumours are mainly diagnosed and graded histologically using biopsies. Since the images need to be sharp to distinguish biological structures, samples are thinly sliced (3-5 μm) to avoid scattering, and contrast is obtained using strongly absorbing dyes (e.g., Haematoxylin and Eosin (H&E)). RGB (Red-Green-Blue) cameras have been widely employed to acquire those images, while new approaches, such as Hyperspectral (HS) Imaging (HSI), have been emerging to obtain richer spectral information from the samples. However, in order for the HS cameras to capture diffusely transmitted light, the samples should be thicker than those employed in conventional microscopy. This work aims to characterize the influence of tissue thickness on the spectral signatures of breast histology samples sectioned at 2 and 3 μm. Based on the transmittance spectra peaks of H&E, HS images were segmented into three structures: stroma (eosin-stained), nuclei (haematoxylin-stained), and background (non-stained). Results show that, spatially, more cells are imaged in 3 μm samples than in 2 μm samples. Moreover, spectrally, 3 μm samples provide higher spectral contrast than 2 μm samples due to the greater interaction of light with tissue, making them more suitable for microscopic HSI.
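As an illustration of the segmentation step described above, the following is a minimal sketch of assigning each pixel of an H&E-stained hyperspectral cube to nuclei, stroma, or background based on transmittance at characteristic dye peaks. The cube layout, peak wavelengths, and threshold value are illustrative assumptions and not the parameters reported in the paper.

```python
import numpy as np

def segment_he_cube(cube, wavelengths,
                    haema_peak_nm=600.0, eosin_peak_nm=530.0,
                    absorb_thresh=0.15):
    """Sketch only. cube: (H, W, B) transmittance in [0, 1]; wavelengths: (B,) in nm.
    Peak wavelengths and threshold are hypothetical placeholders."""
    # Pick the bands closest to the assumed absorption peaks of each dye
    haema_band = int(np.argmin(np.abs(wavelengths - haema_peak_nm)))
    eosin_band = int(np.argmin(np.abs(wavelengths - eosin_peak_nm)))

    # Low transmittance at a dye's peak implies strong staining at that pixel
    haema_absorb = 1.0 - cube[:, :, haema_band]
    eosin_absorb = 1.0 - cube[:, :, eosin_band]

    labels = np.zeros(cube.shape[:2], dtype=np.uint8)  # 0 = background (non-stained)
    stained = (haema_absorb > absorb_thresh) | (eosin_absorb > absorb_thresh)
    nuclei = stained & (haema_absorb >= eosin_absorb)
    stroma = stained & ~nuclei
    labels[nuclei] = 1   # haematoxylin-dominated pixels (nuclei)
    labels[stroma] = 2   # eosin-dominated pixels (stroma)
    return labels
```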
KEYWORDS: RGB color model, Tumors, Principal component analysis, Tissues, Cancer detection, Object detection, Visualization, Hyperspectral imaging, Data modeling
Current advances in Whole-Slide Imaging (WSI) scanners allow for broader and better visualization of histological slides. However, the analysis of histological samples by visual inspection is subjective and can be challenging. State-of-the-art object detection algorithms can be trained for cell spotting in a WSI. In this work, a new framework for high-resolution, high-detail detection of tumor cells using both RGB and Hyperspectral (HS) imaging is proposed. The framework introduces techniques for training on partially labeled data, since labeling at the cellular level is a time- and energy-consuming task. Furthermore, the framework has been developed to work with RGB images and with HS information reduced to three bands. Current results are promising, showing performance on RGB similar to reference works (F1-score = 66.2%) and strong potential for integrating reduced HS information into current state-of-the-art deep learning models, with current results improving the mean precision by 6.3% over synthetic RGB images.
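The abstract does not state how the HS information is reduced to three bands; since the keywords list principal component analysis, the sketch below assumes a per-pixel PCA projection that produces a pseudo-RGB image suitable as input to a standard RGB-based detector. Function and variable names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

def hs_to_three_bands(cube):
    """Sketch only: reduce an (H, W, B) hyperspectral cube to an (H, W, 3)
    pseudo-RGB image via PCA over pixel spectra (assumed reduction method)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float32)

    # Project every pixel spectrum onto the first 3 principal components
    pcs = PCA(n_components=3).fit_transform(flat)

    # Rescale each component to [0, 255] so detectors expecting 8-bit input accept it
    pcs -= pcs.min(axis=0)
    pcs /= np.maximum(pcs.max(axis=0), 1e-8)
    return (pcs.reshape(h, w, 3) * 255.0).astype(np.uint8)
```

The resulting three-band image can then be passed to an off-the-shelf object detector in place of a conventional RGB input.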