Traditional histochemical staining of autopsy tissue samples often suffers from staining artifacts due to autolysis, which results from the delayed fixation of cadaver tissues. Here, we introduce an autopsy virtual staining technique to digitally convert autofluorescence images of unlabeled autopsy tissue sections into their hematoxylin and eosin (H&E) stained counterparts through a trained neural network. This technique was demonstrated to effectively mitigate autolysis-induced artifacts inherent in histochemical staining, such as weak nuclear contrast and color fading in the cytoplasm and extracellular matrix. As a rapid, reagent-efficient, and high-quality histological staining approach, the presented technique holds great potential for widespread application in the future.
We present a fast virtual-staining framework for defocused autofluorescence images of unlabeled tissue that matches the performance of standard virtual-staining models using in-focus label-free images. A virtual-autofocusing network first digitally refocuses the defocused images; a successive neural network then transforms the refocused images into virtually-stained H&E images. Using coarsely-focused autofluorescence images, acquired with 4-fold fewer focus points and 2-fold lower focusing precision, we achieved virtual-staining performance equivalent to that of standard H&E virtual-staining networks that use finely-focused images, decreasing the total image acquisition time by ~32% and the autofocusing time by ~89% for each whole-slide image.
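Coarse focusing of this kind can be sketched as a focus-metric search over a small z-stack. The snippet below is an illustrative stand-in (variance of the gradient magnitude as the sharpness score, run on synthetic images), not the virtual-autofocusing network described above:

```python
import numpy as np

def focus_metric(img: np.ndarray) -> float:
    """Sharpness score: variance of the image gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.var(np.hypot(gx, gy)))

def coarse_autofocus(z_stack, z_positions):
    """Pick the z position whose image maximizes the focus metric."""
    scores = [focus_metric(img) for img in z_stack]
    return z_positions[int(np.argmax(scores))]

def blur(img, k):
    """Naive repeated box blur, standing in for defocus."""
    out = img.copy()
    for _ in range(k):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

# Synthetic demo: the middle plane of the z-stack is the sharp one.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
stack = [blur(sharp, 3), sharp, blur(sharp, 3)]
best_z = coarse_autofocus(stack, [-2.0, 0.0, 2.0])
print(best_z)  # 0.0
```

With fewer focus points and lower per-point precision, such a coarse search trades focusing time for residual defocus, which the refocusing network then corrects digitally.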
We report label-free, in vivo virtual histology of skin using reflectance confocal microscopy (RCM). We trained a deep neural network to transform in vivo RCM images of unstained skin into virtually stained H&E-like microscopic images with nuclear contrast. This framework successfully generalized to diverse skin conditions, e.g., normal skin, basal cell carcinoma, and melanocytic nevi, as well as distinct skin layers, including the epidermis, dermal-epidermal junction, and superficial dermis layers. This label-free in vivo skin virtual histology framework can be transformative for faster and more accurate diagnosis of malignant skin neoplasms, with the potential to significantly reduce unnecessary skin biopsies.
We present a virtual staining framework that can rapidly stain defocused autofluorescence images of label-free tissue, matching the performance of standard virtual staining models that use in-focus unlabeled images. We trained and blindly tested this deep learning-based framework using human lung tissue. Using coarsely-focused autofluorescence images acquired with 4× fewer focus points and 2× lower focusing precision, we achieved performance equivalent to standard virtual staining that used finely-focused autofluorescence input images. We achieved a ~32% decrease in the total image acquisition time needed for virtual staining of a label-free whole-slide image, alongside an ~89% decrease in the autofocusing time.
We present a deep learning-based framework to virtually transfer images of H&E-stained tissue to other stain types using cascaded deep neural networks. This method, termed C-DNN, was trained in a cascaded manner: label-free autofluorescence images were fed to the first generator as input and transformed into H&E-stained images. These virtually stained H&E images were then transformed into periodic acid-Schiff (PAS) stained images by the second generator. We trained and tested C-DNN on kidney needle-core biopsy tissue, and its output images showed better color accuracy and higher contrast on various histological features compared to other stain transfer models.
Histochemical staining is traditionally performed using chemical labeling, which can be time consuming and expensive, particularly when multiple stains are needed. We present a technique that virtually stains histological tissues using deep learning. Because this technique is performed computationally, multiple stains can be generated from each tissue section, allowing pathologists to extract more information from a single section. These stains can be generated from autofluorescence images of unlabeled tissue sections, or from scans of H&E-stained tissues, which fits into existing pathology workflows. These stains have been validated in blind studies by board-certified pathologists.
We present a virtual immunohistochemical (IHC) staining method based on label-free autofluorescence imaging and deep learning. Using a trained neural network, we transform multi-band autofluorescence images of unstained tissue sections to their bright-field equivalent HER2 images, matching the microscopic images captured after the standard IHC staining of the same tissue sections. Three pathologists’ blind evaluations of HER2 scores based on virtually stained and IHC-stained whole slide images revealed the statistically equivalent diagnostic values of the two methods. This virtual HER2 staining method provides a rapid, accurate, and low-cost alternative to the standard IHC staining methods and allows tissue preservation.
Immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) is routinely performed on breast cancer cases to guide immunotherapies and help predict the prognosis of breast tumors. We present a label-free virtual HER2 staining method enabled by deep learning as an alternative digital staining method. Our blinded, quantitative analysis based on three board-certified breast pathologists revealed that evaluating HER2 scores based on virtually-stained HER2 whole slide images (WSIs) is as accurate as standard IHC-stained WSIs. This virtual HER2 staining can be extended to other IHC biomarkers to significantly improve disease diagnostics and prognostics.
Reflectance confocal microscopy (RCM) can provide in vivo images of the skin with cellular-level resolution; however, RCM images are grayscale, lack nuclear features and have a low correlation with histology. We present a deep learning-based virtual staining method to perform non-invasive virtual histology of the skin based on in vivo, label-free RCM images. This virtual histology framework revealed successful inference for various skin conditions, such as basal cell carcinoma, also covering distinct skin layers, including epidermis and dermal-epidermal junction. This method can pave the way for faster and more accurate diagnosis of malignant skin neoplasms while reducing unnecessary biopsies.
We present a supervised learning approach to train a deep neural network which can transform images of H&E stained tissue sections into special stains (e.g., PAS, Jones silver stain and Masson’s Trichrome). We performed a diagnostic study using tissue sections from 58 subjects covering a variety of non-neoplastic kidney diseases to show that when the pathologists performed their diagnoses using the three virtually-created special stains in addition to H&E, a statistically significant diagnostic improvement was made over the use of H&E only. This virtual staining technique can be used to improve preliminary diagnoses while saving time and reducing costs.
We present a deep learning-enabled holographic polarization microscope that requires only one polarization state to image and quantify birefringent specimens. This framework reconstructs quantitative birefringence retardance and orientation images from the amplitude/phase information obtained using a lensless holographic microscope with a pair of polarizer and analyzer. We tested this technique with various birefringent samples, including monosodium urate and triamcinolone acetonide crystals, to demonstrate that the deep network can accurately reconstruct the retardance and orientation image channels. This method has a simple optical design and offers a large field of view (20–30 mm²), which might broaden access to advanced polarization microscopy techniques in low-resource settings.
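The forward physics that the network effectively inverts can be sketched with Jones calculus: a linear retarder placed between a polarizer and an analyzer produces an intensity that depends on the sample's retardance and fast-axis orientation. The snippet below is a minimal NumPy model of that measurement, not the reconstruction network itself:

```python
import numpy as np

def retarder_jones(delta: float, theta: float) -> np.ndarray:
    """Jones matrix of a linear retarder with retardance delta (rad)
    and fast-axis orientation theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    ret = np.array([[np.exp(-1j * delta / 2), 0],
                    [0, np.exp(1j * delta / 2)]])
    return rot @ ret @ rot.T

def measured_intensity(delta, theta, analyzer_angle):
    """Intensity after horizontal polarizer -> sample -> linear analyzer."""
    e_in = np.array([1.0, 0.0])  # horizontally polarized input
    analyzer = np.array([np.cos(analyzer_angle), np.sin(analyzer_angle)])
    e_out = retarder_jones(delta, theta) @ e_in
    return float(np.abs(analyzer @ e_out) ** 2)

# Crossed polarizer/analyzer: a half-wave retarder at 45 deg transmits fully.
I = measured_intensity(np.pi, np.pi / 4, np.pi / 2)
print(round(I, 6))  # 1.0
```

Recovering (delta, theta) per pixel from a single polarization measurement is ill-posed, which is why a trained network is used in place of a closed-form inversion.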
We present a deep learning-based device to perform automated screening of sickle cell disease (SCD) using images of blood smears captured by a smartphone-based microscope. We experimentally validated the system using 96 blood smears (including 32 positive samples for SCD), each coming from a unique patient. Tested on these blood smears, our framework achieved 98% accuracy and an area-under-the-curve (AUC) of 0.998. Since this technique is both low-cost and accurate, it has the potential to improve access to cost-effective screening and monitoring of patients in low-resource settings, particularly in areas where existing diagnostic methods are unsuitable.
We virtually generate multiple histological stains using a single deep neural network that takes as input autofluorescence images of the unlabeled tissue alongside a user-defined digital staining matrix. Through this digital staining matrix, the user indicates which stain to apply to each pixel or region of interest, enabling virtual blending of multiple stains according to a desired micro-structure map. We demonstrated this technique by applying combinations of different stains (H&E, Masson's trichrome, and Jones silver stain) to blindly tested, unlabeled tissue sections. This technology avoids the histochemical staining process and enables newly generated stains and stain combinations to be used for inspecting label-free tissue microstructure.
We report a label-free, field-portable, holographic imaging flow cytometer that can automatically detect and count Giardia lamblia cysts in water samples with a throughput of 100 mL/h. Our cytometer measures 19×19×16 cm; a laptop computer connected to it reconstructs the phase and intensity images of the flowing microparticles in the sample at three different wavelengths and classifies them with a trained convolutional neural network, thereby detecting the Giardia cysts in real time. We experimentally demonstrated that our system can detect Giardia contamination in fresh- and seawater samples containing fewer than 10 cysts per 50 mL.
We present a field-portable, high-throughput imaging flow cytometer that performs phenotypic analysis of microalgae using image processing and deep learning. This computational cytometer weighs ~1.6 kg and captures holographic images of water samples containing microalgae, flowing in a microfluidic channel at a rate of 100 mL/h. Automated analysis is performed by extracting the spatial and spectral features of the reconstructed images to automatically identify and count the target algae within the sample, using image processing and convolutional neural networks. Changes in the measured features and the composition of the microalgae can be rapidly analyzed to reveal even minute deviations from the normal state of the population.
KEYWORDS: Holography, Pathology, Color imaging, Imaging systems, Microscopy, Tissues, 3D image reconstruction, Digital holography, RGB color model, Image processing
We present a deep learning-based, high-throughput, accurate colorization framework for holographic imaging systems. Using a conditional generative adversarial network (GAN), this method removes the missing-phase-related spatial artifacts using a single hologram. Compared to the absorbance spectrum estimation method, the current state-of-the-art approach for color holographic reconstruction, this framework achieves similar performance while requiring 4-fold fewer input images and 8-fold less imaging and processing time. The presented method can effectively increase the throughput of color holographic microscopy, opening the possibility of histopathology in resource-limited environments.
We present a super-resolution framework for coherent imaging systems using a generative adversarial network. This framework requires a single low-resolution input image, and in a single feed-forward step it performs resolution enhancement. To validate its efficacy, both a lensfree holographic imaging system with a pixel-limited resolution and a lens-based holographic imaging system with diffraction-limited resolution were used. We demonstrated that for both the pixel-limited and diffraction-limited coherent imaging systems, our method was able to effectively enhance the image resolution of the tested biological samples. This data-driven super resolution framework is broadly applicable to various coherent imaging systems.
We demonstrate a deep learning-based technique which digitally stains label-free tissue sections imaged by a holographic microscope. Our trained deep neural network can use quantitative phase microscopy images to generate images equivalent to the same field of view of the specimen, once stained and imaged by a brightfield microscope. We prove the efficacy of this technique by implementing it with different tissue-stain combinations involving human skin, kidney, and liver tissue, stained with Hematoxylin and Eosin, Jones’ stain, and Masson’s trichrome stain, respectively, generating images with equivalent quality to the brightfield microscopy images of the histochemically stained corresponding specimen.
We report a deep learning-based framework that can screen thin blood smears for sickle cell disease using images captured by a smartphone-based microscope. This framework first uses a deep neural network to enhance and standardize the smartphone images to the quality of a diagnostic-level benchtop microscope, and a second deep neural network performs cell segmentation. We experimentally demonstrated that this technique can achieve 98% accuracy with an area-under-the-curve (AUC) of 0.998 on a blindly tested dataset made up of thin blood smears coming from 96 patients, of which 32 had been diagnosed with sickle cell disease.
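The reported AUC can be computed with a rank-based (Mann-Whitney) estimator. The snippet below runs such an estimator on synthetic scores mimicking the 96-smear/32-positive split; the scores themselves are made up for illustration:

```python
import numpy as np

def roc_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """Rank-based AUC: probability that a random positive outscores a random
    negative (ties count half), i.e., the normalized Mann-Whitney U statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Pairwise comparison; fine for screening-sized datasets.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic demo mirroring the study's setup: 96 smears, 32 positive.
rng = np.random.default_rng(1)
labels = np.array([1] * 32 + [0] * 64)
scores = np.where(labels == 1,
                  rng.normal(0.9, 0.05, 96),   # well-separated positives
                  rng.normal(0.1, 0.05, 96))   # negatives
auc = roc_auc(labels, scores)
print(round(auc, 3))
```

An AUC of 0.998 on the real dataset indicates near-perfect ranking of positive over negative smears by the classifier's output score.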
KEYWORDS: Holograms, Holography, Microscopy, 3D modeling, Signal to noise ratio, 3D image reconstruction, Stereoscopy, Microscopes, Speckle, Time metrology
Holographic microscopy encodes the 3D information of a sample into a single hologram. However, holographic images are in general inferior to bright-field microscopy images in terms of contrast and signal-to-noise ratio, due to twin-image artifacts, speckle and out-of-plane interference. The contrast and noise problem of holography can be mitigated using iterative algorithms, but at the cost of additional measurements and time. Here, we present a deep-learning-based cross-modality imaging method to reconstruct a single hologram into volumetric images of a sample with bright-field contrast and SNR, merging the snapshot 3D imaging capability of holography with the image quality of bright-field microscopy.
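The snapshot 3D capability rests on numerically propagating the holographic field to different depths. A minimal angular spectrum propagator (assuming a scalar, monochromatic field on a square grid; parameter values below are illustrative) might look like:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Propagate a complex field by distance dz via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)              # spatial frequencies
    f2 = fx[:, None] ** 2 + fx[None, :] ** 2
    kz2 = (1.0 / wavelength) ** 2 - f2
    kz = 2 * np.pi * np.sqrt(np.maximum(kz2, 0.0))    # axial wavenumber
    transfer = np.exp(1j * kz * dz) * (kz2 > 0)       # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Refocusing demo: propagating forward then back recovers the original field.
rng = np.random.default_rng(0)
field = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))
wavelength, dx = 0.53e-6, 1.0e-6                      # 530 nm light, 1 um pixels
fwd = angular_spectrum_propagate(field, 50e-6, wavelength, dx)
back = angular_spectrum_propagate(fwd, -50e-6, wavelength, dx)
```

Such a propagated reconstruction still carries the twin-image and speckle artifacts mentioned above; the cross-modality network is what maps it to bright-field contrast and SNR.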
We present a method to generate multiple virtual stains on an image of label-free tissue using a single deep neural network according to a user-defined micro-structure map. The input to this network comes from two sources: (i) autofluorescence microscopy images of the unlabeled tissue, and (ii) a user-defined digital staining matrix. This digital staining matrix indicates which stain is to be virtually generated for each pixel, and can be used to create a micro-structured stain map or to virtually blend stains together. We experimentally validated this approach through blind testing on label-free kidney tissue sections, and successfully generated combinations of H&E, Masson's trichrome stain, and Jones silver stain.
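One plausible way to condition a single generator on a per-pixel stain choice, consistent with the description above, is to one-hot encode the digital staining matrix and concatenate it with the autofluorescence channels. The shapes, channel counts, and stain codes below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

STAINS = {"H&E": 0, "Masson": 1, "Jones": 2}   # illustrative stain codes

def build_staining_matrix(stain_map: np.ndarray, n_stains: int = 3) -> np.ndarray:
    """One-hot encode a per-pixel stain-index map into channels."""
    h, w = stain_map.shape
    onehot = np.zeros((n_stains, h, w), dtype=np.float32)
    for idx in range(n_stains):
        onehot[idx] = (stain_map == idx)
    return onehot

def make_network_input(autofluorescence: np.ndarray, stain_map: np.ndarray) -> np.ndarray:
    """Concatenate autofluorescence channels with the digital staining matrix,
    forming the conditioned input a generator would consume."""
    return np.concatenate(
        [autofluorescence, build_staining_matrix(stain_map)], axis=0)

# Demo: 2-channel autofluorescence image; left half H&E, right half Masson.
af = np.random.rand(2, 64, 64).astype(np.float32)
stain_map = np.zeros((64, 64), dtype=int)
stain_map[:, 32:] = STAINS["Masson"]
x = make_network_input(af, stain_map)
print(x.shape)  # (5, 64, 64): 2 autofluorescence + 3 stain channels
```

Because the stain code is supplied per pixel, the same trained weights can render a micro-structured stain map or smoothly blended stain mixtures at inference time.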
We present a deep learning-based framework to perform single image super-resolution of SEM images. We experimentally demonstrated that this network can enhance the resolution of SEM images by two-fold, allowing for a reduction of the scanning time and electron dosage by four-fold without any significant loss of image quality. Using blindly tested regions of a gold-on-carbon resolution test target, we quantitatively and qualitatively confirmed the image enhancement achieved by the trained network. We believe that this technique has the potential to improve the SEM imaging process, particularly in cases where imaging throughput and minimizing beam damage are of utmost importance.
We report a generative adversarial network (GAN)-based framework to super-resolve both pixel-limited and diffraction-limited images, acquired by coherent microscopy. We experimentally demonstrate a resolution enhancement factor of 2-6× for a pixel-limited imaging system and 2.5× for a diffraction-limited imaging system using lung tissue sections and Papanicolaou (Pap) smear slides. The efficacy of the technique is proven both quantitatively and qualitatively by a direct visual comparison between the network’s output images and the corresponding high-resolution images. Using this data driven technique, the resolution of coherent microscopy can be improved to substantially increase the imaging throughput.
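GAN generators for super-resolution commonly end in a sub-pixel (pixel-shuffle) upsampling layer. The rearrangement itself, independent of any learned weights, can be sketched as follows; treating it as part of this paper's architecture is an assumption:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange (C*r^2, H, W) feature maps into (C, H*r, W*r): the
    sub-pixel upsampling step used at the end of many SR generators."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Demo: 4 feature channels collapse into one channel at 2x resolution.
lowres_features = np.random.rand(4, 32, 32)
highres = pixel_shuffle(lowres_features, 2)
print(highres.shape)  # (1, 64, 64)
```

The convolutional layers before this step do the actual learning; the shuffle merely converts channel depth into spatial resolution without introducing interpolation artifacts.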
KEYWORDS: Digital holography, Holography, Microscopy, 3D image reconstruction, Digital imaging, Holograms, Digital recording, Speckle, 3D image processing, Wave propagation interference
We demonstrate a deep learning-based hologram reconstruction method that achieves bright-field microscopy image contrast in digital holographic microscopy (DHM), which we termed as “bright-field holography”. In bright-field holography, a generative adversarial network was trained to transform a complex-valued DHM reconstruction (obtained without phase-retrieval) into an equivalent image captured by a high-NA bright-field microscope, corresponding to the same sample plane. As a proof-of-concept, we demonstrated snapshot imaging of pollen samples distributed in 3D, digitally matching the contrast and shallow depth-of-field advantages of bright-field microscopy; this enabled us to digitally image a sample volume using bright-field holography without any physical axial scanning.
We report a deep learning-based colorization framework for holographic microscopy and demonstrate its efficacy by imaging histopathology slides (Masson's trichrome-stained lung and H&E-stained prostate tissue). Using a generative adversarial network, this framework is trained to eliminate the missing-phase-related artifacts. To obtain accurate color information, the pathology slides were imaged under multiplexed illumination at three wavelengths, and the deep network learns to demultiplex and project the holographic images from the three color channels into the RGB color space, achieving high color fidelity. Our method dramatically simplifies the data acquisition and shortens the processing time, which is important for, e.g., digital pathology in resource-limited settings.
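The learned demultiplexing can be caricatured as a per-pixel linear map from the three wavelength channels to RGB; the 3×3 matrix below is a made-up stand-in for what the network learns, shown only to fix the shapes and data flow:

```python
import numpy as np

# Illustrative (not the trained network): a fixed 3x3 linear map standing in
# for the learned demultiplexing from three illumination wavelengths to RGB.
DEMIX = np.array([[0.9, 0.1, 0.0],    # red-channel wavelength
                  [0.1, 0.8, 0.1],    # green-channel wavelength
                  [0.0, 0.1, 0.9]])   # blue-channel wavelength

def demultiplex_to_rgb(channels: np.ndarray) -> np.ndarray:
    """Map a (3, H, W) stack of wavelength channels to an (H, W, 3) RGB image."""
    h, w = channels.shape[1:]
    flat = channels.reshape(3, -1)    # (3, H*W)
    rgb = DEMIX.T @ flat              # per-pixel linear combination
    return np.clip(rgb.T.reshape(h, w, 3), 0.0, 1.0)

channels = np.random.rand(3, 16, 16)
rgb = demultiplex_to_rgb(channels)
print(rgb.shape)  # (16, 16, 3)
```

The actual network replaces this fixed matrix with a spatially aware, nonlinear mapping that also suppresses the missing-phase artifacts while demultiplexing.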