Significance: Mueller matrix (MM) microscopy has proven to be a powerful tool for probing microstructural characteristics of biological samples down to the subwavelength scale. However, in clinical practice, doctors usually rely on bright-field microscopy images of stained tissue slides to identify characteristic features of specific diseases and make accurate diagnoses. Cross-modality translation based on polarization imaging helps pathologists analyze sample properties from different modalities more efficiently and consistently.
Aim: In this work, we propose a computational image translation technique based on deep learning to enable bright-field microscopy contrast using snapshot Stokes images of stained pathological tissue slides. Taking Stokes images as input instead of MM images allows the translated bright-field images to be unaffected by variations of the light source and samples.
Approach: We adopted CycleGAN as the translation model to avoid the requirement for co-registered image pairs in training. This method can generate images that are equivalent to bright-field images with different staining styles of the same region.
Results: Pathological slices of liver and breast tissues with hematoxylin and eosin staining and lung tissues with two types of immunohistochemistry staining, i.e., thyroid transcription factor-1 and Ki-67, were used to demonstrate the effectiveness of our method. The output results were evaluated by four image quality assessment methods.
Conclusions: By comparing the cross-modality translation performance with MM images, we found that Stokes images, with the advantages of faster acquisition and independence from light intensity and image registration, can be well translated to bright-field images.
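The intensity independence mentioned above comes from how Stokes images are formed and normalized. As an illustration only (the abstract does not specify the acquisition scheme), the following sketch assembles a Stokes image from a conventional four-measurement analyzer set and normalizes S1..S3 by S0; the function names and the choice of analyzer states are assumptions, not the authors' implementation.

```python
import numpy as np

def stokes_from_intensities(i_h, i_v, i_p45, i_rcp):
    """Assemble a Stokes image from four analyzer measurements.

    i_h, i_v : intensities behind horizontal / vertical linear analyzers
    i_p45    : intensity behind a +45 deg linear analyzer
    i_rcp    : intensity behind a right-circular analyzer
    Returns an array of shape (4, H, W) holding S0..S3.
    """
    s0 = i_h + i_v          # total intensity
    s1 = i_h - i_v          # horizontal vs. vertical linear preference
    s2 = 2.0 * i_p45 - s0   # +45 vs. -45 linear preference
    s3 = 2.0 * i_rcp - s0   # right vs. left circular preference
    return np.stack([s0, s1, s2, s3])

def normalize_stokes(s):
    """Divide S1..S3 by S0 so the result no longer scales with source intensity."""
    s0 = np.clip(s[0], 1e-12, None)  # guard against division by zero in dark pixels
    return np.concatenate([s[:1], s[1:] / s0])
```

After normalization, doubling the illumination leaves S1/S0, S2/S0, and S3/S0 unchanged, which is one way to obtain the light-intensity independence claimed for the Stokes-image input.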
Early diagnosis and fast screening of cervical cancer are key to treatment prognosis and patient survival. Polarimetry techniques, with high sensitivity to microstructures and low resolution requirements, are promising for facilitating fast screening and quantitative diagnosis. In this study, we apply the Mueller matrix microscope and a multichannel convolutional neural network to distinguish human cervical intraepithelial neoplasia (CIN) samples from normal samples. The Mueller matrix polar decomposition and transformation parameters, rotation-invariant parameters, and Mueller matrix symmetry-related parameters of the cervical tissues in the epithelial region and at different stages are calculated and analyzed. For the detection of early cervical lesions, we propose a polarimetry-parameter selection method based on statistical features together with a multichannel convolutional neural network (CNN) for classification. Specifically, we select the input parameters of the CNN models from all commonly used polarimetry parameters according to their information content, which is evaluated by the mean value, standard deviation, and information entropy of all pixels in the 2D parameter images of the training samples. In multichannel CNN classification, each selected parameter is treated as the input of one channel. The proper multichannel CNN models learn deep features from the selected polarimetry parameters of training samples and show good performance for detecting CIN samples under a low-resolution system.
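The selection step above ranks candidate parameter images by per-pixel statistics. A minimal sketch of that idea is shown below, assuming Shannon entropy of a pixel-value histogram as the ranking score; the function names, the histogram bin count, and the use of entropy alone as the final criterion are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def channel_statistics(img, bins=64):
    """Mean, standard deviation, and Shannon entropy (bits) of one 2-D parameter image."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins so log2 is defined
    entropy = -np.sum(p * np.log2(p))
    return float(img.mean()), float(img.std()), float(entropy)

def select_parameters(param_images, k=3):
    """Rank candidate polarimetry parameters by image entropy and keep the top k.

    param_images : dict mapping parameter name -> 2-D image
    Returns the k names whose images carry the most information (highest entropy).
    """
    scores = {name: channel_statistics(im)[2] for name, im in param_images.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

The selected parameter images would then be stacked along the channel axis to form the multichannel CNN input, one parameter per channel.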
We propose a cross-modality method that translates polarimetric images into bright-field images. In lung tissue histological analysis, immunohistochemical (IHC) staining is widely used to identify specific cellular events, especially in precision medicine. In this work, we measured hematoxylin and eosin (HE) stained slices by Mueller matrix (MM) microscopy and then fed the polarimetric data into a well-designed generative adversarial network (GAN). The network can generate images that are equivalent to bright-field microscopy images of IHC-stained slices. This can assist pathologists in the real IHC staining procedure and in pathological diagnosis. Instead of preparing specimens from scratch, we collected already existing specimens, i.e., adjacent HE- and IHC-stained slices from the same tissue volume. We adopted CycleGAN to learn the translation between unaligned images from the two domains, using a U-Net-based generator and a PixelGAN-based discriminator in the model. The efficacy of this method was demonstrated on smooth muscle actin (SMA) staining in lung tissue. The results were evaluated by three image quality assessment methods comparing the generated and real staining images.
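Evaluation of generated-versus-real staining images typically relies on full-reference image quality metrics. The abstract does not name the three methods used, so the sketch below shows two common candidates, peak signal-to-noise ratio and normalized cross-correlation, as an illustration of how such a comparison is computed; the function names are ours.

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a generated image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf                      # images are identical
    return 10.0 * np.log10(max_val ** 2 / mse)

def ncc(ref, test):
    """Normalized cross-correlation: 1.0 means the structures match perfectly."""
    a = ref - ref.mean()
    b = test - test.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)
```

In practice a structural metric such as SSIM (e.g., `skimage.metrics.structural_similarity`) is often reported alongside these, since PSNR alone is insensitive to perceptual differences in stain texture.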