We describe a simple convolutional network for blind unmixing of transient absorption microscopy data, along with a model ensembling strategy. Our network is based on an autoencoder previously developed for blind unmixing of hyperspectral satellite images. Its advantages are (a) that it learns to unmix spectra by unsupervised learning, i.e. by learning to reconstruct the imaging data, without knowledge of the underlying spectra or their abundances, and (b) that the endmember spectra are directly encoded in the output layer's coefficients. Extensive modifications to both the network architecture and the training loss functions were necessary to produce reasonable performance on transient absorption data. We demonstrate results from blind unmixing of transient absorption images of unstained muscle fibers, acquired at 520 nm pump and 620 nm probe, training an ensemble of 500 different networks (i.e. unmixing models), each starting from a different random initialization. Variability among the resulting models was analyzed by principal component analysis of the endmembers recovered by all models, and a model probability density function was derived from the projections. We found consistent models (predicting similar endmembers and abundance maps) near the most likely model and in the surrounding high-probability region, with greater variability in low-probability regions. A permutation-aligned average of the ensemble then produced much better results than an unweighted ensemble average or simple selection of a single model by maximum likelihood or best fit. We anticipate this approach of parametrizing models and ensembling based on relative probability to have applications in other chemical imaging modalities such as FLIM, Raman microscopy, and mass spectrometry imaging.
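To make the ensemble-analysis step concrete, the sketch below illustrates, with stand-in data, one way the described workflow could be realized: PCA over the endmembers recovered by many independently initialized models, a kernel density estimate of relative model probability in the PCA projection, and a permutation-aligned ensemble average via Hungarian matching. The endmember count, delay-axis length, and the use of scikit-learn/SciPy are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (assumed shapes and libraries), not the authors' code.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
n_models, n_endmembers, n_delays = 500, 3, 64   # 500 models per the abstract; 3 and 64 are assumed

# Stand-in for endmembers recovered by each trained model: (models, endmembers, delay points).
endmembers = rng.normal(size=(n_models, n_endmembers, n_delays))

# PCA on the flattened endmember sets, then a KDE giving a relative model probability density.
flat = endmembers.reshape(n_models, -1)
proj = PCA(n_components=2).fit_transform(flat)
log_density = KernelDensity(bandwidth=1.0).fit(proj).score_samples(proj)
most_likely = int(np.argmax(log_density))       # model nearest the density peak

# Permutation-align every model to the most likely one (endmember order is arbitrary
# per model), then average the aligned ensemble.
reference = endmembers[most_likely]
aligned = np.empty_like(endmembers)
for m in range(n_models):
    # Cost = negative correlation between reference and candidate endmembers.
    cost = -np.corrcoef(reference, endmembers[m])[:n_endmembers, n_endmembers:]
    _, perm = linear_sum_assignment(cost)
    aligned[m] = endmembers[m][perm]

ensemble_endmembers = aligned.mean(axis=0)      # permutation-aligned ensemble average
```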
Transient absorption microscopy (TAM) provides imaging contrast from absorptive pigments such as hemeproteins and melanin, based on their femtosecond-to-picosecond relaxation dynamics. TAM operates by exciting the sample with a short pump pulse and then measuring the time-dependent change in optical absorption after excitation with a probe pulse. Here we show that a 520 nm pump and 620 nm probe provide label-free imaging contrast for hemoglobin, myoglobin, and the respiratory-chain hemes of mitochondria, with sensitivity to redox state. We also introduce a simple convolutional neural network for analysis of TAM stacks. Finally, we will discuss future clinical applications to mitochondrial disease.
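For reference, the quantity measured at each pixel and pump-probe delay $\tau$ is commonly expressed as a differential absorbance (a standard pump-probe definition, not taken from this abstract):

$$\Delta A(x, y, \tau) = -\log_{10}\!\left(\frac{I_{\text{probe}}^{\text{pump on}}(x, y, \tau)}{I_{\text{probe}}^{\text{pump off}}(x, y)}\right)$$

so that a TAM stack records $\Delta A$ as a function of position and delay.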
Nonlinear and ultrafast microscopy techniques enable label-free chemical imaging with high sensitivity, specificity, and optical resolution. However, their reliance on specialized high-intensity femtosecond laser sources makes these techniques expensive and introduces a risk of sample damage. Simpler linear imaging methods, such as reflectance confocal microscopy, sense only variations in refractive index and lack the clear contrast provided by nonlinear techniques. But in biological samples, where structure-function relationships are common (e.g. a cell's mitochondrial network dynamically rearranges itself in response to metabolic activity), the kind of chemical information picked up by nonlinear techniques might be inferred from linear reflectance texture. If such a mapping can be learned, multiphoton-like contrast could be synthesized from images acquired with much simpler instrumentation.
Our approach to synthetic nonlinear microscopy employs a convolutional neural network, namely U-Net, which has demonstrated promising performance in biomedical image segmentation. We have augmented a nonlinear laser-scanning microscope with a confocal detection channel in order to acquire a training dataset of co-registered reflectance and nonlinear images. Results will be presented, along with a discussion of how well the trained network can be expected to generalize to new specimens.
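As an illustration of the kind of image-to-image network this approach relies on, the following is a minimal U-Net-style sketch mapping a single-channel reflectance image to a synthesized nonlinear-contrast image. The depth, channel widths, L1 loss, and PyTorch framework are assumptions for illustration, not the authors' exact model.

```python
# Minimal illustrative U-Net-style sketch (assumed architecture), not the authors' model.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# One training step on a co-registered (reflectance, nonlinear) patch pair (stand-in data).
model, loss_fn = TinyUNet(), nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
reflectance = torch.rand(4, 1, 128, 128)   # stand-in reflectance patches
nonlinear = torch.rand(4, 1, 128, 128)     # stand-in co-registered nonlinear patches
loss = loss_fn(model(reflectance), nonlinear)
loss.backward()
optimizer.step()
```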
Multiphoton microscopy techniques provide molecule-specific contrast and can produce in vivo histopathology with clearly recognizable features such as cellular and nuclear morphology, collagen, etc. Despite this advantage, high cost, the risk of damage from high-intensity pulses, and lack of FDA approval prevent widespread adoption of multiphoton microscopy in conventional clinical settings. Reflectance confocal microscopy, on the other hand, is much more affordable and practical for clinical use: it is FDA approved, can perform in vivo, non-invasive imaging with less risk of DNA damage, and has even been granted insurance reimbursement codes. However, reflectance confocal images bear little resemblance to traditional histopathology because of their graininess and lack of molecule-specific contrast, which makes them more challenging to interpret for diagnosis. We propose bringing multiphoton-like contrast to confocal instruments with a neural network trained on a set of co-registered reflectance confocal and multiphoton images. We assume that the local reflectance textures of cytoplasm, nuclei, and melanin are distinct within a cell. Once trained, the neural network should be able to distinguish these structures and produce clear, histology-like images from grainy confocal reflectance data. Our preliminary training results show successful estimation of multiphoton images from reflectance confocal images by training a three-layer neural network on a set of 1000 32x32 image patches.
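A minimal sketch consistent with the preliminary experiment described above is given below: a three-layer convolutional network trained on 32x32 co-registered patch pairs. The layer widths, kernel sizes, MSE loss, and PyTorch framework are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative three-layer patch-to-patch network (assumed details), not the authors' model.
import torch
import torch.nn as nn

net = nn.Sequential(                      # 3 convolutional layers, 32x32 in -> 32x32 out
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for the ~1000 co-registered 32x32 patch pairs (reflectance -> multiphoton).
reflectance_patches = torch.rand(1000, 1, 32, 32)
multiphoton_patches = torch.rand(1000, 1, 32, 32)

for epoch in range(5):                    # short illustrative full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(net(reflectance_patches), multiphoton_patches)
    loss.backward()
    optimizer.step()
```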