We show that a custom ResNet-inspired CNN architecture trained on simulated biomolecule trajectories surpasses standard algorithms at tracking biomolecules and determining their molecular weight and hydrodynamic radius in the low-kDa regime in optical microscopy. We show that high accuracy and precision are retained even below the 10-kDa regime, constituting approximately an order-of-magnitude improvement in the limit of detection compared to the current state of the art. This enables analysis of hitherto elusive biomolecular species such as cytokines (~5-25 kDa), which are important for cancer research, and the protein hormone insulin (~5.6 kDa), potentially opening up entirely new avenues of biological research.
We present LodeSTAR, an unsupervised, single-shot object detector for microscopy. LodeSTAR exploits the symmetries of the problem statement to train neural networks on extremely small datasets and without ground truth. We demonstrate that LodeSTAR is comparable to state-of-the-art supervised deep learning methods, despite being trained on orders of magnitude less data and without annotations. Moreover, we demonstrate that LodeSTAR achieves near theoretically optimal sub-pixel positioning of objects of various shapes. Finally, we show that LodeSTAR can exploit additional symmetries to measure further particle properties, such as the axial position and the polarizability of particles.
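The core training idea can be illustrated with a minimal sketch: a network predicts an object position from an image crop, and the loss requires that the prediction for a translated copy of the crop shifts by exactly the known translation. The toy network, the circular shift, and the single random crop below are illustrative assumptions, not the published LodeSTAR implementation, which predicts dense detection maps and exploits further symmetries.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Toy CNN that maps an image crop to a single (x, y) prediction."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),
        )

    def forward(self, x):
        return self.features(x)

def symmetry_consistency_loss(model, crop, shift=(3, 5)):
    """Translate the crop by a known shift and require the predicted
    position to move by the same amount (translation equivariance)."""
    dy, dx = shift
    shifted = torch.roll(crop, shifts=(dy, dx), dims=(-2, -1))
    pred = model(crop)
    pred_shifted = model(shifted)
    target = pred + torch.tensor([dx, dy], dtype=pred.dtype)
    return ((pred_shifted - target) ** 2).mean()

# A single unlabeled crop is enough to drive training in this toy setup.
model = TinyDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
crop = torch.rand(1, 1, 32, 32)  # stand-in for an experimental image crop
for _ in range(100):
    optimizer.zero_grad()
    loss = symmetry_consistency_loss(model, crop)
    loss.backward()
    optimizer.step()
```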
We demonstrate a new technique that combines holographic microscopy and deep learning to track microplankton through multiple generations and measure their 3D positions and dry mass. The method is minimally invasive and non-destructive to the plankton cells, allowing us to study their trophic interactions, feeding events, and biomass increase throughout the cell cycle. We evaluate the method on various plankton species belonging to different trophic levels, and we observe the dry mass transfer during feeding interactions as well as diatom growth dynamics. Our approach provides a valuable tool for understanding microplankton behaviour and interactions in the oceanic food web.
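Dry mass is conventionally obtained from the quantitative phase image through the specific refractive-index increment of biological material, m = λ/(2πα) ∬ Δφ dA with α ≈ 0.18 mL/g; the snippet below is a generic sketch of that standard relation under assumed wavelength, pixel size, and α values, not the exact pipeline used in this work.

```python
import numpy as np

def dry_mass_from_phase(phase, pixel_size_um, wavelength_um=0.532, alpha_ml_per_g=0.18):
    """Estimate cell dry mass (picograms) from a phase image (radians).

    Uses the standard relation m = (lambda / (2*pi*alpha)) * integral(phase dA),
    where alpha is the specific refractive-index increment of dry biomatter.
    """
    # Optical path difference integrated over the cell area (um^3)
    opd_integral = phase.sum() * (wavelength_um / (2 * np.pi)) * pixel_size_um**2
    # alpha in um^3/pg: 0.18 mL/g is numerically equal to 0.18 um^3/pg
    alpha_um3_per_pg = alpha_ml_per_g
    return opd_integral / alpha_um3_per_pg

# Example: a synthetic Gaussian phase bump standing in for a segmented cell
yy, xx = np.mgrid[-32:32, -32:32]
phase = 1.5 * np.exp(-(xx**2 + yy**2) / (2 * 10**2))  # radians
print(f"Estimated dry mass: {dry_mass_from_phase(phase, pixel_size_um=0.2):.2f} pg")
```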
This work introduces MAGIK, a geometric deep learning framework for characterizing dynamic properties from time-lapse microscopy. MAGIK exploits the capability of geometric deep learning to capture the full spatiotemporal complexity of biological experiments using graph attention networks. By processing object features with geometric priors, the neural network can perform multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties of the biological system. We demonstrate the flexibility and reliability of MAGIK by applying it to real and simulated data corresponding to a broad range of biological experiments.
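As context for how detections become a graph, the sketch below builds candidate edges between objects that are close in space and time; the distance and frame-gap thresholds are illustrative assumptions, and MAGIK itself applies graph attention layers on top of such a representation rather than this exact construction.

```python
import numpy as np

def build_spatiotemporal_graph(detections, max_distance=15.0, max_frame_gap=2):
    """Build candidate edges between detections that are close in space and time.

    detections: array of shape (N, 3) with columns (frame, x, y).
    Returns an (E, 2) array of node-index pairs (earlier -> later detection).
    """
    frames, xy = detections[:, 0], detections[:, 1:]
    edges = []
    for i in range(len(detections)):
        for j in range(len(detections)):
            dt = frames[j] - frames[i]
            if 0 < dt <= max_frame_gap:
                if np.linalg.norm(xy[j] - xy[i]) <= max_distance:
                    edges.append((i, j))
    return np.array(edges)

# Example: three detections of one particle plus one of another
detections = np.array([
    [0, 10.0, 10.0],
    [1, 12.0, 11.0],
    [2, 13.5, 12.5],
    [0, 80.0, 80.0],
])
print(build_spatiotemporal_graph(detections))
```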
We present LodeSTAR, a label-free, single-shot particle tracker. We design a method for exploiting the symmetries of the problem statement to train neural networks using extremely small datasets and without ground truth. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy and that it reliably tracks packed cells in experimental data. Finally, we show that LodeSTAR can exploit additional symmetries to extend the measurable particle properties to the axial position of objects and particle polarizability.
Label-free characterization of biological matter across scales was recorded at SPIE Optics + Photonics, held in San Diego, California, United States, in 2022.
We present a technique to track microplankton through generations and continuously measure their three-dimensional position and dry mass. The technique combines holographic microscopy with deep learning and is minimally invasive and non-destructive for plankton cells, allowing quantitative assessment of trophic interactions such as feeding events, as well as of biomass increase throughout the cell cycle. We evaluate the performance of the method by applying it to various plankton species belonging to different trophic levels. Finally, we demonstrate the dry mass transfer from cell to cell in prey-predator interactions, and we show the growth dynamics from division to division in diatoms.
We show that a custom ResNet-inspired CNN architecture trained on simulated biomolecule trajectories surpasses standard algorithms at tracking biomolecules and determining their molecular weight and hydrodynamic radius in the low-kDa regime in NSM optical microscopy. We show that high accuracy and precision are retained even below the 10-kDa regime, constituting approximately an order-of-magnitude improvement in the limit of detection compared to the current state of the art. This enables analysis of hitherto elusive biomolecular species such as cytokines (~5-25 kDa), which are important for cancer research, and the protein hormone insulin (~5.6 kDa), potentially opening up entirely new avenues of biological research.
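To make the architecture concrete, the sketch below shows a small ResNet-inspired 1D CNN that maps a trajectory signal to two regression targets (molecular weight and hydrodynamic radius). The input channels, block count, and layer widths are illustrative assumptions and not the published network.

```python
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    """Residual block operating on a 1D trajectory signal."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class TrajectoryRegressor(nn.Module):
    """ResNet-inspired CNN mapping a trajectory to (molecular weight, radius)."""
    def __init__(self, in_channels=2, width=32, n_blocks=4):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, width, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(*[ResBlock1D(width) for _ in range(n_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(width, 2))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# Example: a batch of simulated trajectories (position and optical contrast vs. time)
trajectories = torch.randn(8, 2, 512)
model = TrajectoryRegressor()
print(model(trajectories).shape)  # (8, 2): predicted molecular weight and radius
```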
In recent years, deep learning has been used widely to solve a variety of digital microscopy problems. We present ZEUS, a method to correct out-of-focus aberrations and denoise light-sheet microscopy images. First, a convolutional neural network estimates the aberrations in terms of Zernike coefficients. These values are then used to train a U-Net that outputs corrected images from noisy, aberrated ones. With this approach, we can reach scanning frequencies and image qualities equivalent to those of the most advanced LSM systems without the need for costly equipment and complex optical setups.
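Training such a two-stage pipeline requires pairs of aberrated and aberration-free images with known Zernike coefficients; the sketch below generates a defocus-aberrated (Zernike Z4) point spread function and applies it to an image. The pupil size, wavelength scaling, and single-term aberration are illustrative assumptions, not the published ZEUS code.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift

def defocus_psf(size=64, na_radius=0.4, z4_coeff=1.0):
    """Incoherent PSF of a circular pupil with a defocus (Zernike Z4) aberration."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho = np.sqrt(x**2 + y**2) / na_radius
    pupil = (rho <= 1).astype(float)
    z4 = np.sqrt(3) * (2 * rho**2 - 1)              # defocus Zernike polynomial
    field = pupil * np.exp(1j * 2 * np.pi * z4_coeff * z4)
    psf = np.abs(fftshift(fft2(ifftshift(field))))**2
    return psf / psf.sum()

def aberrate(image, psf):
    """Blur an image with the aberrated PSF (circular convolution via FFT)."""
    return np.real(ifft2(fft2(image) * fft2(ifftshift(psf))))

# Example: create one (aberrated, clean) training pair from a random test image
image = np.random.rand(64, 64)
blurred = aberrate(image, defocus_psf(z4_coeff=0.8))
```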
DeepTrack is an all-in-one deep learning framework for digital microscopy, attempting to bridge the gap between state-of-the-art deep learning solutions and end users. It provides tools for designing samples, simulating optical systems, training deep learning networks, and analyzing experimental data. We show the versatility of deep learning by solving a wide range of common problems in microscopy. We hope that DeepTrack will serve as a platform for researchers to launch their solutions for the benefit of the entire field.
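As an illustration of the intended workflow, a DeepTrack pipeline typically composes a scatterer with a simulated optical system and resolves synthetic images for training. The sketch below follows the style of the published DeepTrack 2.0 examples; the exact class names and arguments are assumptions that should be checked against the installed version.

```python
import numpy as np
import deeptrack as dt  # assumes the DeepTrack 2 package is installed

# A point scatterer with a randomized position (re-sampled on every update)
particle = dt.PointParticle(
    intensity=100,
    position=lambda: np.random.uniform(10, 54, 2),
)

# A simulated fluorescence microscope imaging a 64x64 region
optics = dt.Fluorescence(
    NA=0.8,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 64, 64),
)

# Compose the pipeline and resolve one synthetic training image
imaged_particle = optics(particle)
image = imaged_particle.update().resolve()
print(np.asarray(image).shape)
```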
Quantitative analysis of cell structures is essential for pharmaceutical drug screening and medical diagnostics. This work introduces a deep-learning-powered approach to extract quantitative biological information from brightfield microscopy images. Specifically, we train a conditional generative adversarial neural network (cGAN) to virtually stain lipid droplets, cytoplasm, and nuclei from brightfield images of human stem-cell-derived fat cells (adipocytes). Subsequently, we demonstrate that these virtually stained images can be successfully employed to extract quantitative, biologically relevant measures in a downstream cell-profiling analysis. To make this method readily available for future applications, we provide a Python software package that is freely available online.
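A pix2pix-style conditional GAN of the kind described here alternates discriminator and generator updates, with the generator penalized by both an adversarial term and an L1 term against the real stain. The tiny convolutional stand-ins, channel counts, and L1 weight below are illustrative assumptions replacing the full U-Net generator and patch discriminator.

```python
import torch
import torch.nn as nn

# Minimal stand-ins for the generator (brightfield -> stained) and the
# conditional discriminator (judges brightfield/stained pairs).
generator = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),  # 3 channels: lipids, cytoplasm, nuclei
)
discriminator = nn.Sequential(
    nn.Conv2d(1 + 3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),  # patch-wise real/fake scores
)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(brightfield, stained, l1_weight=100.0):
    """One conditional-GAN step: D sees (input, target) pairs, G is trained
    with an adversarial term plus an L1 term against the real stain."""
    fake = generator(brightfield)

    # Discriminator update
    opt_d.zero_grad()
    real_score = discriminator(torch.cat([brightfield, stained], dim=1))
    fake_score = discriminator(torch.cat([brightfield, fake.detach()], dim=1))
    d_loss = adv_loss(real_score, torch.ones_like(real_score)) + \
             adv_loss(fake_score, torch.zeros_like(fake_score))
    d_loss.backward()
    opt_d.step()

    # Generator update
    opt_g.zero_grad()
    fake_score = discriminator(torch.cat([brightfield, fake], dim=1))
    g_loss = adv_loss(fake_score, torch.ones_like(fake_score)) + \
             l1_weight * l1_loss(fake, stained)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example with random tensors standing in for a brightfield/fluorescence pair
print(training_step(torch.rand(2, 1, 64, 64), torch.rand(2, 3, 64, 64)))
```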
Phytoplankton are responsible for approximately 50% of the biological fixation of carbon dioxide and oxygen production on Earth. The majority of phytoplankton production is consumed by single-celled microscopic grazers, the microzooplankton. In this experiment, we reproduce a small-scale analogue of the plankton world to understand their feeding behavior. We use a lens-less holographic approach driven by the deep-learning framework DeepTrack 2.0, with a combination of U-Net and CNN architectures to determine the radius, refractive index, height, and dry mass of the plankton. We further compare the results with standard approaches.
Characterization of nanoparticles in their native environment plays a central role in a wide range of fields, from medical diagnostics and nanoparticle-enhanced drug delivery to nanosafety and environmental nanopollution assessment.
I will present a label-free method to quantify the size and refractive index of individual nanoparticles using two orders of magnitude shorter trajectories than required by standard methods, and without prior knowledge about the physicochemical properties of the medium. This is achieved through a weighted-average convolutional neural network that analyzes holographic scattering images of single particles. I will demonstrate how deep-learning-enhanced holography opens up completely new possibilities to temporally characterize particle interactions and particle properties in complex environments.
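Conceptually, a weighted-average network predicts a per-pixel property estimate together with a per-pixel confidence and reduces them to one value per image; the layer sizes and the two output properties (radius and refractive index) in the sketch below are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class WeightedAverageCNN(nn.Module):
    """CNN that predicts per-pixel property estimates and per-pixel weights,
    and combines them into a single weighted-average prediction per image."""
    def __init__(self, n_properties=2):  # e.g. radius and refractive index
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.values = nn.Conv2d(32, n_properties, 1)   # per-pixel estimates
        self.weights = nn.Conv2d(32, n_properties, 1)  # per-pixel confidences

    def forward(self, x):
        features = self.backbone(x)
        v = self.values(features)
        w = torch.softmax(self.weights(features).flatten(2), dim=-1)
        return (v.flatten(2) * w).sum(dim=-1)  # shape (batch, n_properties)

# Example: a batch of holographic scattering patterns (random stand-ins)
model = WeightedAverageCNN()
print(model(torch.rand(4, 1, 64, 64)).shape)  # (4, 2)
```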
Particles with dimensions smaller than the wavelength of visible light are essential in many fields. As particle size and composition greatly influence particle function, fast and accurate characterisation of these properties is important. Traditional approaches use the Brownian motion of the particles to deduce their size and therefore require observing the particles over many consecutive time steps. In addition, such techniques can only be applied in environments with known viscosity, hindering characterisation in complex environments.
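For reference, the traditional route sketched below (assuming 2D tracking in a medium of known, water-like viscosity) estimates the diffusion coefficient from the mean squared displacement and converts it to a hydrodynamic radius via the Stokes-Einstein relation, which also makes explicit why long trajectories and a known viscosity are required.

```python
import numpy as np

def hydrodynamic_radius_from_trajectory(xy, dt, temperature=293.0, viscosity=1e-3):
    """Estimate particle radius from a 2D Brownian trajectory via the
    mean squared displacement and the Stokes-Einstein relation.

    xy: (N, 2) positions in metres, dt: time step in seconds.
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    steps = np.diff(xy, axis=0)
    msd = (steps**2).sum(axis=1).mean()      # mean squared displacement per step
    D = msd / (4 * dt)                       # 2D diffusion: MSD = 4 D dt
    return k_B * temperature / (6 * np.pi * viscosity * D)

# Example: simulate a 150 nm particle in water and recover its radius
rng = np.random.default_rng(0)
r_true, dt = 150e-9, 0.01
D_true = 1.380649e-23 * 293.0 / (6 * np.pi * 1e-3 * r_true)
trajectory = np.cumsum(rng.normal(0, np.sqrt(2 * D_true * dt), size=(5000, 2)), axis=0)
print(hydrodynamic_radius_from_trajectory(trajectory, dt))
```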
In this work, we demonstrate characterisation of subwavelength particle size and refractive index that surpasses traditional methods while using two orders of magnitude fewer observations of each particle and making no reference to particle motion. This opens up the possibility to characterise and temporally resolve the properties of subwavelength particles in complex environments where the relation between particle dynamics and size is unknown.
Digital holographic microscopy (DHM) has been a successful imaging technique for various applications in biomedical imaging, particle analysis, and optical engineering. Although DHM has been successful in reconstructing 3D volumes containing stationary objects, tracking fast-moving objects has remained challenging. Recent advances in deep learning with convolutional neural networks have proven useful in overcoming experimental difficulties, from tracking single particles to tracking multiple bacterial cells. Here, we propose a compact DHM driven by neural networks, with a minimal number of optical elements, with the ultimate aim of easy usage and transportation.
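Numerical refocusing is the core reconstruction step in such lens-minimal DHM instruments; the angular spectrum propagation sketch below is a generic textbook implementation with placeholder wavelength and pixel size, not the specific reconstruction used in the proposed system.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def angular_spectrum_propagate(field, dz, wavelength=0.532e-6, pixel_size=3.45e-6):
    """Numerically refocus a complex optical field by a distance dz (metres)
    using the angular spectrum method."""
    ny, nx = field.shape
    fx = fftfreq(nx, d=pixel_size)
    fy = fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * dz) * (kz_sq > 0)   # evanescent components dropped
    return ifft2(fft2(field) * transfer)

# Example: refocus a raw hologram (treated as a real-valued field) by 50 micrometres
hologram = np.random.rand(256, 256)
refocused = angular_spectrum_propagate(hologram, dz=50e-6)
```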
DeepTrack is an all-in-one deep learning framework for digital microscopy, attempting to bridge the gap between state-of-the-art deep learning solutions and end users. It provides tools for designing samples, simulating optical systems, training deep learning networks, and analyzing experimental data. Moreover, the framework is packaged with an easy-to-use graphical user interface, designed to solve standard microscopy problems with no programming experience required. By designing the framework specifically with modularity and extensibility in mind, we allow new methods to be easily implemented and combined with previous applications.