We present a novel approach to high-speed depth-resolved two-photon imaging: a deep-learning-based temporal-focusing two-photon microscope built on the De-scattering with Excitation Patterning (DEEP) method, referred to as DEEP-Line. DEEP-Line combines a line-scanning scheme, widefield detection with a high-speed silicon photomultiplier array, and deep-learning-based image reconstruction. The performance of the system is validated on diverse biological samples. Our imaging method achieves orders-of-magnitude improvement in speed by reducing the number of excitation patterns to several tens and employing MHz-rate parallel detection. Furthermore, the approach enables fluorescence lifetime imaging and enhances axial resolution.
Temporal focusing multiphoton excitation microscopy (TFMPEM) can rapidly provide 3D imaging in neuroscience; however, owing to the widefield illumination and the use of a camera detector, strong scattering of emission photons through biotissue degrades image quality and reduces penetration depth. As a result, TFMPEM images suffer from poor spatial resolution and low signal-to-noise ratio (SNR), burying the weak fluorescent signals of small structures such as neurons in the calyx region, especially in deep layers at fast acquisition rates. In this study, we present a prediction learning model with depth information to overcome these limitations. First, a point-scanning multiphoton excitation microscopy (PSMPEM) image serving as the gold standard is precisely registered to the corresponding TFMPEM image via a linear affine transformation and an unsupervised VoxelMorph network. Then, a multi-stage 3D U-Net with a cross-stage feature fusion mechanism and a self-supervised attention module is developed to restore shallow layers of the Drosophila mushroom body under cross-modality training. Furthermore, a convolutional long short-term memory (ConvLSTM)-based network with PhyCell, designed to forecast deeper information from the preceding 3D information, is introduced to predict depth information.
KEYWORDS: Matrices, Computer generated holography, Digital holography, 3D modeling, Computation time, Spatial light modulators, Holograms, Data modeling, Temporal resolution, Neurons
To perform real-time stimulation of neurons and simultaneous observation of the neural connectome, a deep-learning-based computer-generated holography (DeepCGH) system has been developed. This system uses a neural network to generate a hologram, which is then projected in real time onto a high-refresh-rate spatial light modulator (SLM) to produce fast 3D micropatterns. However, DeepCGH has two limitations: the computation time grows with the number of input layers, and it cannot reconstruct arbitrary 3D micropatterns within the same model. To address these issues, we integrated a digital propagation matrix (DPM) into the DeepCGH data preprocessing to generate arbitrary 3D micropatterns within the same model and reduce computation time. Furthermore, by incorporating temporal focusing confinement (TFC), the axial resolution (FWHM) is improved from 30 μm to 6 μm, avoiding excitation of neighboring cells. As a result, the DeepCGH-with-DPM system can promptly generate customized micropatterns within a 150-μm volume with high accuracy. With the DPM, DeepCGH generates arbitrary 3D micropatterns while saving 50% of the computation time. Additionally, the DeepCGH holograms achieve superior optical reconstruction and high accuracy in both position and depth when combined with TFC.
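As a concrete illustration of the idea behind a digital propagation matrix, the sketch below precomputes per-depth angular-spectrum transfer functions and stacks them so that one 2D field can be propagated to every target depth in a single step. All function names and parameter values here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def angular_spectrum_kernel(n, pitch, wavelength, z):
    """Free-space transfer function H(fx, fy; z) of the angular spectrum method."""
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # clip evanescent components (arg < 0) for simplicity
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.exp(1j * kz * z)

def build_dpm(n, pitch, wavelength, depths):
    """Stack per-depth transfer functions into one 'digital propagation matrix'."""
    return np.stack([angular_spectrum_kernel(n, pitch, wavelength, z) for z in depths])

def propagate(field, dpm):
    """Propagate one 2D field to every depth encoded in the DPM at once."""
    F = np.fft.fft2(field)
    return np.fft.ifft2(F[None] * dpm, axes=(-2, -1))
```

Because the per-depth kernels are precomputed once, adding more output planes costs only one extra elementwise multiply per plane, which is the intuition behind the reported computation-time savings.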
Model predictive control (MPC) uses the current measured state to predict future behavior and take control action accordingly. To implement MPC in our adaptive optics system (AOS), a multichannel state-space model is first identified with the driving voltages of a 61-channel deformable mirror (DM) as the input and the 8th-order Zernike polynomial coefficients, measured by a lab-made Shack-Hartmann wavefront sensor (SHWS), as the output. Conventionally, a center-of-gravity algorithm is used to reconstruct the wavefront from the SHWS, but it is computationally expensive. Therefore, a deep learning (DL) approach based on U-Net is adopted to rapidly reconstruct the wavefront; the U-Net significantly reduces the computation time and also achieves higher accuracy. The MPC controller based on the identified system model is then implemented in the AOS. Simulation results demonstrate that MPC with the DL-based SHWS can quickly correct wavefront aberrations. Eventually, the MPC-based AOS will be implemented under the Robot Operating System (ROS) to achieve real-time control.
Customized 3D illumination patterns can be generated with computer-generated holography (CGH), and the axial confinement of the illumination patterns can be improved by introducing the temporal focusing technique. Through these approaches, neuron excitation at single-cell resolution can be achieved. However, owing to the computational cost of iterative CGH algorithms, the holograms must be pre-calculated to generate the illumination patterns for neuron excitation. This shortcoming makes it difficult to stimulate neurons dynamically for observing neural activity. To overcome this issue and enable real-time dynamic neuron stimulation, we develop a neuron stimulation system with single-cell resolution and a real-time CGH algorithm. For single-cell resolution, a diffraction grating is used to generate the temporal focusing effect. Moreover, we design a deep-learning-based CGH algorithm that accounts for the temporal focusing effect and generates holograms in real time with a pre-trained U-Net architecture, producing customized illumination patterns at 3D positions. In our approach, dynamic 3D micro-patterned single-cell neural excitation is achieved by introducing the temporal focusing technique to improve the axial resolution to the few-micron level and by generating holograms with the deep-learning-based CGH, reducing the computation time to tens of milliseconds.
Light field fluorescence microscopy (LFM) can provide three-dimensional (3D) images in one snapshot, but it essentially lights up the entire sample even though only part of the sample is meaningfully captured in the reconstruction. This full-volume illumination introduces extraneous background noise, degrading the contrast and accuracy of the final reconstructed images. Temporal focusing-based multiphoton illumination (TFMI) offers widefield multiphoton excitation with volume-selective excitation. In this paper, we implement TFMI in LFM, illuminating only the volume of interest and thus significantly reducing the background, while also offering greater penetration depth in scattering tissue via multiphoton excitation. In addition, the volume range can be varied by modulating the size of the Fourier-plane aperture of the objective lens. Fluorescent beads of 100 nm are used to examine the lateral and axial resolution after phase-space deconvolution of the light field image; the experimental results show a lateral resolution of around 1.2 μm and an axial resolution of around 1.6 μm close to the focal plane. Furthermore, the mushroom body of the Drosophila brain carrying the genetic fluorescent marker GFP (OK-107) is used to demonstrate the volumetric bioimaging capability.
Significance: Line scanning-based temporal focusing multiphoton microscopy (TFMPM) has superior axial excitation confinement (AEC) compared to conventional widefield TFMPM, but its frame rate is limited by the line-to-line scanning mechanism. The multiline scanning-based TFMPM developed here requires only eight multiline patterns for full-field uniform multiphoton excitation while still maintaining superior AEC.
Aim: An optimized parallel multiline scanning TFMPM is developed, and its performance is verified with theoretical simulation. The system provides sharp AEC equivalent to line scanning-based TFMPM, but with fewer scans required.
Approach: A digital micromirror device is integrated into the TFMPM system to generate the multiline patterns for excitation. Based on the single-line pattern result with sharp AEC, we further model the multiline pattern to find the structure with the highest duty cycle together with the best AEC performance.
Results: The AEC is experimentally improved to 1.7 μm from the 3.5 μm of conventional TFMPM. The adopted multiline pattern is akin to a pulse-width-modulation pattern with a spatial period of four times the diffraction-limited line width. In other words, ideally only four π/2 spatial phase-shift scans are required to form a full two-dimensional image with superior AEC, instead of image-size-dependent line-to-line scanning.
Conclusions: We have demonstrated that the developed parallel multiline scanning-based TFMPM provides a multiline pattern with sharp AEC and the fewest scans required for full-field uniform excitation. In the experimental results, temporal focusing-based multiphoton images of disordered biotissue (mouse skin) show improved axial resolution, attributable to the near-theoretical-limit AEC, and clearly reduced background scattering.
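The four-scan argument can be checked with a toy calculation: for any pattern whose spatial period is four pixels (four times a one-pixel diffraction-limited line width), summing four quarter-period (π/2) shifted exposures yields spatially uniform total excitation. The specific PWM-like pattern below is illustrative:

```python
import numpy as np

# Illustrative PWM-like multiline pattern: spatial period = 4 pixels
# (4x a 1-pixel diffraction-limited line width), 50% duty cycle.
period = 4
pattern = np.tile([1, 1, 0, 0], 16)  # one row across 64 pixels

# Four quarter-period (pi/2) spatial phase shifts of the same pattern.
shifts = [np.roll(pattern, s) for s in range(period)]

# Summing the four shifted exposures covers every pixel equally,
# i.e. full-field uniform excitation from only four scans.
total = np.sum(shifts, axis=0)
```

The uniformity holds for any period-4 pattern, since each pixel accumulates exactly one full period of the pattern over the four shifts; the duty cycle only scales the uniform level.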
KEYWORDS: Head, Control systems, Laser scanners, Cameras, Laser systems engineering, RGB color model, Mirrors, Agriculture, Pulsed laser operation, Laser development
In this study, a smart rapid laser scanning system with 3D small-object detection for disabling caterpillars has been developed. A monocular camera vision system was built that works in tandem with the rapid laser scanning system. Two caterpillar species, Orgyia postica and Porthesia taiwana, were considered, and their original images were used to train YOLO for identification. To successfully detect the caterpillar's head, a color transform from RGB to HSV was applied for Orgyia postica, while the RGB color space was maintained for Porthesia taiwana. The center of the caterpillar's head was then approximated using a k-means clustering algorithm, and the identified coordinates were targeted by an automatically controlled laser beam. A compact 450-nm CW laser with 1.738 W power and a 2.5-mm beam diameter was used, and its effects were studied. The entire setup was controlled by an NVIDIA Jetson TX2 embedded system. It was observed that even a precise, second-long laser exposure on the head incapacitated the caterpillar from further ingestion of food. Therefore, this synergistic use of deep learning and lasers is a promising approach to controlling pest populations, thereby preventing crop damage and improving yield.
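A minimal sketch of the head-localization step: plain Lloyd's k-means on segmented pixel coordinates, with the head center taken as a cluster mean. The synthetic data and initialization below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def kmeans(points, init_centers, iters=20):
    """Plain Lloyd's k-means on 2D pixel coordinates."""
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(iters):
        # assign each pixel to its nearest cluster center
        d = np.linalg.norm(points[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned pixels
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return centers, labels
```

In practice the input points would come from the HSV (or RGB) color segmentation described above, and the cluster mean corresponding to the head region would be handed to the laser scanner as the target coordinate.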
In this study, we implement temporal-focusing multiphoton selective excitation (TFMPSE) in light field microscopy (LFM), illuminating only the volume of interest. This significantly reduces background noise and provides higher contrast and accuracy for light field image reconstruction, while also offering greater penetration depth in scattering tissue via multiphoton excitation. In situ 3D human-skin immunofluorescence images are used to demonstrate the volumetric bioimaging capability. The volume rate of the TFMPSE-LFM can reach around 100 volumes per second.
The light field technique can capture a whole-volume image of the observed sample in a single shot; the native frame rate of the optical system therefore becomes the volumetric imaging rate. For dynamic imaging of whole micron-scale biosamples, a light field microscope with temporal focusing illumination has been developed. In this microscope, the f-number of the microlens array (MLA) is matched to that of the objective so that the subimages from adjacent lenslets do not overlap. A three-dimensional (3D) deconvolution algorithm is used to deblur the out-of-focus parts. Conventional light field microscopy (LFM) illuminates the whole sample volume, including regions of no interest; this whole-volume excitation causes additional damage to the biosample and increases background noise from out-of-range planes. Therefore, temporal focusing is integrated into the light field microscope to select the illumination volume; here, a slit at the back focal plane of the objective is used to control the axial excitation confinement. As a result, the developed light field microscope with temporal focusing multiphoton illumination (TFMPI) can reconstruct 3D images within the selected volume, with lateral resolution approaching the theoretical value. Furthermore, the 3D Brownian motion of two-micron fluorescent beads is observed as a benchmark for dynamic samples. With superior signal-to-noise ratio and less tissue damage, the microscope has the potential to provide volumetric imaging of in vivo samples.
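The f-number matching condition mentioned above can be written down directly: the lenslet f-number f_MLA/pitch should equal the objective's image-side f-number, approximately M/(2·NA). The helper below computes the lenslet focal length that satisfies this; the parameter values are illustrative, not this system's actual optics:

```python
def mla_focal_length(pitch_um, magnification, na):
    """Focal length (in um) an MLA lenslet needs so that its f-number
    (f / pitch) matches the objective's image-side f-number, M / (2*NA).
    With matched f-numbers, adjacent subimages just fill each lenslet's
    footprint without overlapping."""
    image_side_fnumber = magnification / (2.0 * na)
    return pitch_um * image_side_fnumber

# e.g. a 125-um pitch MLA behind a 40x / 0.8-NA objective (illustrative):
f = mla_focal_length(125.0, 40, 0.8)  # -> 3125.0 um
```

A shorter focal length would under-fill the lenslets (wasting angular samples), while a longer one would make neighboring subimages overlap, which is exactly the condition the design avoids.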