Endogenous chromophore mapping has been applied to distinguish healthy from malignant tissue, but challenges in adapting these techniques to flexible endoscopes have limited their exploration in gastrointestinal imaging. To enable investigative imaging in vivo, a clinical colonoscope was retrofitted with custom fiber optics for coupling with both standard-of-care and external light sources. A multispectral illumination source with eight narrowband channels was constructed from multimode laser diodes for measuring tissue reflectance. Following benchtop validation with calibration targets, the system and correction methods were applied in human screening colonoscopies to estimate the oxygen saturation of lesions and surrounding healthy tissue.
We present a modified colonoscope that allows for precise control over the illumination coherence, direction, and color. By capturing and processing images under different illumination conditions, this colonoscope generates maps of superficial blood flow, high spatial frequency 3D topography, reflectance, and chromophore concentrations. In this presentation, we describe the system design and characterize its contrast in benchtop experiments with various tissue phantoms. Finally, we will summarize our findings from using this multimodal imaging system on human participants undergoing colonoscopy screening.
With the rise in minimally invasive surgery and machine learning, there are emerging opportunities to improve patient outcomes with endoscopic techniques that quantify tissue shape and optical properties. We introduce a speckle-illumination stereo endoscope (SSE) that utilizes structured illumination to enhance both depth and optical property mapping. An SSE prototype was constructed and applied to fresh pig colon samples. SSE-estimated depth and optical property maps compare favorably to gold standard techniques. Requiring only minor modifications to existing commercial stereoscopes, the SSE could provide surgeons with improved visual depth perception and maps of biomarkers in vivo.
Screening colonoscopy is used to detect and remove lesions before they progress to colorectal cancer, but some lesions go undetected due to poor visual contrast in white light endoscopy. We present a retrofit clinical colonoscope capable of multispectral, topographic, and blood flow imaging for improving lesion contrast. We develop a custom fiber bundle to enable simultaneous illumination with commercial and research light sources. The research light source consists of nine wavelengths (405-659 nm) for multispectral imaging and a high-coherence source for speckle-flow imaging. Point sources circling the image sensor are individually toggled to generate topographic maps with photometric stereo.
The complete blood count (CBC) is a foundational diagnostic test, but its accessibility is limited due to the blood draw, expensive laboratory equipment, and trained personnel required. Here, we present a cell phone microscope design for achieving phase contrast in high resolution capillary imaging, which allows individual blood cells to be imaged for a non-invasive CBC. The cell phone microscope uses a reversed lens as an objective to maintain high resolution. Relay lenses create space for incorporation of an offset LED that can be critically imaged to produce oblique back illumination, resulting in phase contrast.
Urinalysis is an essential diagnostic tool for evaluating health and disease of the genitourinary tract. A urinalysis typically consists of dipstick testing, which can detect red blood cells, white blood cells, and bacteria, and microscopic evaluation of urine sediment after centrifugation, which further reveals other biomarkers such as crystals and casts. In the in-patient hospital setting, urinalysis is typically ordered after disease is suspected, drawing urine from the collection bag of a Foley catheter and sending the sample to a core laboratory for analysis. To improve access to urine biomarkers, we propose a holographic lens-free imaging (LFI) system that could allow automated bedside urine screening. LFI is uniquely suited for this task due to its low cost and compact nature, and its ability to reconstruct large volumes from a single hologram without the depth-of-field trade-off of conventional microscopy. Here, we build and demonstrate an LFI system capable of detecting important biomarkers such as E. coli in PBS, and red blood cells, casts, and crystals in urinalysis control phantoms. In the future, this compact system could be connected to the drainage tube of a patient's Foley catheter to enable real-time screening of urine at the bedside.
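Reconstruction of object planes throughout the sample volume from a single hologram is commonly performed by numerical backpropagation; below is a minimal sketch using the angular spectrum method (the function name and parameter values are illustrative assumptions, not the system's actual pipeline):

```python
import numpy as np

def angular_spectrum(hologram, z, wavelength, dx):
    """Numerically refocus an in-line hologram to depth z with the angular
    spectrum method. Evanescent components (negative argument under the
    square root) are clipped to zero axial frequency here for simplicity."""
    n, m = hologram.shape
    fx = np.fft.fftfreq(m, dx)                     # spatial frequencies (1/m)
    fy = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.clip(arg, 0.0, None))
    return np.fft.ifft2(np.fft.fft2(hologram) * np.exp(1j * kz * z))
```

Calling the function at a series of depths z produces a refocused stack from one captured frame, which is what avoids the depth-of-field trade-off of conventional microscopy.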
Oblique back-illumination capillaroscopy (OBC) has recently demonstrated clear images of unlabeled human blood cells in vivo. Combined with deep learning-based algorithms, this technology may enable non-invasive blood cell counting and analysis, as flowing red blood cells, platelets, and white blood cells can be observed in their native environment. To harness the full potential of OBC, new techniques and methods must be developed that provide ground truth data using human blood cells. Here we present such a model, where human blood cells with paired ground truth information are imaged flowing in a custom tissue-mimicking microfluidic device. This model enables the acquisition of OBC datasets that will help with both training and validating machine learning models for applications including the complete blood count, specific blood cell classification, and the study of hematologic disorders such as anemia.
Some tumor resection procedures, such as Mohs surgery, utilize intraoperative histology for tumor margin assessment. Gold-standard rapid histology methods are time-consuming for patients under anesthesia, and rapid freezing techniques are prone to artifacts. The recent development of microscopy with ultraviolet surface excitation (MUSE) introduces a new possibility for the rapid imaging of the cut tissue surface using fluorescent dyes. The high attenuation of ultraviolet light limits MUSE signals to thicknesses close to typical histology sections. To generate MUSE images with familiar H&E-like contrast, recent work has explored the transformation of MUSE images to "virtual" H&E-like images using unsupervised deep learning models trained on unpaired images of separate tissues treated with each stain. Here, we present a method for acquiring registered images of the same tissue with MUSE and real H&E imaging using sequential staining and dye removal. Tissue blocks are flash frozen and sectioned for mounting onto slides and staining with MUSE fluorescent dyes. After MUSE imaging, a sequential immersion of the slides in increasing concentrations of ethyl alcohol followed by rehydration, similar to steps in paraffin-based histology processing, is sufficient to remove all fluorescent dyes. Rinsed tissue slides are then subjected to traditional H&E staining and brightfield imaging. Data of registered image fields of skin and pancreas are presented along with initial machine learning-based transformations from MUSE to H&E contrast. This protocol will be useful for obtaining paired images for training, testing, and quantitative validation of virtual H&E reconstructions from MUSE images.
Oblique plane microscopy (OPM) is a powerful tool for monitoring biological processes due to its capability for high-resolution, rapid, optically-sectioned imaging through a single objective lens. Recently, our group demonstrated scattering-contrast OPM (sOPM) as a technique to image blood cells in situ and in vivo. To classify blood cells visualized with sOPM, the scattering signal must be better understood and further leveraged. Here we present a visualization and analysis of the scattering signal obtained by masking and imaging the final Fourier plane of the sOPM system. We demonstrate the angular distribution of the scattering signal and image with several aperture masks. Microsphere phantoms are imaged in the image plane and Fourier plane to demonstrate the difference in scattering behavior between Mie scatterers with large (4 μm) diameters and small (190 nm), Rayleigh-like scatterers similar to subcellular features such as granules. Circular apertures are used to isolate side scattering centered at 90 degrees from the angular extremes. A Michelson contrast of 0.20-0.25 was observed for 4 μm diameter spheres and 0.05-0.10 for 190 nm diameter spheres using a split aperture. Microsphere sizes are classified from images using split-aperture contrast and confirmed by fluorescence. Leveraging differential scattering angle contrast will enable the visual classification of blood cells, particularly white blood cells, where granules and other organelles present distinct side-scattering signals. Finally, the quantitative nature of the differential scattering angle contrast may enable machine learning-based classification and cell counting.
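For reference, the Michelson contrast reported above can be computed from a particle's paired split-aperture signals as follows (a minimal sketch; the inputs are assumed to be background-normalized mean intensities, which is an assumption of this example):

```python
import numpy as np

def michelson_contrast(i_one, i_two):
    """Michelson contrast C = (I_max - I_min) / (I_max + I_min) between the
    signals measured through the two halves of a split aperture."""
    i_max = np.maximum(i_one, i_two)
    i_min = np.minimum(i_one, i_two)
    return (i_max - i_min) / (i_max + i_min)
```

The metric is symmetric in its arguments, so the ordering of the two aperture halves does not matter.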
Oblique back-illumination capillaroscopy (OBC) has recently demonstrated high resolution, label-free images of human blood cells in vivo. This technology shows promise for a new chapter in blood analysis, where blood cell counts, morphology, and dynamics can be probed non-invasively. OBC provides high quality blood cell images when applied to the ventral tongue, where capillaries are superficial and melanin is minimal. However, the anatomy of this location poses unique and challenging constraints due to the highly muscular and mobile nature of the tongue and its presence within the oral cavity. This manuscript presents a portable and ergonomic dual-channel OBC system that is optimized for imaging the ventral tongue. The portable OBC system uses pneumatic stabilization to reduce capillary motion and is built upon an ophthalmic slit lamp housing to allow comfortable stabilization of the head and fine, 3-axis translation of the imaging probe. The signals from two diametrically opposed LEDs (530 nm and 650 nm) are imaged onto two time-synchronized CMOS sensors, providing combined phase-weighted and absorption-weighted contrast of blood cells at 200 Hz with a 165 × 220 μm field-of-view. This functional implementation of OBC technology will enable high resolution blood cell imaging of patients with hematologic disease.
The increasing performance and ubiquity of mobile phone cameras has led to several emerging opportunities for their use in global health and point-of-care diagnostics. High-resolution, low-cost microscopy can be achieved by pairing the cell phone lens with a second, identical lens in a reversed orientation, allowing 1x magnification over a large field of view. In previous work, we showed that reverse-lens mobile phone capillaroscopy can visualize optical absorption gaps (OAGs) in nailfold capillaries. The frequency of these OAGs is known to be inversely correlated with the degree of neutropenia. To extend this concept and enable the direct visualization of both red and white blood cells for more complete blood analysis, improved resolution and phase contrast are needed. Here, we present a design for a reverse-lens mobile phone capillaroscope that pairs two different cell phone lenses to increase magnification for enhanced visualization. From an iPhone 12 Pro, the telephoto and wide cameras are combined with reversed wide and ultrawide lenses. The lens pairs provide magnification up to 4.02x and resolution down to approximately 1.49 μm, whereas the previous design yielded a resolution of only 3.75 μm. We use this system to image human blood in a microfluidic capillary phantom.
Machine learning offers a powerful set of tools to make widefield endoscopic imaging more quantitative. This presentation covers our work in estimating pixel size, topography, optical properties, and molecular chromophores using structured illumination and generative adversarial networks. We aim to create a computational endoscope that will improve computer-aided detection and diagnosis.
Diffuser-based sensing has shown potential for inexpensive and compact optical systems. Here we demonstrate a low-cost diffuser-based computational funduscope that can recover pathological features of a model eye fundus. Our system implements an infinite-conjugate design by relaying the ocular lens onto the diffuser, which provides shift-invariance across a wide field-of-view (FOV). Our experiments show that fundus images can be reconstructed over a 33-degree FOV and that our device is robust to 4 D of refractive error using a single point-spread-function.
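Reconstruction in diffuser-based imagers of this type typically exploits a single calibrated point-spread-function together with the shift-invariance assumption. A minimal Wiener deconvolution sketch under those assumptions follows (the function name and regularization value are illustrative, not this system's actual reconstruction algorithm):

```python
import numpy as np

def wiener_reconstruct(measurement, psf, reg=1e-3):
    """Wiener deconvolution with a single calibrated PSF, assuming the
    shift-invariance provided by the infinite-conjugate relay. The PSF is
    assumed centered and the same shape as the measurement."""
    H = np.fft.fft2(np.fft.ifftshift(psf))         # optical transfer function
    M = np.fft.fft2(measurement)
    rec = np.fft.ifft2(np.conj(H) * M / (np.abs(H) ** 2 + reg))
    return np.real(rec)
```

With one calibrated PSF, the same inverse filter applies across the full FOV, which is why shift-invariance is the key property of the relay design.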
Tissue oxygenation (StO2), the fraction of oxygenated hemoglobin in biological tissue, is an important biomarker that can reveal information about tissue viability and underlying pathologies. The continuous monitoring of StO2 is also useful for surgical guidance and patient management. In recent years, Spatial Frequency Domain Imaging (SFDI) has emerged as an elegant solution for mapping wide-field StO2. However, conventional SFDI requires capturing a sequence of images at different spatial frequencies and wavelengths, resulting in slow acquisition times and challenges with moving objects. Model-based single-snapshot techniques have shortened the acquisition time but introduce image artifacts and decrease accuracy. Here we propose a deep-learning technique for real-time StO2 mapping from snapshot structured light images. We train content-aware generative adversarial networks (OxyGAN) on pairs of structured light input at 659 nm and 851 nm wavelengths and StO2 ground truth predicted by conventional SFDI. We demonstrate that OxyGAN is not only capable of rapid data acquisition and processing but is also more accurate than a model-based benchmark. We also compare OxyGAN to a hybrid model that uses separate networks to estimate optical absorption at two wavelengths followed by Beer-Lambert fitting. The end-to-end OxyGAN approach shows better performance in terms of both speed and accuracy. We additionally demonstrate real-time OxyGAN by applying it to videos of in vivo tissues. OxyGAN has the potential to enable wide-field, real-time, and accurate tissue oxygenation measurements in many clinical applications.
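The final step of the hybrid model described above, Beer-Lambert fitting of two-wavelength absorption maps, can be sketched as a per-pixel linear solve (the extinction coefficients below are illustrative placeholders, not tabulated hemoglobin values):

```python
import numpy as np

# Illustrative extinction coefficients for [HbO2, Hb] at each wavelength.
# Placeholder values only -- a real fit would use tabulated constants.
EPS = np.array([[3200.0, 800.0],   # 659 nm
                [1050.0, 780.0]])  # 851 nm

def sto2_from_absorption(mua_659, mua_851):
    """Solve mu_a = EPS @ [HbO2, Hb] per pixel via least squares, then
    compute StO2 = HbO2 / (HbO2 + Hb)."""
    mua = np.stack([mua_659.ravel(), mua_851.ravel()])      # (2, N)
    conc, *_ = np.linalg.lstsq(EPS, mua, rcond=None)        # (2, N)
    conc = np.clip(conc, 0.0, None)                          # no negative conc.
    sto2 = conc[0] / (conc[0] + conc[1] + 1e-12)
    return sto2.reshape(mua_659.shape)
```

Two wavelengths give a square system per pixel; more wavelengths would turn this into an overdetermined least-squares fit with the same code path.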
Spatial Frequency Domain Imaging (SFDI) is a powerful technique for non-contact tissue optical property and chromophore mapping over a large field of view. However, a major challenge that limits the clinical adoption of SFDI is that it requires carefully-controlled imaging geometry and the projection of known spatial frequencies. We present speckle-illumination SFDI (si-SFDI), a projector-free technique that measures tissue optical properties from structured illumination formed by randomized speckle patterns. We compute the local power spectral density of images under speckle illumination, from which a high-frequency and a low-frequency tissue-response parameter can be characterized for each pixel. A lookup table generated by Monte Carlo simulations is subsequently used to accurately determine optical absorption and reduced scattering coefficients. Compared to conventional SFDI, si-SFDI may be particularly useful for endoscopic applications due to its utilization of simple coherent illumination, which makes it more easily incorporated into existing endoscopic systems. Moreover, speckle illumination offers a large depth of focus compared to projector-based illumination. In this study, we explore wide-field optical property mapping with an endoscope camera and fiber-coupled laser speckle illumination. We apply this technique to tissue-mimicking silicone phantoms and biological tissues. The accuracy of si-SFDI is evaluated by comparing to optical properties measured by conventional SFDI. Future work could accelerate si-SFDI reconstruction by using parallel computing or machine learning algorithms.
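The lookup-table inversion step can be sketched as scattered linear interpolation over the simulated table (the table entries below are synthetic placeholders; a real table would come from the Monte Carlo simulations described above):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Placeholder table: each row maps a (low-frequency, high-frequency) tissue
# response pair to (mu_a, mu_s') in mm^-1. Values are illustrative only.
lut_responses = np.array([[0.9, 0.10], [0.8, 0.20],
                          [0.7, 0.15], [0.6, 0.25]])
lut_props = np.array([[0.01, 1.0], [0.02, 1.5],
                      [0.03, 1.2], [0.04, 1.8]])
_interp = LinearNDInterpolator(lut_responses, lut_props)

def invert_lut(low_f, high_f):
    """Interpolate (mu_a, mu_s') at each pixel's response pair."""
    pts = np.stack([low_f.ravel(), high_f.ravel()], axis=1)
    return _interp(pts).reshape(low_f.shape + (2,))
```

Queries outside the convex hull of the simulated table return NaN with this interpolator, which is a useful flag for pixels whose response falls outside the simulated range.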
Significance: Spatial frequency-domain imaging (SFDI) is a powerful technique for mapping tissue oxygen saturation over a wide field of view. However, current SFDI methods either require a sequence of several images with different illumination patterns or, in the case of single-snapshot optical properties (SSOP), introduce artifacts and sacrifice accuracy.
Aim: We introduce OxyGAN, a data-driven, content-aware method to estimate tissue oxygenation directly from single structured-light images.
Approach: OxyGAN is an end-to-end approach that uses supervised generative adversarial networks. Conventional SFDI is used to obtain ground truth tissue oxygenation maps for ex vivo human esophagi, in vivo hands and feet, and an in vivo pig colon sample under 659- and 851-nm sinusoidal illumination. We benchmark OxyGAN by comparing it with SSOP and a two-step hybrid technique that uses a previously developed deep learning model to predict optical properties followed by a physical model to calculate tissue oxygenation.
Results: When tested on human feet, cross-validated OxyGAN maps tissue oxygenation with an accuracy of 96.5%. When applied to sample types not included in the training set, such as human hands and pig colon, OxyGAN achieves a 93% accuracy, demonstrating robustness to various tissue types. On average, OxyGAN outperforms SSOP and a hybrid model in estimating tissue oxygenation by 24.9% and 24.7%, respectively. Finally, we optimize OxyGAN inference so that oxygenation maps are computed ∼10 times faster than previous work, enabling video-rate, 25-Hz imaging.
Conclusions: Due to its rapid acquisition and processing speed, OxyGAN has the potential to enable real-time, high-fidelity tissue oxygenation mapping that may be useful for many clinical applications.
I will present a deep learning framework for content-aware estimation of tissue optical properties from wide-field images. Spatial frequency domain imaging is used to acquire ground-truth measurements of scattering and absorption coefficients of a variety of tissues. A generative network is then adversarially trained to estimate these properties from new tissues directly from unstructured or structured light. This data-driven approach has some advantages in accuracy and speed compared to model-based approaches.
Generative adversarial networks (GANs) are among the most interesting and powerful tools in deep learning. GANs are capable of efficiently generating realistic new images given a relatively small set of training examples, enabling many exciting possibilities in biophotonics. This tutorial will introduce the basic concepts and tools useful for creating GAN models. Additionally, several emerging applications of GANs in biophotonics will be covered, including: noise reduction, resolution enhancement, histological analysis, lesion detection, and lesion classification.
Capillaroscopy is a simple microscopy technique able to measure important clinical biomarkers non-invasively. For example, optical absorption gaps between red blood cells in capillary vessels of the nailfold have been shown to correlate with severity of neutropenia. The direct visualization of individual white blood cells with capillaroscopic techniques is elusive because it is challenging to generate epi-illumination phase contrast in thick turbid media. Here, we evaluate white blood cell visibility with graded-field capillaroscopy in a flow phantom. We fabricate capillary phantoms with soft photolithography using PDMS doped with TiO2 and India ink to emulate skin optical properties. These glass-free phantoms feature channels embedded in scattering media at controlled depths (70-470 μm), as narrow as 15 × 15 μm, and permit blood flow up to 6 mm/s. We optimize the contrast of the graded-field capillaroscope in these tissue-realistic phantoms and demonstrate high speed imaging (200 Hz) of blood cells flowing through scattering media.
Automated segmentation of tissue and cellular structures in H&E images is an important first step towards automated histopathology slide analysis. For example, nuclei segmentation can aid in detecting pleomorphism, and epithelium segmentation can aid in identifying tumor-infiltrating lymphocytes. Existing deep learning-based approaches are often trained organ-wise, and training data for multi-organ segmentation networks lacks diversity. In this work, we propose to augment existing nuclei segmentation datasets using cycleGANs. We learn an unpaired mapping from perturbed, randomized polygon masks to pseudo-H&E images. We generate synthetic H&E patches from several different organs for nuclei segmentation. We then use an adversarial U-Net with spectral normalization for increased training stability for segmentation. This paired image-to-image translation-style network not only learns the mapping from H&E patches to segmentation masks but also learns an optimal loss function. Such an approach eliminates the need for a hand-crafted loss, which has been explored significantly for nuclei segmentation. We demonstrate that the average accuracy for multi-organ nuclei segmentation increases to 94.43% using the proposed synthetic data generation and adversarial U-Net-based segmentation pipeline, compared to 79.81% when no synthetic data and no adversarial loss were used.
Colorectal cancer is the fourth leading cause of cancer deaths worldwide; the standard for detection and prevention is the identification and removal of premalignant lesions through optical colonoscopy. More than 60% of colorectal cancer cases are attributed to missed polyps. Current approaches to automated polyp detection are limited by the amount of data available for training and the underrepresentation of non-polypoid lesions and lesions that are inherently difficult to label, and they do not incorporate information about the topography of the luminal surface. It has been shown that information related to the depth and topography of the luminal surface can boost subjective lesion detection. In this work, we add predicted depth information as an additional mode of data when training deep networks for polyp detection, segmentation, and classification. We use conditional GANs to predict depth from monocular endoscopy images and fuse these predicted depth maps with RGB white light images in feature space. Our empirical analysis demonstrates that we achieve state-of-the-art results for RGB-D polyp segmentation, with 98% accuracy on four different publicly available datasets. Moreover, we demonstrate an 87.24% accuracy on lesion classification. We also show that our networks can domain-adapt to a variety of data from different sources.
Skin cancer is the most commonly diagnosed cancer worldwide. It is estimated that over 5 million cases of skin cancer are diagnosed in the United States every year. Although less than 5% of all diagnosed skin cancers are melanoma, it accounts for over 70% of skin cancer-related deaths. In the past decade, the number of melanoma cases has increased by 53%. Recently, there has been significant work on segmentation and classification of skin lesions via deep learning. However, there is limited work on identifying attributes and clinically meaningful visual skin lesion patterns from dermoscopic images. In this work, we propose to use conditional GANs for skin lesion segmentation and attribute detection, and use these attributes to improve skin lesion classification. The proposed conditional GAN framework can generate segmentation and attribute masks from RGB dermoscopic images. The adversarial image-to-image translation-style architecture forces the generator to learn both local and global features. The Markovian discriminator classifies pairs of images and segmentation labels as being real or fake. Unlike previous approaches, such an architecture not only learns the mapping from dermoscopic images to segmentation and attribute masks but also learns an optimal loss function to train such a mapping. We demonstrate that this approach significantly improves the Jaccard index for segmentation (with a 0.65 threshold), up to 0.893. Fusing the lesion attributes for classification of lesions yields a higher accuracy compared to classification without predicted attributes.
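The thresholded Jaccard metric cited above can be sketched as follows (zeroing scores below 0.65 follows the common challenge convention, which is an assumption of this example):

```python
import numpy as np

def thresholded_jaccard(pred, truth, threshold=0.65):
    """Jaccard index (intersection over union) between binary masks,
    zeroed below the threshold per the challenge convention."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    iou = np.logical_and(pred, truth).sum() / union if union else 1.0
    return iou if iou >= threshold else 0.0
```

The thresholding makes the metric unforgiving: a segmentation that misses badly scores zero rather than partial credit.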
Febrile neutropenia (FN) is a common cause of hospitalization for cancer patients undergoing chemotherapy treatment. To screen for FN, patients require invasive blood draws and complete blood cell counts, which increase the risk of nosocomial infection while in an immunocompromised state. There is a pressing clinical need for non-invasive, point-of-care technology to frequently screen for FN, which, if detected early, can be prophylactically managed. A promising approach to address this need is capillaroscopy, through which blood cells are imaged in capillaries non-invasively. Visualization of the shadows caused by absorption of individual red blood cells is currently achievable, and a correlation between the absence of optical absorption gaps and severe neutropenia has been observed. However, conclusively identifying the physical origin of these optical absorption gaps for neutropenia diagnosis remains elusive. Here we present scattering oblique plane microscopy as a means of imaging moving scattering particles within a turbid medium, with the goal of eventually imaging and characterizing blood cells flowing in superficial capillaries in vivo. Our imaging system illuminates an oblique light sheet through a capillary bed and collects back-scatter using a single objective at frame rates of >200 Hz. To validate this system, we develop phantoms mimicking capillaries, with 200 μm diameter lumens embedded deep in silicone doped with TiO2 and India ink. Single 3 μm diameter polystyrene beads flowing through the capillaries are resolved with a signal-to-noise ratio of approximately 5:1 at a depth of 1 mean free path.
Colorectal cancer is the second leading cause of cancer deaths in the United States and causes over 50,000 deaths annually. The standard of care for colorectal cancer detection and prevention is an optical colonoscopy and polypectomy. However, over 20% of polyps are typically missed during a standard colonoscopy procedure, and 60% of colorectal cancer cases are attributed to these missed polyps. Surface topography plays a vital role in the identification and characterization of lesions, but topographic features often appear subtle to a conventional endoscope. Chromoendoscopy can highlight topographic features of the mucosa and has been shown to improve lesion detection rates, but it requires dedicated training and increases procedure time. Photometric stereo endoscopy captures this topography but is qualitative due to unknown working distances from each point of the mucosa to the endoscope. In this work, we use deep learning to estimate a depth map from an endoscope camera with four alternating light sources. Since endoscopy videos with ground truth depth maps are challenging to attain, we generated synthetic data using graphical rendering from an anatomically realistic 3D colon model and a forward model of a virtual endoscope with alternating light sources. We propose an encoder-decoder style deep network, where the encoder is split into four branches of sub-encoder networks that simultaneously extract features from each of the four sources and fuse these feature maps as the network goes deeper. This is complemented by skip connections, which maintain spatial consistency when the features are decoded. We demonstrate that, when compared to monocular depth estimation, this setup reduces the average NRMS error for depth estimation by 38% in a silicone colon phantom and by 31% in a pig colon.
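The Lambertian model underlying four-source photometric stereo can be sketched as a per-pixel least-squares solve (a minimal sketch: illumination directions are assumed known, distant unit vectors here, and the unknown endoscope working distance is precisely what makes the raw result qualitative without a depth estimate):

```python
import numpy as np

def photometric_normals(images, light_dirs):
    """Classic Lambertian photometric stereo: solve I = L @ g per pixel for
    the albedo-scaled normal g. images: (k, H, W) stack under k alternating
    sources; light_dirs: (k, 3) unit illumination directions."""
    k, h, w = images.shape
    g, *_ = np.linalg.lstsq(light_dirs, images.reshape(k, -1), rcond=None)
    albedo = np.linalg.norm(g, axis=0)               # |g| is the albedo
    normals = (g / (albedo + 1e-12)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```

Integrating the recovered normal field yields relative topography; an absolute depth estimate is what anchors it to physical scale.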
Endoscope size is a major design constraint that must be managed alongside the clinical demand for high-quality illumination and imaging. Existing commercial endoscopes most often use an arc lamp to produce bright, incoherent white light, requiring large-area fiber bundles to deliver sufficient illumination power to the sample. Moreover, the power instability of these light sources creates challenges for computer vision applications. We demonstrate an alternative illumination technique using red-green-blue laser light and a data-driven approach to combat the speckle noise that is a byproduct of coherent illumination. We frame the speckle artifact problem as an image-to-image translation task solved using conditional Generative Adversarial Networks (cGANs). To train the network, we acquire images illuminated with a coherent laser diode, with a laser diode source made partially coherent using a laser speckle reducer, and with an incoherent LED light source as the target domain. We train networks using laser-illuminated endoscopic images of ex-vivo porcine gastrointestinal tissues, augmented by images of laser-illuminated household and laboratory objects. The network is then benchmarked against state-of-the-art optical and image processing speckle reduction methods, achieving an increased peak signal-to-noise ratio (PSNR) of 4.1 dB, compared to 0.7 dB using optical speckle reduction, 0.6 dB using median filtering, and 0.5 dB using non-local means. This approach not only allows for endoscopes with smaller, more efficient light sources with extremely short triggering times, but also enables imaging modalities that require both coherent and incoherent sources, such as combined widefield and speckle flow contrast imaging in a single image frame.
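The PSNR benchmark used above can be sketched with the incoherent-illumination frame as the reference (an 8-bit intensity range is assumed in this example):

```python
import numpy as np

def psnr_db(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    despeckled output; higher values mean less residual speckle."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

The gains reported above are PSNR improvements relative to the raw coherent frames, so each method is scored by how much it raises this value.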
Colorectal cancer accounts for an estimated 8% of cancer deaths in the United States with a five-year survival rate of 55-75%. The early detection and removal of precancerous lesions is critical for reducing mortality, but subtle neoplastic growths, such as non-polypoid lesions, often go undetected during routine colonoscopy. Current approaches to flat or depressed lesion detection are ineffective due to the poor contrast of subtle features in white light endoscopy. Towards improving colorectal lesion contrast, we present an endoscopic light source with custom laser channels for multimodal color, topographic, and speckle contrast flow imaging. Three red-green-blue laser units, paired with laser speckle reducers, are coupled into endoscopic fiber optic light guides in a benchtop endoscope. Tissue phantom topography is reconstructed using alternating illumination of the laser units and a photometric stereo endoscopy algorithm. The contrast of flow regions is enhanced in an optical flow phantom using laser speckle contrast imaging. Further, the system retains the ability to offer white light and narrow band illumination modes with improved power efficiency, a reduced size, and longer lifetimes compared to conventional endoscopic arc lamp sources. This novel endoscopic light source design shows promise for increasing the detection of subtle lesions in routine colonoscopy screening.
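The laser speckle contrast computation referenced above reduces, per pixel, to the local ratio K = σ/μ over a sliding window of the raw speckle frame; flow blurs the speckle during the exposure and lowers K. A minimal sketch (the window size is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, window=7):
    """Local speckle contrast K = sigma / mean in a sliding window."""
    frame = frame.astype(float)
    mean = uniform_filter(frame, window)
    mean_sq = uniform_filter(frame ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)   # guard against fp noise
    return np.sqrt(var) / (mean + 1e-12)
```

Static regions retain fully developed speckle (K approaching 1), while flow regions wash out toward 0, which is what enhances their contrast.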
Wavefront sensing is typically accomplished with a Shack-Hartmann wavefront sensor (SHWS), where a CCD or CMOS sensor is placed at the focal plane of a periodic, microfabricated lenslet array. Tracking the displacement of the resulting spots in the presence of an aberrated wavefront yields a measurement of the relative wavefront introduced. A SHWS has a fundamental tradeoff between sensitivity and range, determined by the pitch and focal length of its lenslet array, such that the number of resolvable tilts is a constant. Recently, diffuser wavefront sensing (DWS) has been demonstrated by measuring the lateral shift of a coherent speckle pattern using the concept of the diffuser memory effect. Here we demonstrate that tracking distortions of the non-periodic caustic pattern produced by a holographic diffuser allows accurate autorefraction of a model eye with a number of resolvable tilts that extends beyond the fundamental limit of a SHWS. Using a multi-level Demons image registration algorithm, we show that the DWS provides a 2.5x increase in the number of resolvable prescriptions compared to a conventional SHWS while maintaining acceptable accuracy and repeatability for eyeglass prescriptions. We evaluate the performance of the DWS and SHWS in parallel with a coherent laser diode without (LD) and with a laser speckle reducer (LD+LSR), and with an incoherent light-emitting diode (LED), demonstrating that caustic tracking is compatible with both coherent and incoherent sources. Additionally, the DWS diffuser costs 40x less than a SHWS lenslet array, enabling affordable, large-dynamic-range autorefraction without moving parts.
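The constant-resolvable-tilts tradeoff stated above follows from the sensor geometry: the maximum measurable tilt scales as (pitch/2)/f before spots cross into neighboring sub-apertures, while the minimum scales as the centroiding precision over f, so their ratio is independent of focal length. A sketch under those assumptions (parameter values illustrative):

```python
def shws_resolvable_tilts(pitch_um, focal_mm, centroid_precision_um):
    """Shack-Hartmann tradeoff: ratio of the largest measurable tilt to the
    smallest detectable tilt, which cancels the focal length."""
    f_um = focal_mm * 1e3
    theta_max = (pitch_um / 2.0) / f_um          # spot confined to sub-aperture
    theta_min = centroid_precision_um / f_um     # centroiding-limited tilt
    return theta_max / theta_min
```

Changing the focal length trades sensitivity for range without changing the product, which is the fixed limit the caustic-tracking DWS is shown to exceed.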
Lumbar punctures (LPs) are interventional procedures used to collect cerebrospinal fluid. Because the target window is small, physicians have limited success performing the procedure, and it is especially difficult in obese patients due to the increased distance between bone and skin surface. We propose a simple and direct needle insertion platform that enables image formation by sweeping a needle with a single ultrasound element at its tip. The needle-shaped ultrasound transducer can not only sense the distance between the tip and a potential obstacle, such as bone, but also visually locate these structures by combining transducer location tracking with synthetic aperture focusing. The concept was validated through a simulation that revealed robust image reconstruction under expected errors in tip localization. The initial prototype was built into a 14 G needle and mounted on a holster equipped with a rotation shaft allowing one degree-of-freedom rotational sweeping and a rotation-tracking encoder. We experimentally evaluated the system using a metal-wire phantom mimicking highly reflective bone structures and a human spinal bone phantom. Images of the phantoms were reconstructed, and synthetic aperture reconstruction improved the image quality. These results demonstrate the system's potential as a real-time guidance tool for improving LPs.
Colorectal cancer is the second leading cause of cancer deaths in the United States. Identifying and removing premalignant lesions via colonoscopy can significantly reduce colorectal cancer mortality. Unfortunately, the protective value of screening colonoscopy is limited because more than one quarter of clinically-important lesions are missed on average. Most of these lesions are associated with characteristic 3D topographical shapes that appear subtle to a conventional colonoscope. Photometric stereo endoscopy captures this 3D structure but is inherently qualitative due to the unknown working distances from each point of the object to the endoscope. In this work, we use deep learning to estimate depth from a monocular endoscope camera. Significant amounts of endoscopy data with known depth maps are required to train a convolutional neural network for this task. Moreover, the training problem is challenging because colon texture is patient-specific and cannot be used to efficiently learn depth. To resolve these issues, we developed a photometric stereo endoscopy simulator and generated data with ground-truth depths from a virtual, texture-free colon phantom. These data were used to train a deep convolutional neural field network that estimates depth for test data with an accuracy of 84%. We use this depth estimate to implement a smart photometric stereo algorithm that reconstructs absolute depth maps. Applying this technique to an in-vivo human colonoscopy video of a single polyp viewed at varying distance, initial results show a reduction in polyp size measurement variation from 15.5% with conventional reconstruction to 3.4% with smart photometric reconstruction.
For decades, the incidence of esophageal adenocarcinoma (EAC) has risen, while the long-term survival rate remains poor. The progression of EAC is marked by superficial changes in cell and tissue microstructure. Early detection of EAC can reduce mortality, but current screening techniques require extensive biopsies because these tissue changes are invisible to conventional endoscopy. Optical coherence tomography (OCT) is being commercialized for screening and guiding biopsies, but it is expensive and requires scanning a small beam across the entire surface of the esophagus. Spatial frequency domain imaging (SFDI) can capture microscopic tissue signatures over a wide field of view. However, conventional SFDI integrates signal from many millimeters deep into tissue, which is beyond the depth at which OCT and histology observe abnormalities. We are developing a sub-diffuse SFDI system that measures tissue reflectance at spatial frequencies from 0 to 0.5 mm⁻¹. Optical property maps of absorption, reduced scattering, and qualitative scattering phase function differences are extracted using diffuse and Monte Carlo models. Scattering phase sensitivity was validated in agar phantoms containing polystyrene beads with a distribution of diameters. Varying the fractal dimension of the bead size distribution from 3.50 to 4.25, the measured reflectance varied by 41% at a constant scattering coefficient of 0.6 mm⁻¹ (851 nm, 0.5 mm⁻¹ spatial frequency). This approach was piloted in ex-vivo porcine tissue, where we observed strong scattering contrast between the esophagus, gastroesophageal junction, and stomach tissue. Future work will measure optical properties of ex-vivo human tissue to guide the design of an endoscope-compatible system.
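SFDI reflectance at a given spatial frequency is conventionally demodulated from three sinusoidal projections phase-shifted by 120°, with the AC amplitude recovered from the pairwise frame differences. A sketch of that standard demodulation step (the frequency, field size, and amplitudes here are synthetic, not measurements from the system described above):

```python
import numpy as np

def demodulate_ac(i1, i2, i3):
    """AC reflectance amplitude from three projections phased 0, 2pi/3, 4pi/3."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# Synthetic check: DC = 1.0, AC = 0.3 at 0.5 mm^-1 over a 10 mm field.
x = np.linspace(0, 10, 500)                       # position, mm
phases = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
frames = [1.0 + 0.3 * np.cos(2 * np.pi * 0.5 * x + p) for p in phases]
m_ac = demodulate_ac(*frames)                     # recovers ~0.3 everywhere
```

The recovered AC amplitude at each spatial frequency is what the diffuse and Monte Carlo models then invert for optical properties.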
Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, due in part to the lack of depth information and the low contrast of the colon surface. Estimating depth with conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can naturally be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since colonoscopy images with ground-truth depth maps are scarce and colon texture is highly patient-specific, we generated training images from a synthetic, texture-free colon phantom. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated computer-aided detection (CAD) to assist in identifying lesions.
Lumbar punctures (LPs) are interventional procedures used to collect cerebrospinal fluid (CSF), a bodily fluid needed to diagnose central nervous system disorders. Most lumbar punctures are performed blindly without imaging guidance. Because the target window is small, physicians can only accurately palpate the appropriate space about 30% of the time and perform a successful procedure after an average of three attempts. Although various imaging-based guidance systems have been developed to aid this procedure, they complicate it by introducing independent imaging modalities and requiring image-to-needle registration to guide needle insertion. Here, we propose a simple and direct needle insertion platform utilizing a single ultrasound element within the needle for dynamic sensing and imaging. The needle-shaped ultrasound transducer can not only sense the distance between the tip and a potential obstacle such as bone, but also visually locate structures by combining transducer location tracking with a back-projection-based, tracked synthetic aperture beamforming algorithm. The concept of the system was first validated through simulation, which revealed its tolerance to realistic localization error. The initial prototype of the single-element transducer was then built into a 14 G needle and mounted on a holster equipped with a rotation-tracking encoder. We experimentally evaluated the system using a metal-wire phantom mimicking highly reflective bone structures and an actual spinal bone phantom, with both controlled-motion and freehand scanning. An ultrasound image corresponding to the model phantom structure was reconstructed using the beamforming algorithm, and the resolution was improved compared to reconstruction without beamforming. These results demonstrate that the proposed system has the potential to serve as an ultrasound imaging system for lumbar puncture procedures.
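The tracked synthetic aperture reconstruction described above amounts to delay-and-sum back-projection: each encoder-tracked A-line is spread along arcs of constant round-trip time and the contributions are summed per pixel. A simplified 2D sketch (the geometry, sampling rate, and single point-target test are illustrative, not the prototype's parameters):

```python
import numpy as np

def das_reconstruct(rf, elem_pos, grid_x, grid_z, fs, c=1540.0):
    """Delay-and-sum back-projection for a swept single-element transducer.

    rf       : (n_positions, n_samples) pulse-echo A-lines
    elem_pos : (n_positions, 2) element (x, z) locations from the encoder
    """
    gx, gz = np.meshgrid(grid_x, grid_z)
    img = np.zeros_like(gx, dtype=float)
    for a_line, (ex, ez) in zip(rf, elem_pos):
        dist = np.hypot(gx - ex, gz - ez)                # element-to-pixel range
        idx = np.round(2.0 * dist / c * fs).astype(int)  # round-trip sample index
        valid = idx < a_line.size
        img[valid] += a_line[idx[valid]]
    return img

# Point target at (0 mm, 10 mm) seen from 11 element positions along x.
c, fs = 1540.0, 20e6
elem_pos = np.column_stack([np.linspace(-5e-3, 5e-3, 11), np.zeros(11)])
rf = np.zeros((11, 600))
for i, (ex, ez) in enumerate(elem_pos):
    d = np.hypot(0.0 - ex, 10e-3 - ez)
    rf[i, int(np.round(2.0 * d / c * fs))] = 1.0
gx = np.linspace(-5e-3, 5e-3, 21)
gz = np.linspace(5e-3, 15e-3, 21)
img = das_reconstruct(rf, elem_pos, gx, gz, fs, c)
zi, xi = np.unravel_index(img.argmax(), img.shape)   # peaks at the target pixel
```

The widening of the effective aperture as the needle sweeps is what improves lateral resolution relative to a single stationary A-line.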
The growing interest in performing high-resolution, deep-tissue imaging has galvanized the use of longer excitation wavelengths and three-photon-based techniques in nonlinear imaging modalities. This study presents a threefold improvement in maximum imaging depth of ex vivo porcine vocal folds using third-harmonic generation (THG) microscopy at a 1552-nm excitation wavelength compared to two-photon microscopy (TPM) at a 776-nm excitation wavelength. The experimental, analytical, and Monte Carlo simulation results reveal that THG significantly improves the maximum imaging depth observed in TPM, from 140 to 420 μm in a highly scattering medium, reaching the expected theoretical imaging depth of seven extinction lengths. This value almost doubles the previously reported normalized imaging depths of 3.5 to 4.5 extinction lengths using three-photon-based imaging modalities. Since tissue absorption is substantial at the excitation wavelength of 1552 nm, this study assesses thermal damage to the tissue during imaging by obtaining the depth-resolved temperature distribution through a numerical simulation incorporating an experimentally obtained thermal relaxation time (τ). By shuttering the laser for a period of 2τ, the numerical algorithm estimates a maximum temperature increase of ∼2°C at the maximum imaging depth of 420 μm. The paper demonstrates that THG imaging at 1552 nm with effective thermal management proves to be a powerful deep imaging modality for highly scattering and absorbing tissues, such as scarred vocal folds.
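The "extinction lengths" metric above normalizes imaging depth by tissue attenuation; in a simplified ballistic-excitation model, an n-photon signal falls as exp(−n·z/l_ext) with depth z. A sketch of that bookkeeping (the model is a deliberate simplification; the extinction length is inferred from the reported result rather than measured here):

```python
import numpy as np

# Reported THG result: 420 um maximum imaging depth = 7 extinction lengths,
# implying an extinction length of 60 um at 1552 nm in this tissue.
l_ext = 420.0 / 7.0

def ballistic_signal(z_um, n_photons, l_ext_um):
    """Relative n-photon signal vs depth in a simplified ballistic model."""
    return np.exp(-n_photons * z_um / l_ext_um)

# One extinction length deep, a three-photon process drops by exp(-3).
drop = ballistic_signal(l_ext, 3, l_ext)
```

The steep exp(−3z/l_ext) falloff is why higher-order processes confine signal generation to the focus and, with sufficient power, extend the usable depth.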
KEYWORDS: Optical simulations, Beam shaping, Skin, Multiphoton microscopy, Point spread functions, Light scattering, Luminescence, Microscopes, Signal attenuation, Monte Carlo methods
Multiphoton fluorescence microscopy (MPM) is a method for high-resolution, non-invasive investigation of biological tissue. The aim of introducing an annular-shaped laser beam is to reduce the out-of-focus background signal, thereby improving imaging of light-scattering tissue such as human skin. Simulations show that 50% of the beam radius can be blocked while preserving the shape of the point spread function. Initial experiments on a phantom consisting of fluorescein and fluorescent beads embedded in agar, using a custom-built MPM setup, show that introducing a simple beam blocker to create an annular beam reduces the background signal by approximately 5%. Future work will include optimizing the setup and creating phantoms with stronger light-scattering properties.
Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized, and the components can be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real time. We show that this system can capture color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique which allows high spatial frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging.
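At the heart of photometric stereo is a per-pixel Lambertian inversion: with k ≥ 3 images under known light directions L, the intensities satisfy I = L·(albedo·n), solved in least squares for the albedo-scaled normal. A minimal sketch (the light directions and flat-patch test are illustrative; PSE itself recovers only the high-spatial-frequency topography because the working distance is unknown):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Per-pixel albedo-scaled surface normals from Lambertian shading.

    images     : (k, h, w) intensities under k known directional lights
    light_dirs : (k, 3) unit illumination vectors
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                               # (k, h*w)
    G, *_ = np.linalg.lstsq(np.asarray(light_dirs, float),
                            I, rcond=None)                  # G = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.where(albedo > 0, albedo, 1.0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic check: a flat patch with normal (0, 0, 1) and albedo 0.8.
L = np.array([[1, 0, 1], [-1, 0, 1], [0, 1, 1]], float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.stack([0.8 * (l @ true_n) * np.ones((4, 4)) for l in L])
n, a = photometric_stereo(imgs, L)
```

Integrating the recovered normal field (e.g., via a Poisson solver) then yields the relative surface topography displayed alongside the color image.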
Oxygenation measurements are widely used in patient care. However, most clinically available instruments currently consist of contact probes that only provide global monitoring of the patient (e.g., pulse oximetry probes) or local monitoring of small areas (e.g., spectroscopy-based probes). Visualization of oxygenation over large areas of tissue, without a priori knowledge of the location of defects, has the potential to improve patient management in many surgical and critical care applications. In this study, we present a clinically compatible multispectral spatial frequency domain imaging (SFDI) system optimized for surgical oxygenation imaging. This system was used to image tissue oxygenation over a large area (16×12 cm) and was validated during preclinical studies by comparing results obtained with an FDA-approved clinical oxygenation probe. Skin flap, bowel, and liver vascular occlusion experiments were performed on Yorkshire pigs and demonstrated that over the course of the experiment, relative changes in oxygen saturation measured using SFDI had an accuracy within 10% of those made using the FDA-approved device. Finally, the new SFDI system was translated to the clinic in a first-in-human pilot study that imaged skin flap oxygenation during reconstructive breast surgery. Overall, this study lays the foundation for clinical translation of endogenous contrast imaging using SFDI.
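Once SFDI yields per-pixel absorption coefficients at two or more wavelengths, oxygen saturation follows from a small linear inversion of Beer-Lambert chromophore contributions: μa(λ) = ε_HbO2(λ)·[HbO2] + ε_Hb(λ)·[Hb], with StO2 = [HbO2]/([HbO2]+[Hb]). A sketch of that step (the wavelengths and extinction coefficients are approximate values from tabulated hemoglobin spectra, used here for illustration, not the system's calibration):

```python
import numpy as np

# Approximate molar extinction coefficients [cm^-1 / (mol/L)] of oxy- and
# deoxyhemoglobin near 660 nm and 850 nm; treat these values as illustrative.
E = np.array([[320.0, 3227.0],    # 660 nm: [eps_HbO2, eps_Hb]
              [1058.0, 691.0]])   # 850 nm

def oxygen_saturation(mu_a_per_cm):
    """StO2 from absorption coefficients measured at the two wavelengths."""
    c_hbo2, c_hb = np.linalg.solve(E, mu_a_per_cm)
    return c_hbo2 / (c_hbo2 + c_hb)

# Round trip: 60 uM HbO2 + 40 uM Hb -> StO2 = 0.60.
mu_a = E @ np.array([60e-6, 40e-6])
sto2 = oxygen_saturation(mu_a)
```

With more than two wavelengths the same inversion becomes an overdetermined least-squares fit, which improves robustness to noise in the recovered absorption maps.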
Introduction: Two major disadvantages of currently available oxygenation probes are the need for contact with the skin and long measurement stabilization times. A novel oxygenation imaging device based on spatial frequency domain and spectral principles has been designed, validated preclinically in pigs, and validated clinically in humans. Importantly, this imaging system has been designed to operate under the rigorous conditions of an operating room. Materials and Methods: Optical property reconstruction and wavelength selection were optimized to allow fast and reliable oxyhemoglobin and deoxyhemoglobin imaging under realistic conditions. In vivo preclinical validation against commercially available contact oxygenation probes was performed on pigs undergoing arterial and venous occlusions. Finally, the device was used clinically to image skin flap oxygenation during a pilot study on women undergoing breast reconstruction after mastectomy. Results: A novel illumination head containing a spatial light modulator (SLM) and a novel fiber-coupled high-power light source were constructed. Preclinical experiments showed similar values between local probes and the oxygenation imaging system, with measurement times of the new system being < 500 ms. During pilot clinical studies, the imaging system provided near real-time oxyHb, deoxyHb, and saturation measurements over large fields of view (> 300 cm²). Conclusion: A novel optical oxygenation imaging system has the potential to replace contact probes during human surgery and to provide quantitative, wide-field measurements in near real-time.
Endogenous fluorescence provides morphological, spectral, and lifetime contrast that can indicate disease states in tissues. Previous studies have demonstrated that two-photon autofluorescence microscopy (2PAM) can be used for noninvasive, three-dimensional imaging of epithelial tissues down to approximately 150 μm beneath the skin surface. We report ex-vivo 2PAM images of epithelial tissue from a human tongue biopsy down to 370 μm below the surface. At greater than 320 μm deep, the fluorescence generated outside the focal volume degrades the image contrast to below one. We demonstrate that these imaging depths can be reached with 160 mW of laser power (2-nJ per pulse) from a conventional 80-MHz repetition rate ultrafast laser oscillator. To better understand the maximum imaging depths that we can achieve in epithelial tissues, we studied image contrast as a function of depth in tissue phantoms with a range of relevant optical properties. The phantom data agree well with the estimated contrast decays from time-resolved Monte Carlo simulations and show maximum imaging depths similar to that found in human biopsy results. This work demonstrates that the low staining inhomogeneity (∼20) and large scattering coefficient (∼10 mm−1) associated with conventional 2PAM limit the maximum imaging depth to 3 to 5 mean free scattering lengths deep in epithelial tissue.
We demonstrate the use of gold nanorods as molecularly targeted contrast agents for two-photon luminescence (TPL) imaging of cancerous cells 150 μm deep inside a tissue phantom. We synthesized gold nanorods of 50 nm × 15 nm size with a longitudinal surface plasmon resonance of 760 nm. Gold nanorods were conjugated to antibodies against epidermal growth factor receptor (EGFR) and used to label A431 human epithelial skin cancer cells in a collagen matrix tissue phantom. Using a 1.4 NA oil immersion objective lens, we found that the excitation power needed for similar emission intensity in TPL imaging of labeled cells was up to 64 times less than that needed for two-photon autofluorescence (TPAF) imaging of unlabeled cells, which would correspond to a more than 4,000-fold increase in emission intensity under equal excitation energy. However, aberrations due to the refractive index mismatch between the immersion oil and the sample limit imaging depth to 75 μm. Using a 0.95 NA water immersion objective lens, we observe robust two-photon emission signal from gold nanorods in the tissue phantoms at depths of up to 150 μm. Furthermore, the increase in excitation energy required to maintain a constant emission signal intensity as imaging depth increased was the same in both labeled and unlabeled phantoms, suggesting that at the concentrations used, the addition of gold nanorods did not appreciably increase the bulk scattering coefficient of the sample. The remarkable TPL brightness of gold nanorods in comparison to the TPAF signal makes them an attractive contrast agent for early detection of cutaneous melanoma.