Recent advances in multiframe blind deconvolution for ground-based telescopes are presented. The paper focuses on practical aspects of the software and algorithm. (1) A computer simulation that models atmospheric turbulence, noise, and other effects, used for testing and evaluating the deconvolution system, is explained. (2) A post-processing algorithm that corrects for glint due to specular and other bright reflections is presented. This glint correction is automated by a spatially adaptive scheme that calculates statistics of brightness levels. (3) Efforts are underway to achieve the computational speed needed for on-the-fly processing at streaming frame rates, using the massively parallel processing of graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA) language.
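The spatially adaptive glint statistics described above might be sketched as follows. This is a minimal illustration, assuming a local mean/standard-deviation threshold with median replacement; the function name `suppress_glint` and the `window` and `k` parameters are hypothetical, not the paper's actual scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def suppress_glint(frame, window=15, k=3.0):
    """Flag pixels far above the local brightness statistics and
    replace them with a local median (illustrative sketch only)."""
    f = frame.astype(np.float64)
    local_mean = uniform_filter(f, size=window)
    local_sq = uniform_filter(f * f, size=window)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    glint = f > local_mean + k * local_std        # spatially adaptive threshold
    repaired = np.where(glint, median_filter(f, size=window), f)
    return repaired, glint
```

Because the threshold adapts to local statistics, a bright star field and a dark background are treated with different effective cutoffs, which is the point of the spatially adaptive scheme.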
Recent advances are presented in multiframe blind deconvolution (MFBD) of ground-based telescope imagery of low-earth-orbit objects. The iterative algorithm uses maximum likelihood estimation as its optimization criterion and is modeled on the well-known expectation-maximization (EM) algorithm. New renditions of the algorithm simplify the phase reconstruction, thereby reducing the complexity of the original EM algorithm. Examples are shown with and without adaptive optics (AO). The system is being designed for on-the-fly streaming video operation.
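The MLE/EM criterion underlying MFBD can be illustrated with a multiframe Richardson-Lucy update, the classical EM iteration for Poisson data. This sketch assumes the PSFs are known and normalized; the blind algorithm additionally alternates an analogous update for the per-frame PSFs, and the paper's phase-reconstruction simplifications are not shown.

```python
import numpy as np

def rl_multiframe(frames, psfs, n_iter=20):
    """Multiframe Richardson-Lucy (EM for Poisson data): one shared
    object estimate updated from several frames, each with its own PSF.
    Illustrative sketch; PSFs are assumed known and sum to one."""
    F = np.fft.fft2
    iF = lambda X: np.real(np.fft.ifft2(X))
    obj = np.full_like(frames[0], float(frames[0].mean()))
    for _ in range(n_iter):
        update = np.zeros_like(obj)
        for g, h in zip(frames, psfs):
            H = F(h)
            est = iF(F(obj) * H)                      # forward blur
            # correlate the data/model ratio with the PSF (EM backprojection)
            update += iF(F(g / np.maximum(est, 1e-12)) * np.conj(H))
        obj *= update / len(frames)
    return obj
```

Averaging the backprojected ratios over frames is what lets multiple short-exposure frames, each with a different atmospheric PSF, constrain a single object estimate.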
It is generally believed that photoreceptor integrity is related to the ellipsoid zone appearance in optical coherence tomography (OCT) B-scans. Algorithms and software were developed for viewing and analyzing the ellipsoid zone. The software performs the following: (a) automated ellipsoid zone isolation in the B-scans, (b) en-face view of the ellipsoid-zone reflectance, (c) alignment and overlay of (b) onto reflectance images of the retina, and (d) alignment and overlay of (c) with microperimetry sensitivity points. Dataset groups were compared from normal and dry age-related macular degeneration (DAMD) subjects. Scalar measurements for correlation against condition included the mean and standard deviation of the ellipsoid zone's reflectance. The image-processing techniques for automatically finding the ellipsoid zone are based upon a calculation of optical flow, which tracks the edges of laminated structures across an image. Statistical significance was shown in t-tests of these measurements with the population pools separated into normal and DAMD subjects. A display of en-face ellipsoid-zone reflectance shows a clear and recognizable difference between the normal and DAMD subjects in that they show generally uniform and nonuniform reflectance, respectively, over the region near the macula. Regions surrounding points of low microperimetry (μP) sensitivity have nonregular and lower levels of ellipsoid-zone reflectance nearby. These findings support the idea that photoreceptor integrity could be affecting both the ellipsoid-zone reflectance and the sensitivity measurements.
KEYWORDS: Point spread functions, Principal component analysis, 3D image reconstruction, 3D image processing, Reconstruction algorithms, Absorption, Microscopes, Deconvolution, 3D modeling, Objectives
3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach, are also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid the use of contrast agents, such as fluorescent or absorbing dyes, which can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy, generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy, and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm optimizes an objective function, the I-divergence, while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.
Dynamic indocyanine green imaging uses a scanning laser ophthalmoscope and a fluorescent dye to produce movies of the dye-filling pattern in the retina and choroid of the eye. It is used for evaluating choroidal neovascularization. Movies are examined to identify the anatomy of the pathology for planning treatment and to evaluate progression or response to treatment. The popularity of this approach is limited by the complexity and difficulty of interpreting the movies. Software algorithms were developed to produce images from the movies that are easy to interpret. A mathematical model is formulated of the flow dynamics, and a fitting algorithm is designed that solves for the flow parameters. The images provide information about flow and perfusion, including regions of change between examinations. Imaged measures include the dye fill-time, temporal dispersion, and magnitude of the dye dilution temporal curves associated with image pixels. Cases show how the software can help to identify clinically relevant anatomy such as feeder vessels, drain vessels, capillary networks, and normal choroidal draining vessels. As a potential tool for research into the character of neovascular conditions and treatments, it reveals the flow dynamics and character of the lesion. Future varieties of this methodology may be used for evaluating the success of engineered tissue transplants, surgical flaps, reconstructive surgery, breast surgery, and many other surgical applications where flow, perfusion, and vascularity of tissue are important.
Movies acquired from fundus imaging using Indocyanine Green (ICG) and a scanning laser ophthalmoscope
provide information for identifying vascular and other retinal abnormalities. Today, the main limitation of this modality
is that it requires esoteric training for interpretation. A straightforward interpretation of these movies by objective
measurements would aid in eliminating this training barrier.
A software program has been developed and tested that produces and visualizes 2D maps of perfusion
measures. The program corrects for frame-to-frame misalignment caused by eye motion, including rigid misalignment
and warp. The alignment method uses a cross-correlation operation that automatically detects the displacement due to motion
between adjacent frames. The dynamic ICG (d-ICG) movie is further corrected by removing flicker and vignetting artifacts. Each pixel
in the corrected movie sequence is fit with a least-squares spline to yield a smooth intensity temporal profile. From the
dynamics of these intensity curves, several perfusion measures are calculated. The most effective of these measures
include a metric that represents the amount of time required for a vessel to fill with dye, a metric that represents the
diffusion of dye, and a metric that is affected by local blood volume. These metrics are calculated from movies acquired
before and after treatment for a neovascular condition. A comparison of these before and after measures may someday
provide information to the clinician that helps them to evaluate disease progression and response to treatment.
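The cross-correlation alignment step described above can be sketched as follows: a minimal integer-pixel version computed via FFT, assuming periodic boundaries and ignoring the warp correction that the program also performs. The function name `estimate_shift` is illustrative.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer-pixel translation between two frames from the
    peak of their FFT-based cross-correlation. Returns the (row, col)
    shift to apply (e.g. with np.roll) to align `frame` with `ref`."""
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))))
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # wrap shifts larger than half the frame size to negative displacements
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))
```

A subpixel or warp-aware version would refine this estimate locally, but the correlation peak alone already captures the rigid frame-to-frame motion.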
A fundus camera is an optical system designed to illuminate and image the retina while minimizing stray light and backreflections.
Modifying such a device requires characterization of the optical path in order to meet the new design goals
and avoid introducing problems. This work describes the characterization of one system, the Topcon TRC-50F,
necessary for converting this camera from film photography to spectral imaging with a CCD. This conversion consists of
replacing the camera's original xenon flash tube with a monochromatic light source and the film back with a CCD. A
critical preliminary step of this modification is determining the spectral throughput of the system, from source to sensor,
and ensuring there are sufficient photons at the sensor for imaging. This was done for our system by first measuring the
transmission efficiencies of the camera's illumination and imaging optical paths with a spectrophotometer. Combining
these results with existing knowledge of the eye's reflectance, a relative sensitivity profile is developed for the system.
Image measurements from a volunteer were then made using a few narrowband sources of known power and a calibrated
CCD. With these data, a relationship between photoelectrons/pixel collected at the CCD and narrowband illumination
source power is developed.
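The photon-budget reasoning above can be sketched as a back-of-envelope calculation relating narrowband source power to expected photoelectrons per pixel. All parameter values in the usage below are illustrative examples, not the calibrated figures from the measurements.

```python
def photoelectrons_per_pixel(power_w, wavelength_nm, exposure_s,
                             throughput, qe, n_pixels):
    """Convert narrowband source power to expected photoelectrons per
    detector pixel (simple budget: no eye reflectance term included)."""
    h = 6.626e-34        # Planck constant, J*s
    c = 2.998e8          # speed of light, m/s
    photon_energy = h * c / (wavelength_nm * 1e-9)   # J per photon
    photons = power_w * exposure_s * throughput / photon_energy
    return qe * photons / n_pixels
```

In practice the measured transmission efficiencies of the illumination and imaging paths, and the eye's spectral reflectance, would enter through the `throughput` factor.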
A relationship has been reported by several research groups [1 - 4] between the density and shapes of nerve fibers in the cornea and the existence and severity of peripheral neuropathy. Peripheral neuropathy is a complication of several prevalent diseases or conditions, which include diabetes, HIV, prolonged alcohol overconsumption and aging. A common clinical technique for confirming the condition is intramuscular electromyography (EMG), which is invasive, so a noninvasive technique like the one proposed here carries important potential advantages for the physician and patient.
A software program that automatically detects the nerve fibers, counts them, and measures their shapes is being developed and tested. Tests were carried out with a database of subjects with levels of severity of diabetic neuropathy as determined by EMG testing. Results from this testing, which include a linear regression analysis, are shown.
Retinal thickness maps obtained using a scanning laser ophthalmoscope are useful in the diagnosis of macular edema and
other diseases that cause changes in the retinal thickness. However, the thickness measurements are adversely affected
by the presence of blood vessels. This paper studies the effect that the blood vessels have on the computation of the
retinal thickness. The retinal thickness is estimated using maximum-likelihood resolution with anatomical constraints.
The blood vessels are segmented using local image features. Comparison of the retinal thickness with and without the
blood vessel removal is made using correlation coefficient and I-divergence.
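The I-divergence used for the comparison can be sketched as follows: a minimal implementation of the Csiszár I-divergence between two nonnegative maps, such as thickness maps with and without vessel removal. The small `eps` guard against log-of-zero is an assumption of this sketch.

```python
import numpy as np

def i_divergence(p, q, eps=1e-12):
    """Csiszar I-divergence between two nonnegative arrays; it is
    nonnegative and equals zero only when the arrays are identical."""
    p = np.asarray(p, dtype=np.float64) + eps
    q = np.asarray(q, dtype=np.float64) + eps
    return float(np.sum(p * np.log(p / q) - p + q))
```

Unlike the correlation coefficient, the I-divergence is sensitive to absolute intensity differences, which is why the two measures complement each other in the comparison.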
A multistage algorithm is presented, whose components are based upon maximum likelihood estimation (MLE). From
3D scanning laser ophthalmoscope (SLO) image data, the algorithm finds the positions of the two anatomical boundaries
of the eye's fundus that define the retina, which are the internal limiting membrane (ILM) and the retinal pigment
epithelium (RPE). The retinal thickness is then calculated by subtraction. Retinal thickness is useful for indicating,
assessing risk of, and following several diseases, including various forms of macular edema and cysts.
KEYWORDS: Deconvolution, Super resolution, Point spread functions, Microscopes, 3D image processing, Microscopy, Signal to noise ratio, Cameras, Diffraction, Image processing
Optical light microscopy is a predominant modality for imaging living cells, with the maximum resolution typically diffraction limited to approximately 200 nm. The objective of this project is to enhance the resolution capabilities of optical light microscopes using image-processing algorithms, to produce super-resolved imagery at a sub-pixel level. The sub-pixel algorithm is based on maximum-likelihood iterative deconvolution of photon-limited data, and reconstructs the image at a finer scale than the pixel limitation of the camera. The software enhances the versatility of light microscopes and enables the observation of sub-cellular components at a resolution two to three times finer than previously possible. Adaptive blind deconvolution is used to automatically determine the point spread function from the observed data. The technology also allows camera-binned or sub-sampled (aliased) data to be correctly processed. Initial investigations used computer simulations and 3D imagery from widefield epi-fluorescence light microscopy.
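The idea of reconstructing on a grid finer than the camera pixels can be illustrated by placing a binning operator in the Richardson-Lucy forward model. This is a sketch under simplifying assumptions (2x2 binning, periodic boundaries, a known PSF that sums to one); it is not the project's blind algorithm, and all function names are illustrative.

```python
import numpy as np

def bin2(a):
    # 2x2 camera binning: each detector pixel sums four fine-grid pixels
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).sum(axis=(1, 3))

def unbin2(a):
    # adjoint of bin2: replicate each detector pixel onto the fine grid
    return np.kron(a, np.ones((2, 2)))

def rl_subpixel(g, psf_fine, n_iter=10):
    """Richardson-Lucy with 2x camera binning in the forward model, so the
    object estimate lives on a grid finer than the detector pixels."""
    F, iF = np.fft.fft2, lambda X: np.real(np.fft.ifft2(X))
    H = F(psf_fine)                               # PSF on the fine grid
    obj = unbin2(g) / 4.0                         # fine-grid starting estimate
    for _ in range(n_iter):
        est = bin2(iF(F(obj) * H))                # forward: blur, then bin
        ratio = unbin2(g / np.maximum(est, 1e-12))
        obj *= iF(F(ratio) * np.conj(H))          # backproject the ratio
    return obj
```

Because the PSF is applied on the fine grid before binning, frequency content beyond the detector's pixel Nyquist limit can be recovered when it is present in the optical passband, which is how aliased or binned data are processed correctly.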
A complete system for object segmentation, counting, quantification, and tracking from microscopic images was implemented. We found that image deconvolution and reconstruction operations are essential to the success of any general-purpose segmentation algorithm and hence are of paramount importance for a counting and tracking software system. Wavelet-based image enhancement, background equalization, and noise suppression routines are the components in our novel general-purpose segmentation algorithm. Simple object recognition based on averages and preset tolerances suffices for most applications. As expected, boundary smoothing is important if watershed-based blob separation is to be used. One of the challenges of a general-purpose counting and tracking system is the need for a large number of object quantification components (features). In tracking we found that incorporating weighted features into an error function improves the accuracy over just the path coherence criterion and that evaluating correspondences over multiple time frames improves the accuracy over using only two consecutive time frames.
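The weighted-feature correspondence error mentioned above might be sketched as follows. The feature vectors, weights, and function names are illustrative, and the multi-frame evaluation described in the text is omitted for brevity.

```python
import numpy as np

def correspondence_cost(track_feats, cand_feats, weights):
    """Weighted sum of absolute feature differences between a tracked
    object and a candidate object in the next frame (illustrative)."""
    diff = np.abs(np.asarray(track_feats) - np.asarray(cand_feats))
    return float(np.dot(weights, diff))

def best_match(track_feats, candidates, weights):
    """Pick the candidate with the lowest weighted-feature error."""
    costs = [correspondence_cost(track_feats, c, weights) for c in candidates]
    return int(np.argmin(costs))
```

Weighting lets stable features (e.g. area) dominate noisy ones, which is the reported advantage over a path-coherence criterion alone.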
Fluorescence resonance energy transfer (FRET) is a fluorescence microscope imaging process involving nonradiative energy transfer between two fluorophores (the donor and the acceptor). FRET is used to detect chemical interactions and, in some cases, measure the distance between molecules. Existing approaches do not always compensate well for bleed-through in excitation, cross-talk in emission detection, and electronic noise in image acquisition. We have developed a system to automatically search for maximum-likelihood estimates of the FRET image, donor concentration, and acceptor concentration. It also produces other system parameters, such as excitation/emission filter efficiency and FRET conversion factor. The mathematical model is based upon a Poisson process since the CCD camera is a photon-counting device. The main advantage of the approach is that it automatically compensates for bleed-through and cross-talk degradations. Tests are presented with synthetic images and with real data referred to as positive and negative controls, where FRET is known to occur and to not occur, respectively. The test results verify the claimed advantages by showing consistent accuracy in detecting FRET and by showing improved accuracy in calculating FRET efficiency.
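A simpler, non-ML way to see the bleed-through/cross-talk compensation problem is per-pixel linear unmixing with a calibrated mixing matrix. This sketch is an assumption for illustration only; the system described above instead fits a Poisson maximum-likelihood model, which also estimates the system parameters and handles photon noise correctly.

```python
import numpy as np

def unmix_fret(channels, M):
    """Per-pixel linear unmixing of donor, acceptor, and FRET contributions
    from three detection channels, given a 3x3 mixing matrix M that encodes
    bleed-through and cross-talk (hypothetical calibration values)."""
    obs = np.stack([np.asarray(c, dtype=np.float64).ravel() for c in channels])
    sources, *_ = np.linalg.lstsq(M, obs, rcond=None)   # solve M @ s = obs
    return [s.reshape(np.shape(channels[0])) for s in sources]
```

The least-squares solution recovers the true sources exactly only in the noise-free case; under Poisson counting noise, the ML formulation is the statistically appropriate replacement.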
A deconvolution algorithm for use with scanning laser ophthalmoscope (SLO) data is being developed. The SLO is fundamentally a confocal microscope in which the objective lens is the human ocular lens. 3D data is collected by raster scanning to form images at different depths in retinal and choroidal layers. In this way, 3D anatomy may be imaged and stored as a series of optical sections. Given the poor optical quality of the human lens and random eye motion during data acquisition, any deconvolution method applied to SLO data must be able to account for distortions present in the observed data. The algorithm presented compensates for image warping and frame-to-frame displacement due to random eye motion, smearing along the optic axis, sensor saturation, and other problems. A preprocessing step is first used to compensate for frame-to-frame image displacement. The image warping, caused by random eye motion during raster scanning, is then corrected. Finally, a maximum-likelihood-based blind deconvolution algorithm is used to correct severe blurring along the optic axis. The blind deconvolution algorithm contains an iterative search for subpixel displacements remaining after image warping and frame-to-frame displacements are corrected. This iterative search is formulated to ensure that the likelihood functional is non-decreasing.
A cooled or video-rate CCD camera is often used to collect optical slices from many modalities of microscopy, including widefield fluorescence (WF) and transmitted-light brightfield (TLB). Raw optical slices collected from both of these types of cameras are contaminated by imperfect performance of the CCD camera. Optical slice image data must be corrected for the bias level and nonuniformity in the photometric response of CCD elements. Fluctuation of the exposure time in a cooled CCD camera needs to be calibrated as well. Bad pixels, due to the severely low sensitivity of some of the CCD elements, often occur and otherwise impair the conventional schemes of correction and calibration. An adaptive median filter has been introduced by us previously to treat these bad pixels. In this study a Wiener-type regularization scheme is proposed as a robust, fast, and practical alternative treatment of these bad pixels. One advantage of this scheme is that it yields a nicely modularized design of the computer algorithm. As such, the correction scheme is readily integrated as a software component in a 3D microscopy system and may operate independently without affecting the design of other software components in the system.
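The bad-pixel treatment can be illustrated with a simple median-based repair, a stand-in for the adaptive median filter mentioned above; the Wiener-type regularization scheme is not sketched here, and the function name and window size are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def repair_bad_pixels(slice_img, bad_mask):
    """Replace flagged low-sensitivity (bad) CCD pixels with the median
    of their 3x3 neighborhood; good pixels pass through unchanged."""
    med = median_filter(slice_img, size=3)
    return np.where(bad_mask, med, slice_img)
```

Keeping the repair as a separate, mask-driven step is what makes the correction modular: it can run before flat-field and bias calibration without those components knowing about it.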
The maximum-likelihood-based blind deconvolution (ML-blind) algorithm is used to deblur 3D microscope images. This approach was first introduced to the microscope community by us circa 1992. The basic advantage of a blind algorithm is that it simplifies the user interface protocols and reconstructs both the object and the point spread function (PSF). In this paper we discuss recent improvements to the algorithm that increase its robustness and accelerate its convergence. For instance, powerful and physically justified constraints are enforced on the reconstructed PSF at every iteration for robustness. A line search technique is added to the object reconstruction to accelerate the convergence of the object estimate. A simple modification to the algorithm enables adaptation to the transmitted light brightfield modality. Finally, we incorporate montaging in order to process large data fields.
Blind deconvolution algorithms are being developed for reconstructing (deblurring) 2D and 3D optically sectioned light micrographs, including widefield fluorescence, transmitted light brightfield, and confocal fluorescence micrographs. The blind deconvolution concurrently reconstructs the point spread function (PSF) with the image data. This is important because it obviates the need to measure the PSF; PSF measurement is esoteric and sometimes impossible, thereby hindering wide routine biological and clinical usage. The iterative algorithms are primarily based on a stochastic model of the physics of fluorescence quantum photons and the optimization criterion of maximum likelihood estimation (MLE), as extended from precursory nuclear medicine MLE algorithms. The algorithm design is mostly model-based, although it contains some non-model-based components which have important practical benefits.
This paper presents recent results of our reconstructions of 3-D data from Drosophila chromosomes as well as our simulations with a refined version of the algorithm used in the former. It is well known that the calibration of the point spread function (PSF) of a fluorescence microscope is a tedious process and involves esoteric techniques in most cases. This problem is further compounded in the case of confocal microscopy where the measured intensities are usually low. A number of techniques have been developed to solve this problem, all of which are methods in blind deconvolution. These are so called because the measured PSF is not required in the deconvolution of degraded images from any optical system. Our own efforts in this area involved the maximum likelihood (ML) method, the numerical solution to which is obtained by the expectation maximization (EM) algorithm. Based on the reasonable early results obtained during our simulations with 2-D phantoms, we carried out experiments with real 3-D data. We found that the blind deconvolution method using the ML approach gave reasonable reconstructions. Next we tried to perform the reconstructions using some 2-D data, but we found that the results were not encouraging. We surmised that the poor reconstructions were primarily due to the large values of dark current in the input data. This, coupled with the fact that we are likely to have similar data with considerable dark current from a confocal microscope prompted us to look into ways of constraining the solution of the PSF. We observed that in the 2-D case, the reconstructed PSF has a tendency to retain values larger than those of the theoretical PSF in regions away from the center (outside of those we considered to be its region of support). This observation motivated us to apply an upper bound constraint on the PSF in these regions. Furthermore, we constrain the solution of the PSF to be a bandlimited function, as in the case in the true situation. 
We have derived two separate approaches for implementing the constraint. One approach involves the mathematical rigors of Lagrange multipliers; this approach is discussed in another paper. The second approach involves an adaptation of the Gerchberg-Saxton algorithm, which ensures bandlimitedness and non-negativity of the PSF. Although the latter approach is mathematically less rigorous than the former, we currently favor it because it has a simpler computer implementation and smaller memory requirements. The next section briefly describes the theory and derivation of these constraint equations using Lagrange multipliers.
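The Gerchberg-Saxton-style approach can be sketched as alternating projections: enforce the band limit in the frequency domain, then nonnegativity and support in the spatial domain. The masks, iteration count, and unit-sum normalization here are illustrative assumptions, not the paper's exact constraint set.

```python
import numpy as np

def constrain_psf(psf, support_mask, bandlimit_mask, n_iter=10):
    """Alternating-projection (Gerchberg-Saxton style) enforcement of PSF
    constraints: band limitation in the frequency domain; nonnegativity,
    support, and unit energy in the spatial domain."""
    h = psf.copy()
    for _ in range(n_iter):
        H = np.fft.fft2(h)
        H *= bandlimit_mask                        # zero energy outside the band
        h = np.real(np.fft.ifft2(H))
        h = np.clip(h, 0.0, None) * support_mask   # nonnegative, inside support
        s = h.sum()
        if s > 0:
            h /= s                                 # keep unit energy
    return h
```

As with any alternating-projection scheme, the two constraint sets are not satisfied exactly at once; iterating drives the PSF estimate toward their intersection, suppressing the spurious energy outside the region of support noted in the text.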
We have developed image reconstruction algorithms for generating 3-D renderings of biological specimens from brightfield micrographs. The algorithm presented here is founded on maximum likelihood estimation theory, where steepest ascent and conjugate gradient techniques are used to optimize the solution to the multidimensional equation. The estimation problem posed is that of reconstructing the optical density, or linear attenuation coefficients, similar to that of computed tomography, under the simplifying assumption of geometric optics. We assume white Gaussian noise corrupts the signal, yielding a Gaussian-distributed measurement according to the model of the system impulse response. One of the challenges of the algorithms presented here is restoring the values within the missing cone region of the system optical transfer function. The algorithm and programming are straightforward and incorporate standard Fourier techniques. The theoretical development of the algorithms is outlined. Simulations of reconstructions using this technique are currently being performed.
KEYWORDS: 3D image processing, 3D modeling, 3D image reconstruction, Image restoration, Point spread functions, Data corrections, Luminescence, Expectation maximization algorithms, Laser systems engineering, Microscopy
The image reconstruction method of maximum likelihood estimation (MLE) has been used in the authors' previous work for fluorescence microscopy. By computer simulations, it was previously found that this method worked very well for addressing two dimensional superresolution and three dimensional optical sectioning problems. In this paper, the results of 3D reconstructions with real data will be presented.