Cramér-Rao lower bound (CRB) theory can be used to calculate algorithm-independent lower bounds to the variances of
parameter estimates. It is well known that the CRBs are achievable by algorithms only when the parameters can be
estimated with sufficiently high signal-to-noise ratios (SNRs). Otherwise, the CRBs are still lower bounds, but there can
be a large gap between the CRBs and the variances that can be achieved by algorithms. We present results from our
initial investigations into the SNR dependence of the achievability of the CRBs by multi-frame blind deconvolution
(MFBD) algorithms for high-resolution imaging in the presence of atmospheric turbulence and sensor noise. With the
use of sample statistics, we give examples showing that the minimum SNR value for which the CRBs can be achieved
by our MFBD algorithm typically ranges between one and five, depending upon the strength of the prior knowledge used
in the algorithm and the SNRs in the measured data.
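As a minimal numerical sketch of how such algorithm-independent bounds are computed, consider a toy 1-D imaging model d = H o + n with i.i.d. Gaussian noise. This is an illustrative stand-in for the turbulence-and-sensor-noise model in the paper, not the authors' formulation; for this model the Fisher information matrix is F = HᵀH/σ², and the CRBs on the object pixels are the diagonal entries of F⁻¹:

```python
import numpy as np

# Toy 1-D imaging model: d = H o + n, i.i.d. Gaussian noise of variance
# sigma2.  Fisher information is F = H^T H / sigma2; the per-pixel CRBs
# are the diagonal entries of F^{-1}.  (Hypothetical toy setup, not the
# paper's atmospheric-turbulence model.)
n_pix = 16
sigma2 = 0.25

# Simple invertible circulant blur: 3-tap smoothing kernel as matrix H.
kernel = np.array([0.2, 0.6, 0.2])
H = np.zeros((n_pix, n_pix))
for i in range(n_pix):
    for j, k in enumerate(kernel):
        H[i, (i + j - 1) % n_pix] += k

fisher = H.T @ H / sigma2
crb = np.diag(np.linalg.inv(fisher))  # per-pixel variance lower bounds
print(crb.min(), crb.max())
```

Because blurring attenuates high spatial frequencies, every CRB here is at least the raw noise variance σ²: deconvolution can only amplify noise in this model.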
An analytical signal-to-noise ratio (SNR) expression is derived for unbiased estimates of energy spectra obtained using
multi-frame blind deconvolution (MFBD) algorithms. Because an analytical variance expression cannot, in general, be
derived, Cramér-Rao lower bounds are used in place of the variances. As a result, the SNR expression provides upper
bounds to the achievable SNRs that are independent of the MFBD algorithm implementation. The SNR expression is
evaluated for the scenario of ground-based imaging of astronomical objects. It is shown that MFBD energy-spectrum
SNRs are usually greater, and often much greater, than their corresponding speckle imaging (SI) energy-spectrum SNRs
at all spatial frequencies. One reason for this SNR disparity is that SI energy spectrum SNRs are proportional to the
object energy spectrum and the ensemble-averaged atmosphere energy spectrum, while MFBD SNRs are approximately
proportional to the square root of these quantities. Another reason for this SNR disparity is that single-frame SI energy-spectrum
SNRs are limited above by one, while the MFBD energy-spectrum SNRs are not.
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of
the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution
algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from
the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to
deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the
algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its
execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a
specific computer hardware architecture.
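The frame-level parallelism described above can be sketched as follows. This is an illustrative toy, not the authors' code: a simple known-PSF Wiener step stands in for the per-frame MFBD work, and the thread pool stands in for the hardware-agnostic parallel structure.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Each data frame is processed independently, so frames can be farmed
# out to worker threads.  The per-frame operation (a Wiener-like
# deconvolution with a known PSF) is a stand-in for the real per-frame
# MFBD computation.
rng = np.random.default_rng(1)
frames = [rng.poisson(100.0, size=(64, 64)).astype(float) for _ in range(8)]
psf = np.zeros((64, 64)); psf[:3, :3] = 1.0 / 9.0   # toy blur, sums to 1
otf = np.fft.fft2(psf)

def deblur(frame, reg=1e-2):
    # Independent per-frame work: ideal unit of parallelism.
    F = np.fft.fft2(frame)
    est = np.conj(otf) * F / (np.abs(otf) ** 2 + reg)
    return np.real(np.fft.ifft2(est))

with ThreadPoolExecutor(max_workers=4) as pool:
    deblurred = list(pool.map(deblur, frames))
print(len(deblurred), deblurred[0].shape)
```

Because NumPy releases the GIL inside FFT calls, even a thread pool gives real concurrency here; a process pool or MPI layout would serve the same role on a cluster.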
Multi-frame blind deconvolution (MFBD) algorithms can be used to reconstruct a single high-resolution image of an
object from one or more measurement frames that are blurred and noisy realizations of that object. The blind nature
of MFBD algorithms permits the reconstruction process to proceed without having separate measurements or
knowledge of the blurring functions in each of the measurement frames. This is accomplished by estimating the object
common to all the measurement frames jointly with the blurring functions that are different from frame to frame. An
issue of key importance is understanding how accurately the object pixel intensities can be estimated with the use of
MFBD algorithms. Here we present algorithm-independent lower bounds to the variances of estimates of the object
pixel intensities to quantify the accuracy of these estimates when the blurring functions are estimated pixel by pixel.
We employ support constraints on both the object and the blurring functions to aid in making the inverse problem
unique. The lower bounds are presented as a function of the sizes and shapes of these support regions and the number
of measurement frames.
Conclusions about the usefulness of mean-squared error for predicting visual image quality are presented in this paper. A standard imaging model was employed that consisted of an object, a point spread function, and noise. Deconvolved reconstructions were recovered from blurred and noisy measurements formed using this model. Additionally, image reconstructions were regularized by classical Fourier-domain filters. These post-processing steps generated the basic components of mean-squared error: bias and pixel-by-pixel noise variances. Several Fourier-domain regularization filters were employed so that a broad range of bias/variance tradeoffs could be analyzed. Results given in this paper show that mean-squared error is a reliable indicator of visual image quality only when the images being compared have approximately equal bias/variance ratios.
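The bias/variance decomposition underlying that analysis can be verified numerically. The sketch below (with illustrative values, not the paper's data) checks that the empirical mean-squared error of a deliberately biased, noisy estimator matches the sum of its squared bias and its per-pixel variance:

```python
import numpy as np

# For an estimator x_hat of a fixed truth x: MSE = bias^2 + variance,
# averaged over pixels.  Toy 1-D example with a synthetic bias pattern.
rng = np.random.default_rng(2)
truth = np.linspace(0.0, 1.0, 32)
bias = 0.1 * np.sin(np.arange(32))
trials = np.stack([truth + bias + rng.normal(0.0, 0.05, 32)
                   for _ in range(20000)])

mse = np.mean((trials - truth) ** 2)
decomp = np.mean(bias ** 2) + np.mean(np.var(trials, axis=0))
print(mse, decomp)  # the two quantities agree closely
```

Regularization filters trade these two terms against each other: stronger smoothing raises the bias term while shrinking the variance term, which is exactly the tradeoff the paper sweeps across.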
Superresolution of images by data inversion is defined as extrapolating measured Fourier data into regions of Fourier space where no measurements have been taken. This type of superresolution can only occur by data inversion. There exist two camps of thought regarding the efficacy of this type of superresolution: the first is that meaningful superresolution is unachievable due to signal-to-noise limitations, and the second is that meaningful superresolution is possible. Here we present a framework for describing superresolution in a way that accommodates both points of view. In particular, we define the twin concepts of primary and secondary superresolution and show that the first camp is referring to primary superresolution while the second group is referring to secondary superresolution. We discuss the implications of both types of superresolution on the ability of data inversion to achieve meaningful superresolution.
The usefulness of support constraints to achieve noise reduction in images is analyzed here using an algorithm-independent Cramér-Rao bound approach. Recently, it has been shown that the amount of noise reduction achievable using support as a constraint is a function of the image-domain noise correlation properties. For image-domain delta-correlated noise sources (such as Poisson and CCD read noise), applying a support constraint does not reduce noise in the absence of deconvolution due to the lack of spatial correlation. However, when deconvolution is included in the image processing algorithm, the situation changes significantly because the deconvolution operation imposes correlations in the measurement noise. Here we present results for an invertible system blurring function showing how noise reduction occurs with support and deconvolution. In particular, we show that and explain why noise reduction preferentially occurs at the edges of the support constraint.
We analyze the quality of reconstructions obtained when using the multi-frame blind deconvolution (MFBD) algorithm and the bispectrum algorithm to reconstruct images from atmospherically-degraded data that are corrupted by detector noise. In particular, the quality of reconstructions is analyzed in terms of the fidelity of the estimated Fourier phase spectra. Both the biases and the mean square phase errors of the Fourier spectra estimates are calculated and analyzed. The comparison is made in terms of Fourier phase spectra because both the MFBD and bispectrum algorithms can estimate Fourier phase information from the image data itself without requiring knowledge of the system transfer function, and because Fourier phase plays a dominant role in image quality. Computer-simulated data is used for the comparison in order to be able to calculate true biases and mean square errors in the estimated Fourier phase spectra. For the parameters in this study, the bispectrum algorithm produced less-biased phase estimates than the MFBD algorithm in all cases. The MFBD algorithm produced mean square phase errors comparable to or lower than the bispectrum algorithm for good seeing and few data frames, while the converse is true for many data frames and poor seeing.
How to obtain sharp images when viewing through a turbid medium is a problem that arises in a number of applications, including optical biomedical imaging and optical surveillance in the presence of clouds. The main problem with this type of imagery is that it is difficult to accurately characterize the turbid medium sufficiently well to generate a point spread function that can be used to deconvolve the blurred data (and thus increase the resolution). We discuss the use of blind deconvolution as a means of estimating both the blur-free target and the system point spread function. We compare restorations obtained using a non-linear blind deconvolution algorithm with those obtained using a linear backpropagation algorithm. Preliminary results indicate that the blind deconvolution algorithm produces the more visually pleasing restorations. Moreover, it does so without requiring any prior knowledge of the characteristics of the turbid medium, or of what the blur-free target should look like: an important advance over the backpropagation algorithm.
This paper discusses a simulation of the imaging of a space-based object through cirrus clouds. The wavefront reflected by the object is propagated to the top of the cloud using Huygens-Fresnel propagation theory. At the top of the cloud, the wavefront is divided into an array of input rays, which are in turn transmitted through the cloud model using the CIRIS-C software. At the bottom of the cloud, the output ray distribution is used to reconstruct a wavefront that continues propagating to the ground receiver. Images of the object as seen through cirrus clouds with different optical depths are compared to a diffraction-limited image. Turbulence effects from the atmospheric propagation are not included.
KEYWORDS: Clouds, Modulation transfer functions, Wavefronts, Optical transfer functions, 3D modeling, Crystals, Wave propagation, Ray tracing, Monte Carlo methods, Statistical modeling
Statistical variations in the size, position, shape, and orientation of hexagonal ice crystals in a 3D volume complicate the modeling of image transfer through cirrus clouds. These variations give rise to fluctuations in image quality as measured by the MTF of a single realization of a cloud/receiver combination. Computing the average MTF from several realizations of the cloud/receiver combination, allowing for a different cloud composition on every realization, can alleviate these fluctuations. In this paper, we present the multiple-realization MTF results for clouds of different optical depths, distinguishing the separate contributions of scattered and unscattered light. Finally, we apply the MTFs thus generated to create images of objects as seen through the cloud.
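The averaging procedure can be sketched in a few lines. In this toy version, random 1-D phase screens are a generic stand-in for the per-realization cloud/receiver perturbations (not the CIRIS-C ray-trace output):

```python
import numpy as np

# Average MTF over many random realizations: each realization's MTF is
# the normalized magnitude of the OTF, where the OTF is the inverse
# Fourier transform of the PSF (itself the squared magnitude of the
# transform of the aberrated pupil).
rng = np.random.default_rng(3)
n = 64
pupil = np.ones(n)  # 1-D aperture for simplicity

def single_mtf(phase):
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fft(field, 4 * n)) ** 2
    otf = np.fft.ifft(psf)
    return np.abs(otf) / np.abs(otf[0])  # normalize to 1 at DC

mtfs = [single_mtf(rng.normal(0.0, 1.0, n)) for _ in range(200)]
avg_mtf = np.mean(mtfs, axis=0)
print(avg_mtf[0])  # unity at zero frequency by construction
```

The single-realization MTFs fluctuate strongly from screen to screen; the 200-realization average is smooth, which is the alleviation effect described above.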
We apply our previously-developed turbid-media backpropagation algorithm to imaging extended objects embedded in turbid media such as clouds. Although the backpropagation algorithm was developed initially for biomedical applications, the underlying development is general enough to encompass imaging objects embedded in any sort of turbid media whose scattering properties dominate their absorption properties. For non-biomedical applications, imaging data is usually obtained only for a limited number of view angles. As a result, we look at the potential of the backpropagation algorithm to reconstruct an image of an object, embedded in a cloud, from a single view. Using both computer-simulated data and laboratory data, we show that the backpropagation algorithm successfully increases resolution in these types of images. Because the backpropagation algorithm incorporates a depth-dependent deconvolution filter, it turns out that the optimal image quality obtained in the reconstruction occurs for the deconvolution filter which corresponds to the location of the object in the medium. This surprising result permits object localization in the range dimension even when the illuminating radiation is continuous-wave illumination, such as sunlight.
The authors present a wavefront reconstruction technique for beams forward scattered and back scattered through cirrus clouds. The technique uses ray distributions from the Coherent Illumination and Ray Trace Imaging Software for Cirrus, which traces the propagation and E field vectors through a 3D volume of ice crystals in the shape of columns, plates, bullets, and bullet rosettes with random positions and polydisperse sizes and orientations. The wavefronts are then propagated to a telescope receiver on the ground and imaged in the receiver focal plane. A modulation transfer function for each of these images is calculated and compared to the MTF for a diffraction-limited system.
The Air Force Research Laboratory is in the process of demonstrating an advanced space surveillance capability with a heterodyne laser radar (ladar) system to be used, among other applications, for range-resolved imaging of orbiting satellites. A small-scale version of this system, the Heterodyne Imaging Laser Testbed (HILT), is used for obtaining pulsed reflection returns from targets that are located on the ground at a distance of approximately 1 km. Presented in this paper are a description of HILT and the preliminary results: image reconstructions of the ground targets using reflective tomographic techniques.
The Air Force Research Laboratory, Directed Energy Directorate, is in the process of demonstrating an advanced space surveillance capability with a heterodyne laser radar system to be used for range-resolved imaging of orbiting satellites. This system, called HI-CLASS (High Performance CO2 Ladar Surveillance Sensor), uses a CO2 laser in a modelocked configuration to generate approximately 10 microsecond(s) bursts of approximately 1 ns pulses repeated at a 30 Hz rate. When reflected from an orbiting satellite, these pulses contain information about the range-resolved reflectivities and the Doppler spectrum of the target. For earth-stabilized satellites, cross-range motion is insufficient to produce Doppler-resolved images from the range-resolved data for the HI-CLASS system parameters. However, an image reconstruction method called reflection tomography can be used to reconstruct satellite images from the range-resolved data. An important issue in tomographic image reconstruction is correct registration of the individual projections. For accurate image reconstruction, all projections must be aligned to the target center of rotation. Due to typical system alignment uncertainties, atmospheric fluctuations, and random satellite displacements, range cannot be measured accurately enough to determine the satellite center of rotation. Therefore, this information must be inferred from the projection data itself. Here, we present an algorithm that uses a phase-retrieval approach to determine the required center of rotation from the projection data. We demonstrate the effectiveness of this algorithm using computer-simulated data. We also discuss the future application of this algorithm to actual ladar data.
The authors present a novel simulation for studying the interaction of coherent illumination with cirrus clouds. The software traces the propagation and E field vectors through a 3D volume of ice crystals in the shape of columns, plates, bullets, and bullet rosettes with random positions, sizes, and orientations. The magnetic (B) field vectors can be found from a cross product of the two. Back-scattered depolarization results are compared to published studies. The use of this simulation for detailed studies of the impact of cirrus clouds on the wavefront of an illuminating beam is discussed.
The effect of atmospheric phase perturbations on the diffractive and coherent properties of the uplink and downlink paths of an active imaging illumination beam has been studied in some detail. Similarly, the scattering and depolarization induced by water and ice cloud particles in the path of coherent laser illumination is currently an active area of research. In contrast, the effect of cloud particles on the diffractive properties of a laser illumination beam has not received as much attention, due primarily to the daunting mathematics of the physical model. This paper seeks to address some of the mathematical issues associated with modeling the interaction of a coherent illumination beam with a cloud of ice particles. The simulation constructs a 3D model of a cirrus cloud consisting of randomly oriented hexagonal ice crystals in the shape of plates, columns, and bullet rosettes. The size, shape, and vertical distribution of the crystals are modeled after measured particle concentrations and distributions. An illumination pattern, in the form of a grid of rays, is traced through the cloud, and the properties of the exiting wavefronts are analyzed.
HI-CLASS is a high-power, wideband, coherent laser radar (ladar) for long-range detection, tracking, and imaging located at the Maui Space Surveillance Site. HI-CLASS will be used to provide high-precision metrics as well as information for images of space objects and remote sensing with the same system. The four phases of the HI-CLASS hardware development program were completed in Fall 1997. During this development contract, hardware and software were developed for two different modes of operation: a ladar mode for active imaging of satellites, and a lidar mode for remote-sensing atmospheric measurements. Throughout the contract, data were collected that demonstrated the system capabilities and validated the technology and designs required for fielding operational systems. The HI-CLASS follow-on demonstration program is currently being performed under an Air Force contract. The follow-on demonstrations will provide the groundwork for an upgrade program currently under consideration by the Air Force. HI-CLASS provides high-accuracy tracking in position and velocity simultaneously and, by ultimately providing size, shape, and orientation information, will help assess adversary capabilities. HI-CLASS has the potential to address operational areas of need for increased capability for information about space objects. The follow-on contract effort and the HI-CLASS upgrade effort will provide a demonstration of these potential applications of the HI-CLASS system.
The Air Force Phillips Laboratory is in the process of demonstrating an advanced space surveillance capability with a heterodyne laser radar system to be used, among other applications, for range-resolved imaging of orbiting satellites. In this paper, we present our first satellite feature reconstruction from field results using reflective tomographic techniques.
Positivity and support have long been used to improve image quality beyond that achievable from the measured data alone. In this paper we analyze how positivity functions to reduce noise levels in measured Fourier data and the corresponding images. We show that positivity can be viewed as a signal-dependent support constraint, and thus it functions by enforcing Fourier-domain correlations. Using computer-simulated data, we show the effects that positivity has upon measured Fourier data and upon images. We compare these results to equivalent results obtained using support as a constraint. We show that support is a more powerful constraint than positivity in several ways: (1) more superresolution is possible, (2) more Fourier-domain noise reduction can occur, and (3) more image-domain noise reduction can occur.
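The basic mechanism of support as a noise-reducing constraint can be illustrated in one dimension. In this toy sketch (illustrative sizes and noise levels, not the paper's simulations), zeroing pixels outside a known support region removes the noise energy carried by those pixels:

```python
import numpy as np

# Support as a convex projection: pixels outside the known support are
# set to zero, discarding the out-of-support noise.  In the Fourier
# domain this appears as an enforced correlation of the spectrum.
rng = np.random.default_rng(4)
n = 128
support = np.zeros(n, bool); support[40:88] = True   # 48-pixel support

obj = np.zeros(n); obj[support] = 1.0
noisy = obj + rng.normal(0.0, 0.3, n)
constrained = np.where(support, noisy, 0.0)          # apply the constraint

err_before = np.mean((noisy - obj) ** 2)
err_after = np.mean((constrained - obj) ** 2)
print(err_before, err_after)  # error drops roughly by the out-of-support fraction
```

For image-domain white noise, the residual error after projection is just the noise inside the support, which is why a tighter (or more asymmetric) support gives more noise reduction, as the abstracts above discuss.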
HI-CLASS is a high-power, wideband, coherent laser radar for long-range detection, tracking, and imaging located at the Maui Space Surveillance Site (MSSS). HI-CLASS will be used to provide high-precision metrics as well as information for images of space objects and remote sensing with the same system. The HI-CLASS system is currently in the final of four phases. During Phase 1, breadboard hardware was built that led to a fully integrated laser radar system at MSSS. During Phase 2, an improved system oscillator and receiver-processor were built and integrated, which brought the system capability to 12 Joules at 30 Hz. The Phase 3 system adds a power amplifier to the transmitter, which brought the system capability to 30 Joules at 30 Hz. The HI-CLASS system will validate technology and designs required for fielding operational systems, since it has the potential to address operational areas of need for increased capability for information about space objects. HI-CLASS will provide high-accuracy tracking in position and velocity simultaneously and, by ultimately providing size, shape, and orientation information, will help assess adversary capabilities.
The Air Force Phillips Laboratory is in the process of demonstrating an advanced space surveillance capability with a heterodyne laser radar system to be used, among other applications, for range-resolved imaging and orbital element set determination. It has been shown using theory and computer simulations that superior image quality is obtained by first converting the heterodyne returns into intensity projections before using tomographic techniques to reconstruct an image, as compared to using tomographic techniques on the E-field projections directly. In this paper, data from recent field experiments is used to validate this theory. In addition, the field data is used to determine the closing velocity of an orbiting satellite as a function of time.
The Air Force Phillips Laboratory is in the process of demonstrating an advanced space surveillance capability with a heterodyne laser radar (ladar) system. Notable features of this ladar system include its narrow (< 1.5 ns) micropulses, contained in a pulse-burst waveform that allows high-resolution range data to be obtained, and its high power (30 J in a pulse burst), which permits reasonable signal returns from satellites. The usefulness of these range data for use in reflective tomographic reconstructions of satellite images is discussed. A brief review of tomography is given. Then it is shown that the ladar system is capable of providing adequate range-resolved data for reflective tomographic reconstructions in terms of range resolution and sampling constraints. Mathematical expressions are derived which can be used to convert the ladar returns into reflective projections. Image reconstructions from computer-simulated data which include the effects of laser speckle and photon noise are presented and discussed. These reconstructions contain artifacts even in the absence of noise, due to the inadequacies of the standard tomographic problem formulation to accurately model the reflective projections obtained from the ladar system. However, object features can still be determined from the reconstructions when typical noise levels are included in the simulation.
The Air Force Phillips Laboratory is in the process of demonstrating an advanced space surveillance capability with a heterodyne laser radar (ladar) system. Because coherent detection is used and the range resolution is obtained by the narrow pulse widths, satellite images can be reconstructed in two ways. The first is to convert the ladar returns into intensity projections, and then use tomographic techniques to create the image. The second approach is to use synthetic aperture radar methods which reconstruct the E-field image of the object and then square the result to get an intensity image. In this paper, these two methods are compared using computer simulations. It is shown that images reconstructed from intensity projections are of much higher quality than images reconstructed from E-field projections. This is shown to be true for both high and low light levels, and for full or limited sets of views.
The Air Force Phillips Laboratory is in the process of demonstrating an advanced space surveillance capability with a heterodyne laser radar (ladar) system to be used, among other applications, for range-resolved imaging. Recently, image domain signal-to-noise ratios (SNRs) have been derived both for the intensity projections calculated from the range-resolved reflective data and for image information obtained using linear combinations of the projections. Also, other recent results have indicated that superior image quality is obtained by first converting the heterodyne returns into intensity projections before using tomographic techniques to reconstruct an image. In this paper, the derived SNR theory for intensity projections is validated using a laboratory heterodyne setup. In addition, the laboratory results are used to validate the conclusion that intensity projections provide superior image reconstructions.
Expressions for the detected signal reflected from an object using a coherent laser radar system are developed in this paper. Post-processing methods to obtain intensity projection data from the detected signals are derived. Image domain signal-to-noise ratios (SNRs) are derived both for images reconstructed using a convolution-backprojection algorithm from range- resolved reflective measurements and for the intensity projection data. The two noise sources considered are photon noise and laser speckle noise. It is shown that the SNR of an individual projection at a point is limited above by one, and the upper limit of the SNR of the reconstructed image is on the order of the square root of the number of projections used to create the image.
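The single-point SNR limit of one follows from fully developed speckle statistics: the detected intensity is exponentially distributed, so its standard deviation equals its mean. The sketch below checks this numerically and also illustrates the square-root-of-N scaling obtained by combining independent projections (illustrative parameters, not the paper's ladar model):

```python
import numpy as np

# Fully developed speckle: intensity ~ Exponential, so mean/std = 1 and
# the point-wise projection SNR cannot exceed one.  Averaging N
# independent realizations raises the limit by roughly sqrt(N).
rng = np.random.default_rng(5)
intensity = rng.exponential(scale=2.0, size=200_000)
snr_single = intensity.mean() / intensity.std()

# Average groups of 100 independent speckle realizations.
avg = intensity.reshape(-1, 100).mean(axis=1)
snr_avg = avg.mean() / avg.std()
print(snr_single, snr_avg)  # roughly 1 and roughly 10
```

This is the same mechanism by which the reconstructed-image SNR grows on the order of the square root of the number of projections used.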
In this paper, the role positivity plays in error reduction in images is analyzed both theoretically and with computer simulations for the case of wide-sense-stationary Fourier-domain noise. It is shown that positivity behaves as a signal-dependent support constraint. As a result, the mechanism by which positivity results in noise reduction in images is by correlating measured Fourier spectra. An iterative linear algorithm is employed to enforce the positivity constraint in order to facilitate an image-domain variance analysis as a function of the number of iterations of the algorithm. Noise reduction can occur only in the asymmetric part of the positivity-enforced support constraint when positivity is applied, just as noise reduction occurs only in the asymmetric part of the true support constraint when support is applied. Unlike for support, noise decreases in the image domain in a mean-square sense as the signal-to-noise ratio of the image decreases. However, it is shown that this image-domain noise decrease does not noticeably improve identification of image features.
The use of information about an image in addition to measured data has been demonstrated to provide the possibility of decreasing the noise in the measured data. A new constraint, recently proposed, is that of perfect knowledge of part of an image. These results are generalized, and the usefulness of this new constraint in decreasing noise outside the region of prior knowledge is shown to be a function of the measured data noise-correlation properties. In particular, it is shown that prior high-quality knowledge is a generalization of support constraints.
The Air Force Phillips Laboratory is developing a coherent laser radar system to upgrade its space surveillance capabilities. Because of the short pulse length of this laser system, range resolved information can be obtained. This range information can be used to reconstruct images by reflective tomography. This paper presents results of simulations using four different transmission tomography algorithms to reconstruct images from reflective tomography data. The transmission tomography problem formulation is stated, a description of reflective tomography is given, and results of the simulations are presented.
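The core backprojection step shared by those transmission-tomography algorithms can be sketched with a point target. This toy uses unfiltered backprojection on an idealized one-pixel projection (not one of the four algorithms studied, and with hypothetical geometry):

```python
import numpy as np

# Unfiltered backprojection of a single point target: project the point
# at several angles, then smear each projection back across the grid.
# The accumulated reconstruction peaks at the true target location.
n = 65
yy, xx = np.mgrid[:n, :n]
cx = cy = n // 2
px, py = 10.0, -6.0                      # point offset from grid center
angles = np.deg2rad(np.arange(0, 180, 5))

recon = np.zeros((n, n))
for th in angles:
    # Projection coordinate of every grid pixel and of the point target.
    s_grid = (xx - cx) * np.cos(th) + (yy - cy) * np.sin(th)
    s_pt = px * np.cos(th) + py * np.sin(th)
    # One-pixel-wide "projection" of the point, smeared back over the grid.
    recon += (np.abs(s_grid - s_pt) < 0.5).astype(float)

peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)  # (cy + py, cx + px) = (26, 42)
```

Filtered variants (e.g. convolution-backprojection) sharpen the 1/r blur that this unfiltered sum leaves around the peak; that filtering is where the four algorithms in the paper differ.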
The Air Force Phillips Laboratory is upgrading the surveillance capabilities of its AMOS facility with a coherent laser radar system. A notable feature of this laser radar system is its short (approximately 1 ns) pulse length, which allows high-resolution range data to be obtained. The usefulness of this range data for use in reflective tomographic reconstructions of images of space objects is discussed in this paper. A brief review of tomography is given. Then the capability of the laser radar system to provide adequate range-resolved data is analyzed, both in terms of system parameters and signal-to-noise issues. Sample image reconstructions are presented and discussed.
KEYWORDS: Reconstruction algorithms, Super resolution, Telescopes, Adaptive optics, Deconvolution, Denoising, Computer simulations, Satellites, Signal to noise ratio, Space telescopes
The use of support constraints for noise reduction in images obtained with telescopes that use adaptive optics for atmospheric correction is discussed in this paper. The effectiveness of support constraints in achieving noise reduction is discussed in terms of the noise properties and in terms of the type of algorithm used to enforce the support constraints. Both a convex-projections algorithm and a cost-function-minimization algorithm are used to enforce the support constraints, and it is shown via computer simulation and field data that the cost-function algorithm, in general, results in artifacts in the reconstructions. The convex-projections algorithm produced mean-square error decreases in the image domain of approximately 10% for high light levels, but essentially no error decreases for low light levels.
The use of support constraints for improving the quality of Fourier spectra, their associated images, and the relationship between the two domains is discussed in this paper. Theoretical relationships are derived which predict the noise reduction in both the image domain and the Fourier domain achieved by single and repeated application of support constraints for the case of wide sense stationary Fourier domain noise. It is shown that the application of support constraints can increase noise inside the support constraint if the application is not done correctly. An iterative algorithm is proposed which enforces support constraints in such a way that noise is never increased inside the support constraint and the algorithm achieves the minimum possible noise in a finite number of steps.
Speckle imaging is a statistical technique for achieving near-diffraction-limited imagery of astronomical objects with ground-based telescopes. The performance of this statistical postdetection processing technique is critically dependent on the signal-to-noise ratio (SNR) of the estimators used for various average spectra, which can be a strong function of detector characteristics. We discuss techniques for maximizing SNR under low-light conditions where so-called "read noise" becomes a factor in CCD detectors, and we derive an optimal exposure time for CCD detection when total viewing time limits the SNR. We also show that a properly optimized CCD can outperform a shot-noise-limited detector, in terms of the SNR, at much lower light levels than without optimization.
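The exposure-time optimization can be sketched numerically. The model below is a crude stand-in, not the paper's detector analysis: the speckle-signal roll-off with exposure time is represented by an assumed exp(-t/tau0) decorrelation factor, and all parameter values are illustrative. Within that model, the total-time-limited SNR trades the read-noise penalty of short frames against speckle washout in long frames:

```python
import numpy as np

# Per-frame signal: Q*t attenuated by an assumed atmospheric
# decorrelation factor exp(-t/tau0).  Per-frame noise: photon noise
# plus read noise, sqrt(Q*t + R^2).  With N = T/t frames in a fixed
# total time T, the combined SNR gains a factor sqrt(N).
Q, R, T, tau0 = 1e4, 5.0, 100.0, 0.02   # e-/s, e- rms read, s, s (illustrative)

t = np.linspace(1e-4, 0.2, 2000)
per_frame_snr = (Q * t * np.exp(-t / tau0)) / np.sqrt(Q * t + R**2)
total_snr = np.sqrt(T / t) * per_frame_snr
t_opt = t[np.argmax(total_snr)]
print(t_opt)
```

The maximum sits at an interior exposure time: below it, read noise dominates each frame; above it, the assumed decorrelation destroys the speckle signal faster than photon statistics improve.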
The use of support constraints for improving the quality of Fourier spectra estimates is discussed in this paper. It is shown that superresolution is an additive phenomenon that is a function of the correlation scale induced by the support constraint and is independent of the bandwidth of the measured Fourier spectrum. It is also shown for power spectra that support constraints, due to the enforced correlation of power spectra, reduce the variance of measured power spectra. These theoretical results are validated via computer simulation in the area of speckle interferometry, with very good agreement shown between theory and simulation.
We compare the performance of three parallel supercomputers executing a bispectrum estimation code used to remove distortions from astronomical data. We discuss the issues in parallelizing the code on an 8-processor shared-memory CRAY Y-MP and a 1024-processor distributed-memory nCUBE machine. Results show that elapsed times on the nCUBE machine are comparable to those on the CRAY Y-MP. Execution on the nCUBE was more than 40 times faster than that of a single-processor CRAY-2, resulting in more than 50 times better cost performance. Cost performance on the nCUBE is more than 25 times better than on an 8-processor CRAY Y-MP.
Predetection compensation combined with post-detection image processing for the case of imaging through atmospheric turbulence is addressed. Full and partial predetection compensation using adaptive optics is combined with bispectrum speckle imaging post-processing, and performance improvements are assessed. Full compensation was found to provide a large improvement in the signal-to-noise ratio (SNR) of the power spectrum estimate compared to the uncompensated case. Lower degrees of correction provided smaller improvements in the power spectrum SNR, and a very low degree of compensation provided results indistinguishable from the uncompensated case. Three regimes of performance improvement were found with respect to the object Fourier phase spectrum estimate: 1) the fully compensated case, where bispectrum post-processing provided no improvement in the phase estimate over that obtained from a fully compensated long-exposure image; 2) a partially compensated regime, where applying bispectrum post-processing to the compensated images provided phase spectrum estimation superior to the uncompensated bispectrum case; and 3) a very poorly compensated regime, where the results were essentially indistinguishable from the uncompensated case. Previously validated simulation codes were used to conduct this investigation.
Several algorithms based upon a weighted least squares methodology are presented for phase reconstruction from the bispectrum. Results from applying these algorithms to both simulated and field data are presented and compared.
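The least-squares methodology can be illustrated on a small synthetic problem. This sketch (unit weights, no phase wrapping, hypothetical sizes; not the authors' algorithms) recovers a 1-D Fourier phase spectrum from noisy bispectrum-style phase sums phi(i) + phi(j) - phi(i+j):

```python
import numpy as np

# Build one linear equation per bispectrum element, pin phi(0) and
# phi(1) to zero to remove the piston/tilt ambiguity, then solve the
# overdetermined system in a least-squares sense.
rng = np.random.default_rng(7)
nf = 12
true_phi = rng.uniform(-0.5, 0.5, nf)
true_phi[0] = true_phi[1] = 0.0

rows, rhs = [], []
for i in range(1, nf):
    for j in range(i, nf - i):          # all pairs with i + j < nf
        r = np.zeros(nf)
        r[i] += 1.0; r[j] += 1.0; r[i + j] -= 1.0
        rows.append(r)
        rhs.append(true_phi[i] + true_phi[j] - true_phi[i + j]
                   + rng.normal(0.0, 0.005))   # noisy bispectrum phase

A = np.array(rows)[:, 2:]               # drop the pinned unknowns
phi_est = np.zeros(nf)
phi_est[2:] = np.linalg.lstsq(A, np.array(rhs), rcond=None)[0]
err = np.max(np.abs(phi_est - true_phi))
print(err)
```

A weighted version simply scales each row and right-hand side by the reciprocal of that bispectrum element's phase noise before the solve, which is the essence of the weighted-least-squares approach the paper compares.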