In digital image defogging, restoring a fog-free image relies on estimating the transmittance and the atmospheric light intensity. However, existing algorithms that involve division are unstable in low-transmittance scenarios, amplifying noise and degrading image quality. Inaccurate estimation of the atmospheric light's direction and magnitude also degrades the final image. To address these issues, this paper proposes an iterative algorithm that transforms the imaging model to avoid division and selects the top 0.1% brightest pixels to determine the atmospheric light intensity accurately. Experimental results show significant improvements in color fidelity, image clarity, and visual effect: the algorithm achieves a 5% increase in color reduction, a 13% increase in average gradient, and a 16% increase in the dark channel prior ratio.
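The atmospheric-light step described above (selecting the top 0.1% brightest pixels) can be sketched as follows. This is a minimal dark-channel-style estimator, not the paper's full iterative algorithm; the patch size and fraction are illustrative defaults:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def atmospheric_light(img, patch=15, frac=0.001):
    """Estimate per-channel atmospheric light from the top `frac`
    brightest pixels of the dark channel of an RGB image `img`."""
    # dark channel: per-pixel channel minimum, then a local minimum filter
    dark = minimum_filter(img.min(axis=2), size=patch)
    n = max(1, int(frac * dark.size))
    idx = np.argsort(dark.ravel())[-n:]       # brightest dark-channel pixels
    flat = img.reshape(-1, 3)
    return flat[idx].mean(axis=0)             # average their RGB values
```

The brightest dark-channel pixels typically lie in the haziest region, so their mean color approximates the atmospheric light vector.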
Phase error degrades the imaging performance of an optical synthetic aperture system and limits the resolution of the imaging system. The relationship between the point spread function (PSF) of a binocular system and the piston error is analyzed, and it is deduced and proved that this relationship does not depend on the coordinate along the perpendicular bisector of the baseline. The effect of positioning accuracy on error detection is therefore avoided. The distribution of the PSF along the baseline determines the range of the piston error, from which a unique value of the piston error is obtained. Finally, the piston-error detection method for the binocular system is extended to the general optical synthetic aperture system. Simulation experiments demonstrate that the method is robust to various noises and that the detection accuracy is better than 0.004 wavelengths.
Image denoising is an important preprocessing method and one of the frontier topics in computer graphics and computer vision. Astronomical target imaging is highly vulnerable to atmospheric turbulence and noise interference. To reconstruct a high-quality image of the target, the high-frequency signal of the image must be restored; but noise is also a high-frequency signal, so noise is amplified during reconstruction. A feasible way to avoid this is to incorporate denoising into the reconstruction process. This paper studies the principles of four classic denoising algorithms: TV, BLS-GSM, NLM, and BM3D. We use simulated data to analyze the performance of the four algorithms. Experiments demonstrate that all four algorithms can remove noise, and that BM3D achieves both high denoising quality and the highest efficiency.
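Of the four algorithms, TV denoising is the simplest to state: minimize a data-fidelity term plus the total variation of the image. A minimal gradient-descent sketch in NumPy (the weight `lam`, step size, and iteration count are illustrative choices, not values from the paper):

```python
import numpy as np

def tv_denoise(img, lam=0.15, step=0.2, iters=200):
    """Gradient descent on 0.5*||u - img||^2 + lam * TV(u),
    using a smoothed (differentiable) total-variation term."""
    u = img.astype(float).copy()
    eps = 1e-8
    for _ in range(iters):
        # forward differences of the current estimate
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # divergence of the normalised gradient field (the TV subgradient)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)
    return u
```

TV's edge-preserving behavior comes from penalizing the gradient magnitude rather than its square, so sharp discontinuities are cheap while oscillatory noise is expensive.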
Abstract—Image matching is a core research topic in digital photogrammetry and computer vision. The SIFT (Scale-Invariant Feature Transform) algorithm is a feature matching algorithm based on local invariant features, proposed by Lowe in 1999. SIFT features are invariant to image rotation and scaling, and partially invariant to changes in 3D camera viewpoint and illumination. They are well localized in both the spatial and frequency domains, reducing the probability of disruption by occlusion, clutter, or noise. The algorithm is therefore widely used in image matching and in 3D reconstruction from stereo images. Traditional SIFT implementations and optimizations generally target the CPU. Because of the large number of extracted features (even a few objects can yield many SIFT features), the high dimensionality of the feature vector (usually 128 dimensions), and the complexity of the algorithm, SIFT on the CPU is slow and can hardly meet real-time requirements. The programmable graphics processing unit (GPU) is now commonly used as a dedicated device for image processing. Recent experience shows that a high-performance GPU can deliver roughly ten times the single-precision floating-point performance of a contemporary high-performance desktop CPU, and its memory bandwidth can be up to five times that of the desktop platform of the same period. For the same computing power, the cost and power consumption of a GPU are lower than those of a CPU-based system. Moreover, because graphics rendering and image processing are inherently parallel, GPU acceleration is an efficient solution for algorithms with real-time requirements. In this paper, we implement the SIFT algorithm in the OpenGL shading language and compare the results with a CPU implementation. Experiments demonstrate that the efficiency of the GPU-based SIFT algorithm is significantly improved.
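The first, most parallelizable stage of SIFT is building the difference-of-Gaussians (DoG) scale space and finding its local extrema; it is exactly this per-pixel, per-scale work that maps well onto GPU shaders. A minimal CPU reference sketch in Python/SciPy (the sigma schedule and threshold are illustrative, not Lowe's exact parameters, and orientation assignment and descriptors are omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.02):
    """Candidate keypoints: local extrema of the DoG pyramid,
    the detection stage of SIFT."""
    blurred = [gaussian_filter(img.astype(float), s) for s in sigmas]
    dog = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    # an extremum must beat all 26 neighbours in space and scale
    ext = ((dog == maximum_filter(dog, size=3)) |
           (dog == minimum_filter(dog, size=3))) & (np.abs(dog) > thresh)
    ys, xs = np.nonzero(ext[1:-1].any(axis=0))  # interior scales only
    return list(zip(ys, xs))
```

Each pixel's test depends only on its 3x3x3 neighborhood, which is why the stage parallelizes so cleanly on a GPU.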
Optical synthetic aperture imaging (OSAI) is envisaged as a future means of improving image resolution from high-altitude orbits, and several planned projects for science or Earth observation are based on optical synthetic apertures. Compared with an equivalent monolithic telescope, however, the partially filled aperture of OSAI attenuates the modulation transfer function of the system. Consequently, images acquired by an OSAI instrument must be post-processed to restore a resolution equivalent to that of a single filled aperture. The maximum-likelihood (ML) algorithm proposed by Benvenuto performs better than the traditional Wiener filter, but it does not converge stably, and the point spread function (PSF) is assumed to be known and fixed during iterative restoration. In fact, the PSF is unknown in most cases, and its estimate should be updated alternately during optimization. To address these limitations, this paper proposes an improved ML (IML) reconstruction algorithm, which incorporates PSF estimation by means of parameter identification into ML and updates the PSF successively during iteration. The IML algorithm converges stably and achieves better results. Experimental results show that the proposed algorithm performs much better than ML in terms of peak signal-to-noise ratio, mean square error, and average contrast.
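The alternating idea at the heart of IML can be sketched with Richardson-Lucy-style multiplicative maximum-likelihood steps, applied in turn to the object and to the PSF. This is a generic blind-RL sketch under a circular-convolution model, not the paper's parameter-identification scheme; the PSF is kept image-sized with its peak at index (0, 0), the FFT convention:

```python
import numpy as np

def iml_restore(img, psf, n_iter=20):
    """Alternating ML restoration: Richardson-Lucy multiplicative
    updates applied in turn to the object and the PSF estimate."""
    F = np.fft.fft2
    def iF(X):
        return np.real(np.fft.ifft2(X))
    obj = np.full_like(img, img.mean())
    psf = psf / psf.sum()
    eps = 1e-12
    for _ in range(n_iter):
        H = F(psf)
        ratio = img / (iF(F(obj) * H) + eps)          # data / current model
        obj = np.clip(obj * iF(F(ratio) * np.conj(H)), 0, None)  # RL object step
        O = F(obj)
        ratio = img / (iF(O * F(psf)) + eps)
        psf = np.clip(psf * iF(F(ratio) * np.conj(O)), 0, None)  # same step on PSF
        psf /= psf.sum()                               # keep unit energy
    return obj, psf
```

Renormalizing the PSF after each update plays the role of a constraint that keeps the object/PSF factorization identifiable.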
A modified multiframe restoration algorithm for degraded images, based on the method developed by V. Katkovnik, is described in this paper. The projection-gradient algorithm based on anisotropic LPA-ICI filtering proposed by Katkovnik can only restore images contaminated by Gaussian noise, and it is complicated and time-consuming. By improving Katkovnik's cost function and applying constraints on the image intensity values during iteration, we obtain a new multiframe recursive iterative restoration scheme in the frequency domain. The method is suitable for reconstructing images degraded by Gaussian noise, Poisson noise, or mixed noise. Experimental results demonstrate that the method works efficiently and can restore images heavily blurred and corrupted by multiple noise sources.
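The frequency-domain multiframe idea can be illustrated by a regularized least-squares combination of several differently blurred frames. This is a generic multiframe Wiener sketch under a circular-convolution model, not Katkovnik's LPA-ICI projection-gradient scheme:

```python
import numpy as np

def multiframe_wiener(frames, psfs, lam=1e-3):
    """Combine K frames g_k = h_k * f + n_k in the frequency domain:
    F = sum_k conj(H_k) G_k / (sum_k |H_k|^2 + lam).
    PSFs are image-sized with their peak at index (0, 0)."""
    num, den = 0.0, lam
    for g, psf in zip(frames, psfs):
        H = np.fft.fft2(psf)
        num = num + np.conj(H) * np.fft.fft2(g)
        den = den + np.abs(H) ** 2
    return np.real(np.fft.ifft2(num / den))
```

A frequency that is suppressed by one blur kernel is often passed by another, so the summed denominator is better conditioned than any single-frame inversion.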
The performance of high-resolution imaging with large optical instruments is severely limited by atmospheric turbulence. Adaptive optics (AO) offers real-time compensation for turbulence. However, the correction is often only partial, and image restoration is required to reach or approach the diffraction limit. In this paper, we consider a hybrid Curvelet-Fourier regularized deconvolution (HCFRD) scheme for image restoration problems. The HCFRD algorithm performs noise regularization via scalar shrinkage in both the Fourier and curvelet domains. The Fourier shrinkage exploits the Fourier transform's economical representation of the colored noise inherent in deconvolution, whereas the curvelet shrinkage exploits the curvelet domain's economical representation of piecewise smooth signals and images. We derive the optimal balance between the amounts of Fourier and curvelet regularization by optimizing an approximate mean-squared error (MSE) metric, and find that signals with more economical curvelet representations require less Fourier shrinkage. HCFRD is applicable to all ill-conditioned deconvolution problems, and its estimate exhibits minimal ringing, unlike purely Fourier-based Wiener deconvolution. Experimental results show that HCFRD outperforms the Wiener filter and the ForWaRD algorithm in terms of both visual quality and SNR performance.
Observed object images are seriously blurred by atmospheric turbulence, and deconvolution is required to reconstruct objects from turbulence-degraded images. The wavelet transform provides a multiresolution approach to image analysis and processing. We consider a wavelet-based adaptive edge-preserving regularization deconvolution (WbARD) scheme for image restoration problems. This is accomplished by first casting the classical image restoration problem into the wavelet domain and considering the behavior of the blur operator in the à trous wavelet domain. We can then adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge-preservation properties of the regularization. Experimental results show that the WbARD algorithm performs well in comparison with standard direct restoration approaches for turbulence-degraded images.
In spaceborne remote-sensing observation of the Earth, the aperture of the satellite's optical system is becoming larger and larger. However, larger apertures are increasingly constrained by manufacturing cost and system payload. Optical sparse aperture imaging systems are composed of several smaller sub-apertures arrayed according to certain rules. But because the aperture of an optical sparse aperture imaging system only partially fills the equivalent single large aperture, the system point spread function spreads and the response to middle and low spatial frequencies is reduced. Consequently, the obtained images are blurred and should be restored to improve resolution. This paper uses an optical sparse aperture system established in the laboratory to image an aviation remote-sensing negative, obtaining simulated sparse aperture remote-sensing images. The resolution of the simulated images is greatly increased by the presented method, which incorporates a Laplacian factor into the incremental Wiener filter. Through this restoration process, the limitation of the system itself is well compensated. Experimental results show that the algorithm suits many kinds of sparse systems with different array structures and filling factors, and that the resolution and SNR (signal-to-noise ratio) of the simulated remote-sensing images can be improved greatly.
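The Laplacian-regularized Wiener idea corresponds to the classical constrained least-squares filter, in which the Laplacian operator penalizes non-smooth solutions. A minimal sketch under a circular-convolution model (the paper's exact incremental Wiener scheme is not reproduced here):

```python
import numpy as np

def cls_restore(img, psf, gamma=1e-3):
    """Constrained least-squares restoration:
    F = conj(H) G / (|H|^2 + gamma |P|^2), with P the discrete Laplacian.
    The PSF is image-sized with its peak at index (0, 0)."""
    H = np.fft.fft2(psf)
    lap = np.zeros_like(img, dtype=float)
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    P = np.fft.fft2(lap)
    G = np.fft.fft2(img)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```

Since |P| grows with spatial frequency, the Laplacian term damps exactly the high frequencies where |H| is small and a plain inverse filter would amplify noise.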
The performance of high-resolution imaging with large optical instruments is severely limited by atmospheric turbulence. Image deconvolution methods such as iterative blind deconvolution (IBD) and Richardson-Lucy (RL) deconvolution are required. The IBD method imposes constraints such as conservation of energy, positivity, and finite support of known size, alternately on the image and the PSF in the spatial and Fourier domains, until convergence. The iterative RL solution converges to the maximum-likelihood solution for Poisson statistics in the data. The properties that make the RL algorithm well suited to IBD are energy conservation and the preservation of nonnegativity, so RL was incorporated into the IBD framework. In this paper, an enhanced Richardson-Lucy-based iterative blind deconvolution (ERL-IBD) algorithm is proposed to restore images blurred by atmospheric turbulence. The ERL-IBD incorporates dynamic PSF support estimation, a bandwidth constraint of the optical system, and an asymmetry-factor update. The experimental results demonstrate that the ERL-IBD algorithm performs better than the IBD algorithm in deconvolution of turbulence-blurred images.
The performance of high-resolution imaging with large optical instruments is severely limited by atmospheric turbulence, and image deconvolution is required to reach the diffraction limit. A new astronomical image deconvolution algorithm is proposed that incorporates a dynamic support region and an improved cost function into the NAS-RIF algorithm. The enhanced NAS-RIF (ENAS-RIF) method takes the noise in the image into account and can dynamically shrink the support region (SR) during application. In the restoration process, the initial SR is set to the approximate contour of the true object, and the SR then contracts automatically as the iterations proceed. The approximate contour of the object of interest is detected by beamlet-transform edge detection. The ENAS-RIF algorithm is applied to the restoration of indoor laser point-source images and long-exposure extended-object images. The experimental results demonstrate that the ENAS-RIF algorithm performs better than the classical NAS-RIF algorithm in deconvolution of degraded images with low SNR, and converges faster.
Atmospheric turbulence severely limits the angular resolution of ground-based telescopes. When adaptive optics (AO) compensation is used, the wavefront sensor data permit estimation of the residual PSF. Yet this estimation is imperfect, and deconvolution is required to reach the diffraction limit. A joint deconvolution method based on the power spectral density (PSD) for AO images is presented. It is derived from a Bayesian framework in the context of imaging through turbulence with adaptive optics. The method uses a noise model that accounts for photon and detector noise. It incorporates a positivity constraint and some a priori knowledge of the object (an estimate of its local mean and a model for its power spectral density). Finally, it accounts for imperfect knowledge of the point spread function (PSF) by estimating the PSF jointly with the object under soft constraints, rather than blindly; these constraints are designed to embody our knowledge of the PSF. Deconvolution results are presented for both simulated and experimental data.
Atmospheric turbulence severely limits the angular resolution of ground-based telescopes. When adaptive optics (AO) compensation is used, the wavefront sensor data permit estimation of the residual PSF. Yet this estimation is imperfect, and deconvolution is required to reach the diffraction limit; it is a powerful, low-cost high-resolution imaging technique designed to compensate for the image degradation due to atmospheric turbulence. A joint deconvolution method based on slope measurements for AO images is presented. It is derived from a Bayesian framework in the context of imaging through turbulence with adaptive optics. The method takes into account the noise in the images and in the Hartmann-Shack wavefront sensor measurements, as well as the available a priori information on the object to be restored and on the wavefronts. Deconvolution results are presented for experimental data.
Adaptive optics (AO) systems provide real-time compensation for atmospheric turbulence. However, the correction is often only partial, and deconvolution is required to reach the diffraction limit. The Richardson-Lucy (R-L) algorithm is the technique most widely used for AO image deconvolution, but the standard R-L algorithm (SRLA) often suffers from speckling, wraparound artifacts, and noise. A modified R-L algorithm (MRLA) for AO image deconvolution is presented. This algorithm applies Magain's correct-sampling approach and incorporates noise statistics into the standard R-L algorithm. An alternating iterative method is applied to estimate the PSF and the object. Comparative experiments on indoor data and AO images are performed with the SRLA and the MRLA. Experimental results show that the MRLA outperforms the SRLA.
Distortions between a decompressed image and the original remote-sensing image can be divided into two parts: geometric degradation and radiometric degradation. Geometric degradation is the error or difference in pixel position or geometric structure compared with the original image. Measuring this geometric distortion is very important in areas such as digital photogrammetry and computer vision, because it is this distortion that influences the position of objects. This paper provides a modified method for measuring this distortion on the basis of modified least squares (MLS) image matching.