Distortion caused by atmospheric turbulence during long-range imaging can result in low-quality images and videos. This, in turn, greatly increases the difficulty of any post-acquisition task such as tracking or classification. Mitigating such distortions is therefore important if post-processing steps are to succeed. We make use of the EDVR network, initially designed for video restoration and super-resolution, to mitigate the effects of turbulence. This paper presents two modifications to the training and architecture of EDVR that improve its applicability to turbulence mitigation: the replacement of the deformable convolution layers present in the original EDVR architecture, and the addition of a perceptual loss. The paper also analyses common image quality assessment metrics and evaluates their suitability for comparing turbulence mitigation approaches. In this context, traditional metrics such as Peak Signal-to-Noise Ratio (PSNR) can be misleading, as they may reward undesirable attributes, such as increased contrast, rather than recovered high-frequency detail. We argue that the applications for which turbulence-mitigated imagery is used should be the real markers of quality for any turbulence mitigation technique. To aid in this, we also present a new turbulence classification dataset that can be used to measure classification performance before and after turbulence mitigation.
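For context, the PSNR metric criticised above is a simple function of the mean squared error between a reference and a distorted image, which is why it can favour global changes such as a contrast boost over genuine detail recovery. A minimal sketch (the function name and `max_val` default are our own, not from the paper):

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Because the score depends only on per-pixel error magnitude, two restorations with very different visual quality can receive similar PSNR values, which motivates the task-based evaluation the paper argues for.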
In long-range imagery, the atmosphere along the line of sight can introduce unwanted visual effects. Random variations in the refractive index of the air cause light to shift and distort; when captured by a camera, these random variations produce blurred and spatially distorted images. Removing such effects is highly desirable. Many traditional methods can reduce the effects of turbulence within images, but they require complex optimisation procedures or are computationally expensive. The use of deep learning for image processing has become commonplace, with neural networks outperforming traditional methods in many fields. This paper presents an evaluation of various deep learning architectures on the task of turbulence mitigation. The core disadvantage of deep learning is its dependence on a large quantity of relevant data. For turbulence mitigation, real-world data is difficult to obtain, as a clean, undistorted reference image is not always available. Turbulent images were therefore generated with a turbulence simulator, which can accurately represent atmospheric conditions and apply the resulting spatial distortions to clean images. This paper compares current state-of-the-art image reconstruction convolutional neural networks. Each network is trained on simulated turbulence data and then assessed on a series of test images. It is shown that the networks are unable to produce high-quality output images; however, they do reduce the effects of spatial warping within the test images. This paper provides a critical analysis of the effectiveness of applying deep learning to this problem, showing that deep learning has potential in this field and can underpin further improvements.
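To illustrate the kind of spatially varying distortion such a simulator applies, a toy version can be sketched in NumPy: a smooth random per-pixel displacement field is generated and the clean image is resampled through it. This is only a simplified illustration, not the physics-based simulator used in the paper (the coarse-grid noise model and all parameter names are our own):

```python
import numpy as np

def random_warp_field(h, w, cell=8, strength=1.5, seed=0):
    """Toy displacement field: Gaussian noise on a coarse grid,
    upsampled to full resolution (nearest-neighbour for brevity)."""
    rng = np.random.default_rng(seed)
    coarse = rng.standard_normal((2, h // cell + 1, w // cell + 1)) * strength
    field = np.repeat(np.repeat(coarse, cell, axis=1), cell, axis=2)
    return field[:, :h, :w]  # (dy, dx) in pixels, shape (2, h, w)

def warp(image, field):
    """Bilinearly resample `image` at coordinates shifted by `field`."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    ys = np.clip(yy + field[0], 0, h - 1)
    xs = np.clip(xx + field[1], 0, w - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = ys - y0, xs - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Pairing each clean image with many such randomly warped (and blurred) copies is what allows a network to be trained on ground-truth data that real turbulent captures cannot provide.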