Image inpainting, which aims to fill missing regions of corrupted images with semantically plausible content, is a promising but challenging computer vision task. By exploiting generative adversarial networks (GANs), state-of-the-art methods have achieved substantial improvements, but the ordinary GAN suffers from unstable training and gradients, leading to unsatisfactory inpainting results. Image-level predictive filtering is a widely used restoration technique that adaptively predicts weights for the pixels around a target pixel and linearly combines those pixels to reconstruct it; however, it cannot fill large missing regions. We therefore extend predictive filtering from the image level to the deep feature level through an encoder–decoder network and embed adaptive channel-attention and spatial-attention modules in the encoder. We replace the ordinary GAN with the Wasserstein GAN, whose training is more stable, and combine it with image-level and deep feature-level predictive filtering, yielding a significant improvement in inpainting quality. We validate our method on two public datasets, CelebA-HQ and Places2, where it performs well on four metrics: peak signal-to-noise ratio, L1 error, structural similarity index measure, and learned perceptual image patch similarity.
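The core idea of image-level predictive filtering described above can be sketched as a per-pixel adaptive convolution: every output pixel is a weighted sum of its neighbours, with a separate weight vector predicted for each pixel. The sketch below is a minimal NumPy illustration of that mechanism, not the paper's implementation; in the actual method the kernels would be predicted by a network from the corrupted input, whereas here they are supplied directly.

```python
import numpy as np

def predictive_filter(image, kernels, k=3):
    """Per-pixel predictive filtering (illustrative sketch).

    image   : (H, W) array, the corrupted image.
    kernels : (H, W, k*k) array, one predicted weight vector per pixel
              (in the paper these would come from a prediction network).
    Each output pixel is the weighted sum of its k x k neighbourhood.
    """
    H, W = image.shape
    pad = k // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.zeros((H, W), dtype=np.float64)
    # Accumulate one shifted copy of the image per kernel tap.
    for dy in range(k):
        for dx in range(k):
            idx = dy * k + dx
            out += kernels[:, :, idx] * padded[dy:dy + H, dx:dx + W]
    return out

# With uniform weights this degenerates to a box blur; spatially varying
# predicted weights are what make the filtering "predictive".
img = np.arange(25, dtype=np.float64).reshape(5, 5)
uniform = np.full((5, 5, 9), 1.0 / 9.0)
filtered = predictive_filter(img, uniform, k=3)
# Interior pixel (2, 2) becomes the mean of its 3x3 neighbourhood: 12.0
```

Because each pixel's weights are predicted independently, the filter can adapt to local structure (edges, textures), which is why the paper lifts the same mechanism to deep feature maps for larger missing regions.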