In this paper, we propose a lightweight approach to low-light image enhancement that addresses noise amplification, detail loss, and edge blurring, problems that typically hinder downstream computer vision tasks. Our method combines transformers with depthwise separable convolution, integrates the ISP (image signal processing) pipeline, and incorporates semantic information. First, we introduce an enhancement-compensation extraction module built on a transformer-based two-branch structure; notably, depthwise separable convolution replaces the original multi-head attention to keep the design lightweight. The local branch estimates pixel-level illumination defects in the low-light image, while the global branch enhances global structural information. Next, we design a progressive enhancement module that receives the enhancement compensation and reconstructs the enhanced image via the ISP pipeline; together, these two modules form the enhancement network. Finally, we design a VGG16-based semantic segmentation module to preserve semantic information throughout the enhancement process. Evaluations on benchmark datasets and extensive comparisons with other algorithms demonstrate the effectiveness of the proposed method: the reconstructed images show improved brightness, contrast, and detail sharpness, while effectively mitigating noise amplification and edge blurring.
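The lightweight gain from substituting depthwise separable convolution for dense attention or standard convolution can be illustrated with a simple parameter count. The sketch below is a minimal illustration assuming hypothetical layer sizes (64 input and output channels, a 3x3 kernel); it is not drawn from the paper's actual architecture.

```python
# Parameter-count comparison: standard convolution vs. depthwise separable
# convolution (the lightweight substitution described in the abstract).
# Channel and kernel sizes are illustrative assumptions, not the paper's.

def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution (bias omitted)."""
    depthwise = c_in * k * k      # per-channel spatial filtering
    pointwise = c_in * c_out      # 1x1 cross-channel mixing
    return depthwise + pointwise

c_in, c_out, k = 64, 64, 3
std = standard_conv_params(c_in, c_out, k)        # 64*64*9  = 36864
sep = depthwise_separable_params(c_in, c_out, k)  # 64*9 + 64*64 = 4672
print(std, sep, round(std / sep, 1))  # prints: 36864 4672 7.9
```

For these assumed sizes, the separable factorization uses roughly 8x fewer weights than the standard convolution, which is the kind of saving that motivates its use in place of heavier attention blocks.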