In low-light conditions, object detection algorithms suffer from reduced accuracy due to factors such as noise and insufficient information. Current solutions often adopt a two-stage pipeline: first improving image illumination and then performing object detection. However, this approach is limited because the two networks operate independently. To address this, we propose a parallel object detection algorithm for low-light environments. Our approach encodes image features simultaneously with an illumination enhancement network and an object detection network. This design allows the two networks to adapt to each other, improving the suitability of the learned features for object detection. We further improve adaptive learning efficiency by introducing a novel mutual feedback mechanism that dynamically adjusts the learning weights of the two networks, thereby strengthening the network's capacity to encode object-related information in low-light conditions. Experiments were conducted on both real-world and synthetic datasets. On the real-world dataset, the proposed method outperformed the original object detection network, achieving improvements of 4.76% in mAP@0.5, 12.12% in mAP@0.5:0.95, and 8.4% in …
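To make the described design more concrete, the following is a minimal PyTorch-style sketch of a parallel dual-branch detector in which an enhancement encoder and a detection encoder process the same low-light image, their features are fused, and a simple loss-based weighting stands in for the mutual feedback mechanism. All module names, layer choices, and the specific weighting rule are illustrative assumptions; the abstract does not specify the actual architectures or feedback formula.

```python
# Illustrative sketch only: branch structures and the feedback rule are assumptions.
import torch
import torch.nn as nn


class EnhancementBranch(nn.Module):
    """Stand-in for the illumination-enhancement encoder."""
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.encode(x)


class DetectionBranch(nn.Module):
    """Stand-in for the object-detection encoder (e.g., a backbone stem)."""
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.encode(x)


class ParallelLowLightDetector(nn.Module):
    """Encodes the image with both branches in parallel and fuses their features."""
    def __init__(self, channels=32):
        super().__init__()
        self.enhance = EnhancementBranch(channels)
        self.detect = DetectionBranch(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        f_enh = self.enhance(x)                       # illumination features
        f_det = self.detect(x)                        # detection features
        f_enh = nn.functional.interpolate(            # match spatial sizes
            f_enh, size=f_det.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([f_enh, f_det], dim=1))


def mutual_feedback_weights(loss_enh, loss_det, eps=1e-8):
    """Illustrative feedback rule: weight each branch by its current share of
    the total loss, so the two objectives adapt to each other during training."""
    total = loss_enh.detach() + loss_det.detach() + eps
    return loss_enh.detach() / total, loss_det.detach() / total


if __name__ == "__main__":
    model = ParallelLowLightDetector()
    img = torch.rand(1, 3, 256, 256)                  # a low-light RGB image
    feats = model(img)
    # Placeholder losses standing in for the enhancement and detection objectives.
    loss_enh, loss_det = feats.abs().mean(), feats.pow(2).mean()
    w_enh, w_det = mutual_feedback_weights(loss_enh, loss_det)
    total_loss = w_enh * loss_enh + w_det * loss_det
    total_loss.backward()
    print(feats.shape, float(total_loss))
```

In this sketch the "mutual feedback" is reduced to a dynamic loss-weighting step; the paper's actual mechanism for adjusting the learning weights of the two networks may differ.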
Keywords: Object detection, Image enhancement, Light sources and illumination, Detection and tracking algorithms, Education and training, Image processing, Data modeling