Multi-label scene classification (MLSC) in remote sensing (RS) plays a crucial role in recognizing the intricate contents of RS images. However, the MLSC task is challenged by the complexity of label combinations and the diversity of visual content. Multispectral (MS) data provide valuable information for scene classification, and accurate classification requires fully exploiting the diverse information contained in MS data. To this end, we propose a multispectral transformer network (MST-Net), which leverages transformer-based architectures to capture the diverse information within MS images and the complex relationships between labels and features. Specifically, MST-Net consists of a feature fusion encoder and a semantic query decoder. Within the encoder, we design an MS deformable attention module based on a sampling strategy that reduces focus on redundant spectral areas, allowing for better integration of complementary MS information. In the decoder, geographic information is introduced as an inductive bias, exploiting the unique spatiotemporal characteristics of RS images to learn better class-related features. Extensive experiments were conducted on two RS multi-label datasets, namely, LSCIDMRv2 and BigEarthNet. Comparisons with several state-of-the-art multi-label classification methods demonstrate the effectiveness and superiority of MST-Net.
Keywords: Transformers, Remote sensing, Semantics, Feature extraction, Scene classification, Visualization, Education and training
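To make the two ideas summarized above more concrete, the sketch below illustrates, under stated assumptions, (1) a deformable-style attention that attends to only a few sampled locations of a fused MS feature map rather than the full spatial grid, and (2) a label-query decoder whose class queries are conditioned on geographic coordinates as an inductive bias. All module names, dimensions, the offset/weight heads, and the geographic conditioning scheme are illustrative assumptions, not the authors' released implementation of MST-Net.

```python
# Minimal, hypothetical sketch of sparse-sampling attention and a
# geo-conditioned label-query decoder; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseSamplingAttention(nn.Module):
    """Attend to K sampled locations per query instead of the full HxW grid."""

    def __init__(self, dim: int, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        self.offset_head = nn.Linear(dim, 2 * num_points)   # (dx, dy) per sampled point
        self.weight_head = nn.Linear(dim, num_points)        # attention weight per point
        self.value_proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, query: torch.Tensor, feat: torch.Tensor, ref: torch.Tensor):
        # query: (B, Q, C), feat: (B, C, H, W), ref: (B, Q, 2) in [-1, 1]
        B, Q, C = query.shape
        value = self.value_proj(feat)
        offsets = self.offset_head(query).view(B, Q, self.num_points, 2).tanh()
        weights = self.weight_head(query).softmax(dim=-1)            # (B, Q, K)
        # Sample K points around each reference location with grid_sample.
        grid = (ref.unsqueeze(2) + 0.1 * offsets).clamp(-1, 1)       # (B, Q, K, 2)
        sampled = F.grid_sample(value, grid, align_corners=False)    # (B, C, Q, K)
        sampled = sampled.permute(0, 2, 3, 1)                        # (B, Q, K, C)
        return (weights.unsqueeze(-1) * sampled).sum(dim=2)          # (B, Q, C)


class GeoConditionedDecoder(nn.Module):
    """Class queries biased by geographic metadata before cross-attention."""

    def __init__(self, num_classes: int, dim: int = 256, num_points: int = 4):
        super().__init__()
        self.class_queries = nn.Embedding(num_classes, dim)
        self.geo_mlp = nn.Sequential(nn.Linear(2, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.cross_attn = SparseSamplingAttention(dim, num_points)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, fused_feat: torch.Tensor, latlon: torch.Tensor):
        # fused_feat: (B, C, H, W) fused multispectral features; latlon: (B, 2), scaled
        B = fused_feat.size(0)
        q = self.class_queries.weight.unsqueeze(0).expand(B, -1, -1)
        q = q + self.geo_mlp(latlon).unsqueeze(1)             # geographic inductive bias
        ref = torch.zeros(B, q.size(1), 2, device=q.device)   # reference at image center
        q = q + self.cross_attn(q, fused_feat, ref)
        return self.classifier(q).squeeze(-1)                 # one logit per label


if __name__ == "__main__":
    decoder = GeoConditionedDecoder(num_classes=19)
    feat = torch.randn(2, 256, 16, 16)                   # stand-in for fused MS features
    latlon = torch.tensor([[48.1, 11.6], [40.4, -3.7]]) / 90.0
    print(decoder(feat, latlon).shape)                   # torch.Size([2, 19])
```

In this sketch, each label query inspects only a handful of offset-predicted locations, which is the spirit of the sampling strategy described in the abstract for suppressing redundant spectral areas, while the geographic MLP adds a per-scene bias to the class queries before cross-attention.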