Ground object information extraction is key to remote sensing image applications. High-resolution remote sensing images contain complex feature information, and traditional feature extraction methods face accuracy limitations that deep learning techniques largely overcome. To address the DeepLabv3+ model's slow speed and inaccurate boundary-region segmentation in remote sensing feature extraction, this paper introduces an attention mechanism, embedding spatial and channel attention modules in the feature extraction network. The combined model was tested on the ISPRS remote sensing dataset and achieved 78.68% accuracy. The results show that the proposed network structure generalizes well and is feasible for ground object information extraction from high-resolution remote sensing images.
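The paper does not publish its implementation, but a dual spatial-and-channel attention design of the kind the abstract describes is commonly realized as a channel-attention stage followed by a spatial-attention stage (as in CBAM), applied to a backbone feature map before DeepLabv3+'s ASPP head. The PyTorch sketch below is a hypothetical illustration of such a module; all class names, the reduction ratio, and the kernel size are assumptions rather than the authors' code.

```python
# Hypothetical sketch: channel attention followed by spatial attention
# (CBAM-style), as one way to realize the dual attention the abstract
# describes. Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweights feature channels using pooled global descriptors."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """Reweights spatial positions using channel-pooled maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class DualAttention(nn.Module):
    """Channel attention followed by spatial attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    # Example: refine a 2048-channel backbone feature map before the ASPP head.
    feats = torch.randn(2, 2048, 32, 32)
    print(DualAttention(2048)(feats).shape)  # torch.Size([2, 2048, 32, 32])
```

Applying channel attention first lets the network decide which feature maps matter before the spatial stage decides where they matter, which is the ordering typically used for boundary-sensitive segmentation tasks like this one.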
Wenbo Su, Shusong Huang, WenPing Qi, Yuhao Wang, and Wei Yi
"Extracting ground object information from remote sensing images based on DeepLabv3+ model integrating dual attention mechanism", Proc. SPIE 11848, International Conference on Signal Image Processing and Communication (ICSIPC 2021), 118480P (1 June 2021); https://doi.org/10.1117/12.2600380