Target extraction is an important aspect of remote sensing image analysis and processing, with wide applications in image compression, target tracking, target recognition, and change detection. Among the many possible targets, airports have attracted increasing attention due to their military and civilian significance. In this paper, we propose a novel and reliable airport object extraction model that combines a visual attention mechanism with a parallel line detection algorithm. First, a novel saliency analysis model for remote sensing images containing airport regions is proposed to perform statistical saliency feature analysis. The proposed model precisely extracts the most salient region and effectively suppresses background interference. Then, prior geometric knowledge is analyzed, and airport runways, which contain two parallel lines of similar length, are detected efficiently. Finally, we use an improved Otsu threshold segmentation method to segment and extract the airport regions from the saliency map of the remote sensing image. Experimental results demonstrate that the proposed model outperforms existing saliency analysis models and performs well in airport detection.
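The abstract does not give the details of the parallel line detection step, but the geometric prior it states, that a runway appears as two near-parallel lines of similar length, can be sketched as a simple pairing filter over already-detected line segments. The function name, the segment representation `(angle_degrees, length)`, and the tolerance values below are all illustrative assumptions, not the paper's actual algorithm.

```python
def find_runway_pairs(segments, max_angle_diff=3.0, max_len_ratio=1.2):
    """Pair line segments that are near-parallel and of similar length,
    a simple geometric filter for runway candidates.

    segments: list of (angle_degrees, length) tuples, e.g. from a Hough
    transform (hypothetical upstream step). Returns index pairs (i, j).
    """
    pairs = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            a1, l1 = segments[i]
            a2, l2 = segments[j]
            d = abs(a1 - a2) % 180
            d = min(d, 180 - d)                 # line angles wrap at 180 degrees
            ratio = max(l1, l2) / min(l1, l2)   # length similarity, >= 1
            if d <= max_angle_diff and ratio <= max_len_ratio:
                pairs.append((i, j))
    return pairs
```

For example, segments at 10° and 11° with lengths 100 and 95 would be paired, while a segment at 80° would be rejected by the angle test regardless of its length.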
The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and the salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract salient regions and analyze object saliency in an image, which saves time and avoids unnecessary computing costs. In this paper, a novel visual saliency analysis model based on a dynamic multiple-feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color, and orientation using Gaussian pyramids and center-surround differences. Then, we evaluate the contribution of each feature map to the saliency map according to the area of its salient regions and their average intensity, and assign different weights to the features according to their importance. Finally, we choose the largest salient region generated by a region growing method for evaluation. Experimental results show that the proposed model not only achieves higher accuracy in saliency map computation than traditional saliency analysis models, but also extracts salient regions of arbitrary shape, which is of great value for image analysis and understanding.
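The Gaussian pyramid and center-surround difference steps mentioned above can be sketched as follows. This is a minimal, generic version of those standard operations (using `scipy.ndimage`), not the paper's implementation; the blur sigma, the number of levels, and the choice of center/surround levels are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels=4):
    """Build a Gaussian pyramid by repeatedly blurring and downsampling by 2."""
    pyr = [img]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyr[-1], sigma=1.0)
        pyr.append(blurred[::2, ::2])
    return pyr

def center_surround(pyr, c=0, s=2):
    """Across-scale difference: a fine 'center' level minus the coarser
    'surround' level upsampled back to the center's resolution."""
    center, surround = pyr[c], pyr[s]
    factor = (center.shape[0] / surround.shape[0],
              center.shape[1] / surround.shape[1])
    surround_up = zoom(surround, factor, order=1)  # bilinear upsampling
    return np.abs(center - surround_up)
```

Applied to an intensity channel, the resulting map responds strongly where a region differs from its coarser-scale surroundings, which is the basic cue the feature maps in the model are built from.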
Region of interest (ROI) extraction is an important component of remote sensing image processing and is useful for practical applications such as image compression, image fusion, image segmentation, and image registration. Traditional ROI extraction methods are usually based on prior knowledge and depend on a global search, which is time-consuming and computationally complex. Saliency detection, which has been widely used for ROI extraction from natural scene images in recent years, can effectively reduce the computational complexity of ROI extraction for remote sensing images while retaining accuracy. In this paper, a new computational model is proposed to improve the accuracy of ROI extraction in remote sensing images. Considering the characteristics of remote sensing images, we first use a lifting wavelet transform based on adaptive direction evaluation (ADE) to obtain a multi-scale orientation contrast feature map (MF). Secondly, color features are exploited through information content analysis to provide a color information map (CIM). Thirdly, feature fusion integrates the multi-scale orientation contrast features and the color information to generate a saliency map. Finally, an adaptive threshold segmentation algorithm is employed to obtain the ROI. Compared with existing models, our method not only effectively extracts the details of the ROIs, but also effectively removes false detections inside the ROIs.
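The fusion and adaptive-threshold steps at the end of this pipeline can be sketched generically. The abstract does not specify the fusion weights or the threshold rule, so the weighted-sum fusion and the mean-plus-k-sigma threshold below are common, hypothetical choices; `mf` and `cim` stand for the MF and CIM maps named above, assumed already computed.

```python
import numpy as np

def normalize(fmap):
    """Scale a feature map to [0, 1]; a flat map becomes all zeros."""
    lo, hi = fmap.min(), fmap.max()
    return (fmap - lo) / (hi - lo) if hi > lo else np.zeros_like(fmap)

def fuse(mf, cim, w_mf=0.5, w_cim=0.5):
    """Weighted fusion of the orientation contrast (MF) and color
    information (CIM) maps into a single saliency map."""
    return w_mf * normalize(mf) + w_cim * normalize(cim)

def adaptive_roi(saliency, k=1.0):
    """Adaptive threshold derived from the map's own statistics:
    keep pixels above mean + k * std."""
    t = saliency.mean() + k * saliency.std()
    return saliency >= t
```

Because the threshold is computed from the saliency map itself rather than fixed in advance, the same code adapts to images with very different contrast, which is the point of using an adaptive rule here.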