Remote Sensing Applications and Decision Support

Terrain classification of polarimetric synthetic aperture radar imagery based on polarimetric features and ensemble learning

Author Affiliations
Chuanbo Huang

Southwest University of Science and Technology, School of National Defense Science and Technology, Mianyang City, Sichuan Province, China

J. Appl. Remote Sens. 11(2), 026002 (Apr 03, 2017). doi:10.1117/1.JRS.11.026002
History: Received December 7, 2016; Accepted March 17, 2017

Open Access

Abstract.  An evolutionary classification system for terrain classification of polarimetric synthetic aperture radar (PolSAR) imagery, based on ensemble learning with polarimetric and texture features, is proposed. In some complex areas, polarimetric measurements alone cannot produce sufficient identification information for PolSAR terrain classification. To address this issue, texture features have been successfully used in image segmentation. The classification feature adopted by the system combines Pauli features with the last principal component obtained from dimensionality reduction of the Gabor texture features. This feature combination, selected through experimental analysis, is well suited to describing structural and spatial information. To obtain a good integration effect, the base classifiers should be as precise as possible, and the differences among them should be as distinct as possible. We therefore examine and construct an ensemble weighted-voting classifier comprising two support vector machine models built on radial basis and sigmoid kernel functions, an extreme learning machine, a k-nearest neighbor classifier, and a discriminant analysis classifier; their different theoretical backgrounds help avoid redundancy and bias. An experiment was performed to estimate the proposed algorithm’s performance. The results verified that the algorithm obtains better accuracy than the individual classifiers examined in this paper.


Polarimetric synthetic aperture radar (PolSAR) has emerged through the evolution of airborne and satellite remote sensing.1–6 PolSAR application requirements have become increasingly important because the mismatch between the vast amount of remote sensing data and inadequate information-processing capabilities has become significant. PolSAR data contain information on various scattering mechanisms of different terrain structures and materials.7–14 Currently, PolSAR is used as a type of radar that can provide terrain classification information.

In recent years, PolSAR research in remote sensing has become widespread because a greater amount of information on scattering objects is found in PolSAR data than in single polarization and double polarization data.15,16 PolSAR data depict the state changes of microwave polarization, which is produced by the dielectric constant and terrain structure. Therefore, terrain features can be obtained from the PolSAR data.1 Moreover, terrain classification of PolSAR imagery is considered a typical PolSAR application, which employs features and classifiers to separate different terrain types.

Currently, PolSAR image classification has been successfully applied to many practical problems. Related information processing techniques are being continuously developed, such as image filtering, feature extraction, and classification algorithms. PolSAR can measure each observation terrain using a full scattering matrix. Its feature distribution complexity often results in scattered signals of different terrains with similar features, which increases the difficulty of discriminating information extraction. Many algorithms have been proposed4,5 for polarimetric data analyses, such as eigenvalue analysis, polarization decomposition, and so on.

Feature extraction and the classifier structure are two important factors that affect the final classification result. Pixel-based PolSAR terrain classification specifies the category of each pixel in the feature space. These features indicate the pixel characteristics and can be Pauli features, grayscales, textures, and so on. They must be extracted and used as input for the terrain discrimination. Different objects have different surface features. Therefore, the PolSAR image textural features have location discontinuities in the properties of adjacent image regions. Thus, they can be considered features for classification. Because the texture features contain rich identification information, they are important for classification.17

Several approaches exist for extracting the texture features, such as the gray-level co-occurrence matrix (GLCM), wavelets, Gabor filters, and local binary patterns (LBPs). Furthermore, textural features have been utilized for object detection and terrain classification.18,19 An efficient use of the Gabor filter was demonstrated by Leone and Distante20 for terrain detection in a textured scene. The LBPs are utilized to evaluate the similarity of neighboring image regions.21 Nonetheless, GLCM remains a more popular method.22 More effective terrain classification performance can be achieved by combining different textures, and the resulting discrimination performance is significantly improved.

Many classification methods exist, such as the classical maximum likelihood,23 neural networks,24 and the decision tree methods.25 In synthetic aperture radar (SAR) image classification and segmentation, the statistical distribution model has been commonly used. Among these models, the Markov random field (MRF)26 is an efficient approach for image classification. However, in addressing nonstationary SAR images, MRF has difficulty obtaining improved results. Meanwhile, the support vector machine (SVM)27–29 is a supervised method based on the pixel level. As a discriminative model, SVM directly employs training sample modeling based on the structural risk minimization criterion. It can thereby handle the linear inseparability problem by introducing the kernel function. Moreover, it does not have local minima because it solves the convex quadratic programming problem. Nonetheless, it does not consider spatial information.

The classifier of the naive Bayes method is a probabilistic classifier based on the independence assumption of Bayes’ theorem. Meanwhile, the k-nearest neighbor (KNN) algorithm, proposed by Cover and Hart,30 has high precision and is not sensitive to outliers. In addition, discriminant analysis is a renowned statistical recognition method that has been successfully applied to category prediction.31 Recently, neural networks have been introduced for image classification on account of their ability to approximate nonlinear functions in the input space.32,33 Moreover, the extreme learning machine (ELM)34 is a feedforward neural network with a single hidden layer. It has strong generalization performance and a very fast classification speed.

With consideration of the above approaches, we herein present and discuss terrain classification of PolSAR imagery based on ensemble learning with Pauli and Gabor features. The main contributions of this paper are the following. Polarimetric and texture features are combined for PolSAR image classification. We utilize an efficient method for PolSAR image feature representation. Our method adopts an ensemble learning strategy and selects five base classifiers to form an ensemble classifier. These classifiers are not only more accurate but also distinct from each other. The weight of each base classifier is obtained by a differential evolution (DE) algorithm.

The remainder of this paper is organized as follows. Pixel feature representation is introduced in Sec. 2. In Sec. 3, the classification system construction is presented. We provide and analyze the simulation results of the experiment in Sec. 4. In Sec. 5, our conclusions and future work are discussed.

Image segmentation requires the assignment of categories to all pixels in an image. Therefore, accurate pixel feature extraction is very important for image segmentation.

Texture Features

The image texture is determined based on the differences in spatial distributions in the image, specifically the frequency and intensity of each pixel. In this study, we examine the use of enhanced discriminatory information to improve classification accuracy. We therefore introduce texture features for classification. The Gabor transform has been widely applied in image processing. The texture features are derived by convolving the image with Gabor filter banks. A set of two-dimensional (2-D) Gabor filters is employed to convolve the image. A 2-D Gabor function is formed by a sinusoidal plane wave with a certain frequency and direction, modulated by a 2-D Gaussian function.35 In the spatial domain, the Gabor filter is formulated as

g(x, y | λ, θ, σ, ψ, γ) = exp(−(x′² + γ²y′²)/(2σ²)) cos(2πx′/λ + ψ),  (1)

where γ represents the spatial aspect ratio and specifies the elliptical support of the Gabor function. In addition, ψ represents the phase shift in degrees; σ represents the Gaussian standard deviation, which specifies the receptive field size; θ denotes the normal direction of the Gabor function's parallel stripes; and λ is the cosine factor wavelength. Moreover, x′ = x cos(θ) + y sin(θ), y′ = −x sin(θ) + y cos(θ), and f = 1/λ represents the cosine factor frequency in the spatial domain.
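As a concrete check on Eq. (1), the real-valued filter can be sampled directly on a pixel grid. The sketch below is a minimal implementation; the parameter values are illustrative, not those used in the paper:

```python
import numpy as np

def gabor_kernel(size, lam, theta, sigma, psi=0.0, gamma=0.5):
    """Real part of the 2-D Gabor filter of Eq. (1): a Gaussian envelope
    (aspect ratio gamma, scale sigma) modulating a cosine wave of
    wavelength lam along direction theta, with phase offset psi."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)    # x' (rotated coordinates)
    y_r = -x * np.sin(theta) + y * np.cos(theta)   # y'
    envelope = np.exp(-(x_r**2 + gamma**2 * y_r**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * x_r / lam + psi)

kernel = gabor_kernel(size=21, lam=8.0, theta=np.pi / 6, sigma=4.0)
```

With ψ = 0 the kernel is even symmetric about its center, matching the real-part choice described below.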

The ratio σ/λ specifies the bandwidth of the spatial frequency of simple cells. The half-response spatial-frequency bandwidth b (in octaves) and the ratio σ/λ are related as follows:36

b = log₂[(σ/λ · π + √(ln 2 / 2)) / (σ/λ · π − √(ln 2 / 2))],  (2)

σ/λ = (1/π) · √(ln 2 / 2) · (2^b + 1)/(2^b − 1),  (3)

where ψ = 90 deg and ψ = 0 deg, respectively, return the imaginary part and real part of the Gabor filter. The real part is equivalent to an even symmetric filter; therefore, we use the real part of the Gabor filter. Orientation parameter θ is assigned the values 0, 30, 60, 90, 120, and 150 deg. The frequency values can be described as follows:

FL(i) = 0.25 − 2^(i−0.5)/Nc,  (4)

FH(i) = 0.25 + 2^(i−0.5)/Nc,  (5)

where i ∈ {1, 2, …, log₂(Nc/8)}, and Nc is the number of image columns. Note that 0.25 ≤ FH(i) < 0.5 and 0 < FL(i) < 0.25. Bandwidth b of the Gabor filter is set to one octave in this study.
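Equations (2)–(5) are simple closed forms; the sketch below implements them directly (with Nc the image-column count, as above). Equations (2) and (3) are inverses of each other, so setting b = 1 octave and converting back recovers b exactly:

```python
import numpy as np

def sigma_over_lambda(b):
    """Eq. (3): the sigma/lambda ratio for a half-response bandwidth of b octaves."""
    return (1.0 / np.pi) * np.sqrt(np.log(2.0) / 2.0) * (2.0**b + 1.0) / (2.0**b - 1.0)

def bandwidth(sig_lam):
    """Eq. (2): the bandwidth b (in octaves) for a given sigma/lambda ratio."""
    c = np.sqrt(np.log(2.0) / 2.0)
    return np.log2((sig_lam * np.pi + c) / (sig_lam * np.pi - c))

def radial_frequencies(n_cols):
    """Eqs. (4) and (5): paired low/high frequencies around 0.25 cycles/pixel."""
    i_max = int(np.log2(n_cols / 8))
    fl = [0.25 - 2.0**(i - 0.5) / n_cols for i in range(1, i_max + 1)]
    fh = [0.25 + 2.0**(i - 0.5) / n_cols for i in range(1, i_max + 1)]
    return fl, fh
```

For Nc = 1024 columns, i runs from 1 to 7, so each orientation pairs with seven low and seven high frequencies, all satisfying the stated bounds.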

The Gaussian smoothing function is used to perform feature extraction. It is defined as follows:

g(x, y) = exp{−(x² + y²)/(2σ²)}.  (6)

Note that we apply a bicubic interpolation method to restore the image size. Finally, we obtain the 96-dimensional Gabor feature.

Principal component analysis (PCA) is an important method of identifying data patterns. PCA can reduce the dimensions without losing excessive information. It is thus often applied to image compression. We therefore employed PCA to reduce the Gabor feature dimensions to five. Figures 1(a)–1(f) show the original grayscale imagery, and the first, third, second, fourth, and fifth principal component imagery, respectively, which were obtained by the PCA reduction of the Gabor feature dimensions.
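The PCA reduction itself can be sketched with a plain eigendecomposition of the feature covariance. The data below are random placeholders standing in for the 96-dimensional per-pixel Gabor responses, not real imagery:

```python
import numpy as np

def pca_reduce(features, n_components=5):
    """Project row-vector features onto their top principal components.
    features: (n_pixels, n_dims), e.g. the 96-dimensional Gabor responses."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)            # (n_dims, n_dims) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return centered @ eigvecs[:, order]             # scores, variance-ordered

# Placeholder data in place of real per-pixel Gabor features.
rng = np.random.default_rng(0)
reduced = pca_reduce(rng.normal(size=(500, 96)), n_components=5)
```

The returned columns are ordered by decreasing explained variance, so the "fifth principal component" used later is the last column.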

Fig. 1: Original grayscale image, obtained from the red/green/blue (RGB) composites of Pauli decomposition, and five principal component images: (a) original grayscale image; (b) first principal component image; (c) third principal component image; (d) second principal component image; (e) fourth principal component image; and (f) fifth principal component image.

Polarimetric Features

Among polarimetric decomposition methods, the Pauli decomposition yields more stable and effective classification results. Therefore, Pauli features were utilized to extract information on the terrain scattering mechanism. Analysis of PolSAR image data is generally based on a matrix. The complex scattering matrix corresponding to the scattering terrain can be measured by PolSAR with different polarizations.7 Four combinations exist for reception and transmission polarizations, namely, horizontal–horizontal (HH), horizontal–vertical (HV), vertical–vertical (VV), and vertical–horizontal (VH). Scattering matrix S characterizes the relations between the polarization states of the incident microwaves and those scattered by the terrain. The general form of the scattering process can be given by

[EHr; EVr] = S [EHt; EVt] = [SHH, SHV; SVH, SVV] [EHt; EVt],  (7)

where EH,Vr and EH,Vt, respectively, denote the received and transmitted electric fields corresponding to the polarization types. The matrix elements take the form SHV = |SHV| exp(iϕHV).

In Eq. (7), the parameters |SHH|, |SHV|, and |SVV| are the amplitudes, and ϕHV and ϕVV are the phases. PolSAR can obtain the information of the scattering terrains from these five parameters.37 Eigenvalue analysis is generally formulated in terms of the coherency matrix T3, which is represented by the Pauli feature vector as follows:

T3 = KP · KP*T.  (8)

The Pauli feature vector used in this study is given by

KP = (1/√2) [SHH + SVV, SHH − SVV, 2SHV]^T,  (9)

where 2SHV, SHH + SVV, and SHH − SVV, respectively, describe the volume scattering, odd-bounce reflection, and even-bounce reflection. Figure 2 shows the results of the Pauli decomposition of the PolSAR image data from San Francisco Bay, California.
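Equations (8) and (9) translate directly into code. The sketch below uses made-up scalar scattering values for a single pixel; with real data, the outer product forming T3 would be spatially averaged:

```python
import numpy as np

def pauli_vector(s_hh, s_hv, s_vv):
    """Per-pixel Pauli feature vector KP of Eq. (9)."""
    return np.array([s_hh + s_vv, s_hh - s_vv, 2.0 * s_hv]) / np.sqrt(2.0)

def coherency_matrix(k_p):
    """Coherency matrix T3 = KP · KP*T of Eq. (8) for one pixel
    (in practice this outer product is averaged over a neighborhood)."""
    return np.outer(k_p, np.conj(k_p))

# Hypothetical complex scattering-matrix channels for one pixel.
k_p = pauli_vector(1.0 + 1.0j, 0.5j, 1.0 - 1.0j)
t3 = coherency_matrix(k_p)
```

By construction T3 is Hermitian with a real, nonnegative trace equal to the total scattered power |KP|².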

Fig. 2: Pauli decomposition map and its pseudocolor composite image: (a) SHH + SVV, (b) SHH − SVV, (c) 2SHV, and (d) pseudocolor composite.

With consideration of the correlations between these descriptors and their ability to represent actual terrain imagery, we chose a reasonable combination of features through experiments for terrain classification.

The primary goal of constructing the ensemble classifier is to increase the classifier capability by combining the superior performance of the various base learners. In general, to obtain effective integration, the base classifiers should be as precise and distinct as possible. This requirement was shown by Krogh and Vedelsby.38 In this study, to construct an effective classifier, we explore and construct an ensemble-weighted voting classifier, including two SVMs, which are formed by the two kernel functions of radial basis function (RBF) and sigmoid, as well as ELM, KNN, and the discriminant analysis classifier. These classifiers have lower computational complexity and different theoretical backgrounds and can avoid redundancies and biases.

SVM is a supervised learning model with associated learning algorithms that analyze classification data. The SVM algorithm first establishes a model from training samples; using this model, new test samples are classified. It is a nonprobabilistic classifier. Intuitively, a good classification result of the SVM classifier is realized by the optimal separating hyperplane: in general, the larger the margin, the lower the classification error rate. SVM can efficiently find nonlinear solutions using the “kernel trick.” Different types of SVM classifiers are thus formed by various kernel functions, including RBF, sigmoid, linear, and polynomial functions.
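For reference, the two kernels used for the base SVMs here (RBF and sigmoid) have simple closed forms. The parameter values below are illustrative defaults, not the tuned values from the experiments:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def sigmoid_kernel(x, y, alpha=0.01, c=0.0):
    """Sigmoid kernel: K(x, y) = tanh(alpha * <x, y> + c)."""
    return np.tanh(alpha * np.dot(x, y) + c)

x = np.array([1.0, 2.0, 3.0, 4.0])
```

An SVM's decision function is then a weighted sum of kernel evaluations against the support vectors, which is why swapping the kernel changes the classifier's character without changing the training algorithm.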

An effective learning method, ELM is a feedforward neural network with one hidden layer. In this method, the weights of the hidden-layer nodes are stochastically selected, and the weights of the output layer are determined by the least-squares method. The hidden neurons can thus be randomly generated independently of the training data. ELM offers significant advantages over conventional neural network learning algorithms, including ease of implementation, a fast learning speed, and minimal need for human intervention. The pseudocode for ELM is given in Table 1. A more detailed introduction to ELM can be found in Ref. 34.
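A minimal ELM of this kind fits in a few lines. The sketch below uses a tanh activation and toy two-cluster data for illustration; it follows the single-hidden-layer recipe described above but is not the exact formulation of Ref. 34:

```python
import numpy as np

def elm_train(x, y_onehot, n_hidden=30, seed=0):
    """Minimal ELM sketch: random, fixed hidden layer; output weights
    solved in closed form by least squares (pseudoinverse)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(x.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    h = np.tanh(x @ w + b)                        # hidden-layer activations
    beta = np.linalg.pinv(h) @ y_onehot           # least-squares output weights
    return w, b, beta

def elm_predict(x, w, b, beta):
    return np.argmax(np.tanh(x @ w + b) @ beta, axis=1)

# Toy two-class data: clusters near (0, 0) and (3, 3).
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(3.0, 0.2, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
w, b, beta = elm_train(x, np.eye(2)[labels])
pred = elm_predict(x, w, b, beta)
```

Because only the output weights are learned, and in closed form, training is a single linear solve, which is the source of ELM's speed advantage.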

Table 1: Algorithmic steps for ELM.

Meanwhile, KNN is an instance-based learning algorithm; in KNN classification, the output is a class membership. A sample is classified by a majority vote of its adjacent samples and is assigned to the category most common among its k nearest neighbors. In addition, discriminant analysis is a method for learning identification features. It has been applied to predict the category to which a subject belongs. There are two basic steps in discriminant analysis. The first step is to estimate the weight factors, which characterize the attributes of the known samples, and to compute the measurements of their trends. In the second step, this information is applied to create a decision-making rule that enforces a threshold for prediction.

We designed the presented classification model based on the requirements of PolSAR terrain classification. We identified the favorable weights of all base classifiers using the approaches presented in Ref. 39. The main steps are given in Table 2.

Table 2: Algorithmic steps for DE optimization.

The scheme of the classification method is illustrated in Fig. 3. The implementation procedure of this methodology is described as follows.

  • Extract textural features based on the Gabor transform and obtain Pauli features from the PolSAR image data. The most suitable texture feature is selected through experiments. The combination of the selected texture features and Pauli features is used for the classification systems.
  • Train ELM, KNN, discriminant analysis, and the two SVM classifiers using training samples.
  • Optimize the weights of each individual classifier to obtain an ensemble classifier. Search the optimal weights by DE. The algorithmic steps of optimizing the weights are given in Table 2. Details are provided in Ref. 39.
  • Provide the prediction category label, Ln, of the weighted voting for each sample, n. The operation is as follows:

    Ln = argmax_j ∑_{i=1}^{D} (gji × ωi),  (10)

    where gji is a binary variable: if the i’th base classifier classifies sample n in the j’th category, then gji = 1; otherwise, gji = 0. In addition, ωi is the weight of the i’th base classifier in the ensemble classifier, and D is the number of base classifiers.
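Equation (10) amounts to accumulating each classifier's weight on the class it votes for and taking the argmax. A sketch with hypothetical votes and weights:

```python
import numpy as np

def weighted_vote(pred_labels, weights, n_classes):
    """Eq. (10): fuse D base classifiers' integer predictions by weighted voting.
    pred_labels: (D, n_samples); weights: (D,); returns (n_samples,) labels."""
    d, n = pred_labels.shape
    scores = np.zeros((n, n_classes))
    for i in range(d):
        # g_ji = 1 exactly where classifier i votes class j; add its weight there.
        scores[np.arange(n), pred_labels[i]] += weights[i]
    return np.argmax(scores, axis=1)

# Hypothetical votes from three of the base classifiers on three samples.
votes = np.array([[0, 1, 2],
                  [0, 2, 2],
                  [1, 1, 0]])
fused = weighted_vote(votes, np.array([0.5, 0.3, 0.2]), n_classes=3)
```

Note that a high-weight classifier can be overruled when two lower-weight classifiers agree, which is the intended behavior of weighted voting.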

Fig. 3: Proposed classification scheme with an ensemble of ELM, KNN, discriminant analysis, RBF SVM, and sigmoid SVM classifiers.

To verify the performance of the proposed PolSAR image classification scheme, we adopted airborne PolSAR data of San Francisco Bay. The PolSAR data were presented with no header in the STK-MLC format with 900 rows × 1024 columns. They provided coverage of three classes: urban areas, ocean, and vegetation [Fig. 4(a)].

Fig. 4: Pseudocolor synthetic map of (a) Pauli decomposition, (b) training sample map, and (c) test sample map corresponding to the experimental dataset.

First, we selected five groups of 20×20 pixels from each category and composed the initial training sample set; each class thus had 2000 training samples. The sample block layout is shown in Fig. 4(b). We used the testing accuracy rate to compare the various methods more precisely. Owing to the lack of a real and reliable terrain map, we selected 112,500 pixels that could be specified as the labels of test samples. The image-marked test samples are shown in Fig. 4(c). In the DE algorithm, the choice of DE parameters can significantly influence the optimization performance. For simplicity, we set the scaling factor F=0.5, crossover rate CR=0.9, population size N=30, and maximum iteration number Mmax=100. All experiments were implemented in MATLAB on a Windows 10 64-bit system with an Intel(R) Core(TM) i7-4790 CPU at 3.60 GHz and 16 GB of RAM.
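The DE settings above correspond to the classic DE/rand/1/bin scheme. The sketch below applies those settings to a toy objective (recovering a known weight vector) rather than the paper's actual classification fitness, so the target vector and search box [0, 1]^dim are assumptions for illustration:

```python
import numpy as np

def de_optimize(fitness, dim, n_pop=30, f=0.5, cr=0.9, max_iter=100, seed=0):
    """DE/rand/1/bin sketch with the stated settings (F=0.5, CR=0.9, N=30,
    Mmax=100), minimizing `fitness` over [0, 1]^dim (e.g. classifier weights)."""
    rng = np.random.default_rng(seed)
    pop = rng.random((n_pop, dim))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(max_iter):
        for i in range(n_pop):
            idx = [j for j in range(n_pop) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), 0.0, 1.0)     # mutation
            cross = rng.random(dim) < cr                    # binomial crossover
            cross[rng.integers(dim)] = True                 # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial < fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Toy objective: recover a hypothetical known weight vector.
target = np.array([0.1, 0.6, 0.3])
best_w, best_f = de_optimize(lambda w: float(np.sum((w - target) ** 2)), dim=3)
```

In the paper's setting, the fitness would instead score the ensemble's validation accuracy for a candidate weight vector.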

Experiment 1

To achieve feature selection, we compared the effectiveness of different composite approaches combining polarimetric and Gabor features, using the classifier based on discriminant analysis. This classifier is a highly efficient algorithm for building image classifiers. We used the “linear” discriminant type in the experiment. We randomly selected 100 samples from the initial training sample set as the training sample set.

The composition schemes were: (a) Pauli features, (b) Pauli features and the first principal component feature of the Gabor feature reduced in dimension by PCA, (c) Pauli features and the third principal component feature of the Gabor feature reduced in dimension by PCA, (d) Pauli features and the second principal component feature of the Gabor feature reduced in dimension by PCA, (e) Pauli features and the fourth principal component feature of the Gabor feature reduced in dimension by PCA, and (f) Pauli features and the fifth principal component feature of the Gabor feature reduced in dimension by PCA. The classification accuracies using different features were evaluated, as shown in Fig. 5 and Table 3, where textural features change the classification accuracies on the three terrain areas.

Fig. 5: Classification accuracy maps obtained by the discriminant analysis classifier using different features when the Gabor feature was reduced to five dimensions by PCA: (a) Pauli features; (b) Pauli features and the first principal component feature of the Gabor feature reduced in dimension by PCA; (c) Pauli features and the third principal component feature of the Gabor feature reduced in dimension by PCA; (d) Pauli features and the second principal component feature of the Gabor feature reduced in dimension by PCA; (e) Pauli features and the fourth principal component feature of the Gabor feature reduced in dimension by PCA; (f) Pauli features and the fifth principal component feature of the Gabor feature reduced in dimension by PCA. (“Linear” specifies the discriminant type.)

Table 3: Classification accuracies of the discriminant analysis classifier for different compositions of features when the Gabor feature is reduced to different dimensions. [Accuracy (%); “linear” specifies the discriminant type.]

The single principal component feature of the Gabor feature reduced in dimension by PCA is important in distinguishing the different terrain pixels. To elucidate this importance, we considered different composition schemes of Pauli features and five principal component features of the Gabor feature reduced in dimension by PCA for each pixel. Figure 5 shows the terrain classification results using different composite approaches with Pauli features and Gabor features when the dimensionality of the Gabor feature is reduced to 5. The classification effects with different combinations of features when the Gabor feature is reduced in dimension are shown in Table 3.

Based on the data in Table 3, the distinguishing ability of each composite feature can be assessed. The table shows that the composition of Pauli features and the final principal component feature of the Gabor feature reduced in dimension by PCA has the best classification results (accuracy: 97.5867%). In addition, a regular pattern appears: the classification accuracy of this composite approach is the same regardless of the number of dimensions to which the Gabor feature is reduced.

Experiment 2

We compared the performances of eight classifiers, including the five basic classifiers, namely, ELM, KNN, discriminant analysis, and the two SVM classifiers, and the three ensemble classifiers, specifically AdaBoostM2, random forest, and the proposed method. In view of the previous experimental results, we adopted the combination of Pauli features and the fifth principal component feature of the Gabor feature reduced to five dimensions by PCA. Each pixel was represented by the four-dimensional feature vector, which was comprised of the three-dimensional Pauli feature vector and one-dimensional Gabor feature.

We randomly selected 300 samples from the initial training sample set as the training sample set. According to the experimental results (Table 4), k=5 was used, and the discriminant analysis classifier was designated as the linear discriminant type. The AdaBoostM2 algorithm used an ensemble of 400 trees with the default tree options, and random forest used n=500 trees. We applied fivefold cross-validation on the training set to tune the SVM parameters.

Table 4: Classification accuracies of the eight classifiers [accuracy (%)].

The classification maps of the eight classifying methods are shown in Fig. 6. To obtain an improved predictive accuracy measure, we compared these methods using the average of the five estimates for the ELM classifier, random forest classifier, and the proposed classification method. We employed the best estimate value for the KNN classifier, discriminant analysis classifier, and AdaBoostM2 algorithm when k=5; the discriminant analysis classifier was linear, and the AdaBoostM2 algorithm randomly selected an ensemble of 400 trees. Table 4 shows the precision values of the eight classifiers.

Fig. 6: Classification maps of eight classifiers: (a) KNN classifier, (b) discriminant analysis classifier, (c) sigmoid SVM classifier, (d) RBF SVM classifier, (e) ELM classifier, (f) AdaBoostM2, (g) random forest, and (h) proposed classification method.

Note: (1) k denotes the number of nearest neighbors. (2) m specifies the discriminant type: m=1, “linear;” m=2, “quadratic;” m=3, “diagQuadratic;” m=4, “diagLinear;” m=5, “pseudoQuadratic.” (3) n denotes the number of trees arbitrarily chosen for AdaBoostM2. (4) Accuracy (h times) denotes the accuracy of the first h experiments (h=1, 2, 3, 4, 5).

Figure 6 and Table 4 show that the eight classifiers can distinguish well the uniform regions corresponding to the primary scattering classes, such as the urban areas, vegetation, and ocean. The common limitation of these classifiers is that they cannot effectively distinguish vegetation regions from urban areas. In Table 4, the overall precision of the eight classifiers is compared using the training and testing areas (Fig. 4). Among the individual classifiers, the classification accuracies of the ELM and discriminant analysis classifiers are higher. Generally speaking, the proposed ensemble method outperforms the AdaBoostM2 algorithm and attains the best prediction accuracy (98.2305%) of all eight methods. Hence, by combining general machine learning and artificial intelligence classification methods, the terrain classification accuracy of the full PolSAR image can be improved to a certain extent.

In this paper, an evolutionary classification system for terrain classification of PolSAR imagery based on ensemble learning with polarimetric and texture features was proposed. For supervised terrain classification of PolSAR imagery, our classification approach is based on a weighted voting ensemble using the composite of Pauli features and the fifth principal component feature of the Gabor feature reduced in dimension by PCA. The terrain classification accuracy of PolSAR imagery heavily relies on the image features. Our approach thus leverages the texture features in addition to the original polarimetric features.

The Gabor transform method is a standard technique for extracting texture features from remote sensing images. To identify the discrepancies between neighboring pixels, we analyzed different Gabor transform features that are reduced to different dimensions by PCA in different composite approaches with polarimetric features. The most effective combination of features was selected for PolSAR imagery terrain classification.

The proposed method can effectively classify terrain in PolSAR imagery because the five base classifiers combined into the ensemble are distinct from each other and strongly complementary. Moreover, the weight of each base classifier is obtained automatically by a DE algorithm, which stops the computation on its own. This capability provides considerable practicability for PolSAR image processing.

Experimental results confirmed that our approach consistently outperformed the existing approaches. Nevertheless, our method is limited in that the terrain classification accuracy may decrease for very complex scenes. The basic reason for this limitation is that the classification method employs a simple selection method of training samples that may be insufficient for providing a reasonably accurate classification for some multiple scenes. Consequently, in future work, we intend to explore a more appropriate algorithm for selecting training samples that can more precisely depict multiple scenes of PolSAR imagery. In addition, we will examine the combination of other known advanced information types to obtain a more accurate classification result.

This work is partially supported by the National Science Foundation of China under Grant No. 61373063 and the Doctoral Foundation of Southwest University of Science and Technology under Grant No. 11zx711901. The author declares that there are no conflicts of interest with the publication of this paper.

1. Mott H., Remote Sensing with Polarimetric Radar, Wiley, New York (2007).
2. Massonnet D. and Souyris J.-C., “SAR polarimetry: towards the ultimate characterization of targets,” in Imaging with Synthetic Aperture Radar, pp. 229–272, CRC Press, Taylor & Francis Group, Boca Raton, Florida (2008).
3. Lee J.-S. and Pottier E., Polarimetric Radar Imaging: From Basics to Applications, CRC Press, Taylor & Francis Group, Boca Raton, Florida (2009).
4. Cloude S. R., Polarisation: Applications in Remote Sensing, Oxford University Press, Oxford (2009).
5. van Zyl J. J. and Kim Y., Synthetic Aperture Radar Polarimetry, Wiley, New York (2011).
6. Betbeder J. et al., “Contribution of multitemporal polarimetric synthetic aperture radar data for monitoring winter wheat and rapeseed crops,” J. Appl. Remote Sens. 10(2), 026020 (2016).
7. Ouchi K., “Recent trend and advance of synthetic aperture radar with selected topics,” Remote Sens. 5, 716–807 (2013).
8. McNairn H. et al., “The contribution of ALOS PALSAR multipolarization and polarimetric data to crop classification,” IEEE Trans. Geosci. Remote Sens. 47(12), 3981–3992 (2009).
9. Marino A., Cloude S. R., and Woodhouse I. H., “A polarimetric target detector using the Huynen fork,” IEEE Trans. Geosci. Remote Sens. 48(5), 2357–2366 (2010).
10. Antropov O., Rauste Y., and Hame T., “Volume scattering modeling in PolSAR decompositions: study of ALOS PALSAR data over boreal forest,” IEEE Trans. Geosci. Remote Sens. 49(10), 3838–3848 (2011).
11. Yamaguchi Y. et al., “Four-component scattering power decomposition with rotation of coherency matrix,” IEEE Trans. Geosci. Remote Sens. 49(6), 2251–2258 (2011).
12. Shi J. et al., “Unsupervised polarimetric synthetic aperture radar image classification based on sketch map and adaptive Markov random field,” J. Appl. Remote Sens. 10(2), 025008 (2016).
13. Li K. et al., “Polarimetric decomposition with RADARSAT-2 for rice mapping and monitoring,” Can. J. Remote Sens. 38(2), 169–179 (2012).
14. Sugimoto M., Ouchi K., and Nakamura Y., “Four-component scattering power decomposition algorithm with rotation of covariance matrix using ALOS-PALSAR polarimetric data,” Remote Sens. 4(12), 2199–2209 (2012).
15. Watanabe M. et al., “ALOS/PALSAR full polarimetric observations of the Iwate-Miyagi Nairiku earthquake of 2008,” Int. J. Remote Sens. 33(4), 1234–1245 (2012).
16. Yonezawa C., Watanabe M., and Saito G., “Polarimetric decomposition analysis of ALOS PALSAR observation data before and after a landslide event,” Remote Sens. 4(12), 2314–2328 (2012).
17. Kaplan L. M., “Extended fractal analysis for texture classification and segmentation,” IEEE Trans. Image Process. 8(11), 1572–1585 (1999).
18. Hsieh J.-W. et al., “Automatic traffic surveillance system for vehicle tracking and classification,” IEEE Trans. Intell. Transp. Syst. 7(2), 175–187 (2006).
19. Sanin A., Sanderson C., and Lovell B. C., “Shadow detection: a survey and comparative evaluation of recent methods,” Pattern Recognit. 45(4), 1684–1695 (2012).
20. Leone A. and Distante C., “Shadow detection for moving objects based on texture analysis,” Pattern Recognit. 40(4), 1222–1233 (2007).
21. Ojala T. and Pietikäinen M., “Unsupervised texture segmentation using feature distributions,” Pattern Recognit. 32(3), 477–486 (1999).
22. Clausi D. A. and Yue B., “Comparing co-occurrence probabilities and Markov random fields for texture analysis of SAR sea ice imagery,” IEEE Trans. Geosci. Remote Sens. 42(1), 215–228 (2004).
23. Paola J. D. and Schowengerdt R. A., “A detailed comparison of backpropagation neural network and maximum-likelihood classifiers for urban land use classification,” IEEE Trans. Geosci. Remote Sens. 33(4), 981–996 (1995).
24. Del Frate F. et al., “Use of neural networks for automatic classification from high-resolution images,” IEEE Trans. Geosci. Remote Sens. 45(4), 800–809 (2007).
25. Moustakidis S. et al., “SVM-based fuzzy decision trees for classification of high spatial resolution remote sensing images,” IEEE Trans. Geosci. Remote Sens. 50(1), 149–169 (2012).
26. Deng H. and Clausi D. A., “Unsupervised segmentation of synthetic aperture radar sea ice imagery using a novel Markov random field model,” IEEE Trans. Geosci. Remote Sens. 43(3), 528–538 (2005).
27. Melgani F. and Bruzzone L., “Classification of hyperspectral remote sensing images with support vector machines,” IEEE Trans. Geosci. Remote Sens. 42(8), 1778–1790 (2004).
28. Waske B. and Benediktsson J. A., “Fusion of support vector machines for classification of multisensor data,” IEEE Trans. Geosci. Remote Sens. 45(12), 3858–3866 (2007).
29. Muñoz-Marí J. et al., “Semisupervised one-class support vector machines for classification of remote sensing data,” IEEE Trans. Geosci. Remote Sens. 48(8), 3188–3197 (2010).
Cover  T. M., and Hart  P. E., “Nearest neighbor pattern classification,” IEEE Trans. Inf. Theory. 13, (1 ), 21 –27 (1967). 0018-9448 CrossRef
Li  T., , Zhu  S., and Ogihara  M., “Using discriminant analysis for multi-class classification: an experimental investigation,” Knowl. Inf. Syst.. 10, (4 ), 453 –472 (2006).CrossRef
Fu  J. C.  et al., “Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging,” Comput. Med. Imaging Graph.. 34, (4 ), 308 –320 (2010).CrossRef
Hassanien  A. E., and Kim  T., “Breast cancer MRI diagnosis approach using support vector machine and pulse coupled neural network,” J. Appl. Logic. 10, (4 ), 277 –284 (2012).CrossRef
Huang  G.-B., , Zhu  Q.-Y., and Siew  C.-K., “Extreme learning machine: theory and applications,” Neurocomputing. 70, (1–3 ), 489 –501 (2006).CrossRef
Daugman  J. G., “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” J. Opt. Soc. Am. A. 2, (7 ), 1160 –1169 (1985). 0740-3232 CrossRef
Malik  J., and Perona  P., “Preattentive texture discrimination with early vision mechanisms,” J. Opt. Soc. Am. A. 7, (5 ), 923 –932 (1990). 0740-3232 CrossRef
Kimura  H., “Calibration of polarimetric PALSAR imagery affected by Faraday rotation using polarization orientation,” IEEE Trans. Geosci. Remote Sens.. 47, (12 ), 3943 –3950 (2009). 0196-2892 CrossRef
Krogh  A., , Vedelsby  J., “Neural network ensembles, cross validation, and active learning,” in Advances in Neural Information Processing Systems. , , Tesauro  G., , Touretzky  D. S., and Leen  T. K., Eds., pp. 231 –238,  MIT Press ,  Cambridge, Massachusetts  (1995).
Zhang  Y.  et al., “A weighted voting classifier based on differential evolution,” Abstr. Appl. Anal.. 28, , 36 –42 (2014).CrossRef

Chuanbo Huang is an associate professor of computer engineering at Southwest University of Science and Technology, Mianyang, China. He received his PhD in control science and engineering from Nanjing University of Science and Technology, Nanjing, China, in 2011. His current research interests include image processing, image modeling, segmentation, and tissue characterization, with a particular interest in applications to remote sensing.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.

Citation

Chuanbo Huang
"Terrain classification of polarimetric synthetic aperture radar imagery based on polarimetric features and ensemble learning," J. Appl. Remote Sens. 11(2), 026002 (Apr 03, 2017); http://dx.doi.org/10.1117/1.JRS.11.026002


Figures

Fig. 1: Original grayscale image, obtained from the red/green/blue (RGB) composite of Pauli decomposition, and five principal component images: (a) original grayscale image; (b) first principal component image; (c) third principal component image; (d) second principal component image; (e) fourth principal component image; and (f) fifth principal component image.

Fig. 6: Classification maps of eight classifiers: (a) KNN classifier, (b) discriminant analysis classifier, (c) sigmoid SVM classifier, (d) RBF SVM classifier, (e) ELM classifier, (f) AdaBoostM2, (g) random forest, and (h) the proposed classification method.

Fig. 2: Pauli decomposition map and its pseudocolor composite image: (a) S_HH + S_VV, (b) S_HH − S_VV, (c) 2S_HV, and (d) pseudocolor composite.
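The three Pauli channels follow directly from the per-pixel scattering matrix. A minimal sketch (not the paper's code; `s_hh`, `s_hv`, and `s_vv` are assumed to be complex NumPy arrays of the calibrated scattering-matrix channels):

```python
import numpy as np

def pauli_components(s_hh, s_hv, s_vv):
    """Per-pixel Pauli decomposition of a PolSAR scattering matrix.

    Returns the three intensities conventionally mapped to RGB:
    |S_HH + S_VV| (surface/odd-bounce), |S_HH - S_VV| (double-bounce),
    and 2|S_HV| (volume), with the usual 1/sqrt(2) normalization.
    """
    a = np.abs(s_hh + s_vv) / np.sqrt(2)  # surface / odd-bounce scattering
    b = np.abs(s_hh - s_vv) / np.sqrt(2)  # double-bounce scattering
    c = np.sqrt(2) * np.abs(s_hv)         # volume (cross-pol) scattering
    return a, b, c
```

Stacking `(b, c, a)` as RGB yields the familiar pseudocolor composite of panel (d).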

Fig. 3: Proposed classification scheme with an ensemble of ELM, KNN, discriminant analysis, RBF SVM, and sigmoid SVM classifiers.
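The ensemble fuses the five base classifiers by weighted voting, with the weights tuned by differential evolution (Table 2, following Zhang et al.). The sketch below is an illustrative reconstruction, not the paper's implementation: `probas` stands for the stacked per-classifier class-probability outputs, and the DE hyperparameters (rand/1/bin, population size, F, CR) are assumed defaults:

```python
import numpy as np

def weighted_vote(probas, weights):
    """Fuse per-classifier probabilities by weighted voting.

    probas: shape (n_classifiers, n_samples, n_classes).
    weights: shape (n_classifiers,), non-negative.
    Returns predicted class indices, shape (n_samples,).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = np.tensordot(w, probas, axes=1)  # (n_samples, n_classes)
    return fused.argmax(axis=1)

def de_optimize_weights(probas, y_true, pop=20, gens=50, F=0.5, CR=0.9, seed=0):
    """Tiny rand/1/bin differential evolution over the voting weights,
    maximizing validation accuracy (elitist one-to-one selection)."""
    rng = np.random.default_rng(seed)
    n = probas.shape[0]
    P = rng.random((pop, n)) + 1e-6  # positive initial weights

    def fitness(w):
        return np.mean(weighted_vote(probas, w) == y_true)

    fit = np.array([fitness(w) for w in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 1e-6, None)  # mutation
            cross = rng.random(n) < CR                      # binomial crossover
            trial = np.where(cross, mutant, P[i])
            f = fitness(trial)
            if f >= fit[i]:                                 # selection
                P[i], fit[i] = trial, f
    return P[fit.argmax()]
```

In practice `probas` would come from the five trained base classifiers on a held-out validation set, and the learned weights would then be fixed for testing.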

Fig. 4: Pseudocolor synthetic map of (a) Pauli decomposition, (b) training sample map, and (c) test sample map corresponding to the experimental dataset.

Fig. 5: Classification accuracy maps obtained by the discriminant analysis classifier using different features when the Gabor feature was reduced to five dimensions by PCA: (a) Pauli features only; (b)–(f) Pauli features combined with, respectively, the first, third, second, fourth, and fifth principal components of the PCA-reduced Gabor feature. ("Linear" specifies the discriminant type.)
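The Gabor-plus-PCA texture features compared in Fig. 5 can be sketched as follows. This is a hypothetical reconstruction, assuming a real-valued Gabor filter bank with an isotropic Gaussian envelope and SVD-based PCA; the paper's actual filter parameters are not reproduced here:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(ksize, sigma, theta, lam):
    """Real Gabor kernel: isotropic Gaussian envelope times an oriented
    cosine carrier of wavelength `lam` at angle `theta`."""
    ax = np.arange(ksize) - ksize // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated carrier axis
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam)

def gabor_pca(img, thetas, lams, ksize=9, sigma=2.0, keep=5):
    """Stack |Gabor| responses per pixel, then project onto `keep`
    principal components via SVD of the centered feature matrix."""
    feats = np.stack(
        [np.abs(convolve(img, gabor_kernel(ksize, sigma, th, lam), mode="reflect"))
         for th in thetas for lam in lams],
        axis=-1,
    )                                           # (H, W, n_filters)
    X = feats.reshape(-1, feats.shape[-1])
    X = X - X.mean(axis=0)                      # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:keep].T                       # (H*W, keep)
    return pcs.reshape(img.shape + (keep,))
```

Each of the `keep` output bands corresponds to one principal component; panels (b)–(f) of Fig. 5 append one such band at a time to the Pauli features.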

Tables

Table 1: Algorithmic steps for ELM.
Table 2: Algorithmic steps for DE optimization.
Table 3: Classification accuracies of the discriminant analysis classifier for different compositions of features when the Gabor feature is reduced to different dimensions. [Accuracy (%); "linear" specifies the discriminant type.]
Table 4: Classification accuracies of the eight classifiers [accuracy (%)].
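The ELM steps of Table 1 (random hidden layer, output weights solved in closed form via the Moore-Penrose pseudoinverse, after Huang et al.) amount to only a few lines. A minimal sketch, assuming a tanh activation and one-hot class targets:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Train a minimal extreme learning machine.

    X: (n_samples, n_features); y: integer class labels.
    Hidden weights are random and never trained; only the output
    weights `beta` are fit, by least squares on the hidden features.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer output
    T = np.eye(int(y.max()) + 1)[y]                  # one-hot targets
    beta = np.linalg.pinv(H) @ T                     # closed-form solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta).argmax(axis=1)
```

Because training reduces to one pseudoinverse, ELM is far cheaper to fit than iteratively trained networks, which is why it is a practical base learner in the ensemble.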

References

Mott H., Remote Sensing with Polarimetric Radar, Wiley, New York (2007).
Massonnet D. and Souyris J.-C., "SAR polarimetry: towards the ultimate characterization of targets," in Imaging with Synthetic Aperture Radar, pp. 229–272, CRC Press, Taylor & Francis Group, Boca Raton, Florida (2008).
Lee J.-S. and Pottier E., Polarimetric Radar Imaging: From Basics to Applications, CRC Press, Taylor & Francis Group, Boca Raton, Florida (2009).
Cloude S. R., Polarisation: Applications in Remote Sensing, Oxford University Press, Oxford (2009).
van Zyl J. J. and Kim Y., Synthetic Aperture Radar Polarimetry, Wiley, New York (2011).
Betbeder J. et al., "Contribution of multitemporal polarimetric synthetic aperture radar data for monitoring winter wheat and rapeseed crops," J. Appl. Remote Sens. 10(2), 026020 (2016).
Ouchi K., "Recent trend and advance of synthetic aperture radar with selected topics," Remote Sens. 5, 716–807 (2013).
McNairn H. et al., "The contribution of ALOS PALSAR multipolarization and polarimetric data to crop classification," IEEE Trans. Geosci. Remote Sens. 47(12), 3981–3992 (2009).
Marino A., Cloude S. R., and Woodhouse I. H., "A polarimetric target detector using the Huynen fork," IEEE Trans. Geosci. Remote Sens. 48(5), 2357–2366 (2010).
Antropov O., Rauste Y., and Hame T., "Volume scattering modeling in PolSAR decompositions: study of ALOS PALSAR data over boreal forest," IEEE Trans. Geosci. Remote Sens. 49(10), 3838–3848 (2011).
Yamaguchi Y. et al., "Four-component scattering power decomposition with rotation of coherency matrix," IEEE Trans. Geosci. Remote Sens. 49(6), 2251–2258 (2011).
Shi J. et al., "Unsupervised polarimetric synthetic aperture radar image classification based on sketch map and adaptive Markov random field," J. Appl. Remote Sens. 10(2), 025008 (2016).
Li K. et al., "Polarimetric decomposition with RADARSAT-2 for rice mapping and monitoring," Can. J. Remote Sens. 38(2), 169–179 (2012).
Sugimoto M., Ouchi K., and Nakamura Y., "Four-component scattering power decomposition algorithm with rotation of covariance matrix using ALOS-PALSAR polarimetric data," Remote Sens. 4(12), 2199–2209 (2012).
Watanabe M. et al., "ALOS/PALSAR full polarimetric observations of the Iwate-Miyagi Nairiku earthquake of 2008," Int. J. Remote Sens. 33(4), 1234–1245 (2012).
Yonezawa C., Watanabe M., and Saito G., "Polarimetric decomposition analysis of ALOS PALSAR observation data before and after a landslide event," Remote Sens. 4(12), 2314–2328 (2012).
Kaplan L. M., "Extended fractal analysis for texture classification and segmentation," IEEE Trans. Image Process. 8(11), 1572–1585 (1999).
Hsieh J.-W. et al., "Automatic traffic surveillance system for vehicle tracking and classification," IEEE Trans. Intell. Transp. Syst. 7(2), 175–187 (2006).
Sanin A., Sanderson C., and Lovell B. C., "Shadow detection: a survey and comparative evaluation of recent methods," Pattern Recognit. 45(4), 1684–1695 (2012).
Leone A. and Distante C., "Shadow detection for moving objects based on texture analysis," Pattern Recognit. 40(4), 1222–1233 (2007).
Ojala T. and Pietikäinen M., "Unsupervised texture segmentation using feature distributions," Pattern Recognit. 32(3), 477–486 (1999).
Clausi D. A. and Yue B., "Comparing co-occurrence probabilities and Markov random fields for texture analysis of SAR sea ice imagery," IEEE Trans. Geosci. Remote Sens. 42(1), 215–228 (2004).
Paola J. D. and Schowengerdt R. A., "A detailed comparison of backpropagation neural network and maximum-likelihood classifiers for urban land use classification," IEEE Trans. Geosci. Remote Sens. 33(4), 981–996 (1995).
Del Frate F. et al., "Use of neural networks for automatic classification from high-resolution images," IEEE Trans. Geosci. Remote Sens. 45(4), 800–809 (2007).
Moustakidis S. et al., "SVM-based fuzzy decision trees for classification of high spatial resolution remote sensing images," IEEE Trans. Geosci. Remote Sens. 50(1), 149–169 (2012).
Deng H. and Clausi D. A., "Unsupervised segmentation of synthetic aperture radar sea ice imagery using a novel Markov random field model," IEEE Trans. Geosci. Remote Sens. 43(3), 528–538 (2005).
Melgani F. and Bruzzone L., "Classification of hyperspectral remote sensing images with support vector machines," IEEE Trans. Geosci. Remote Sens. 42(8), 1778–1790 (2004).
Waske B. and Benediktsson J. A., "Fusion of support vector machines for classification of multisensor data," IEEE Trans. Geosci. Remote Sens. 45(12), 3858–3866 (2007).
Muñoz-Marí J. et al., "Semisupervised one-class support vector machines for classification of remote sensing data," IEEE Trans. Geosci. Remote Sens. 48(8), 3188–3197 (2010).
Cover T. M. and Hart P. E., "Nearest neighbor pattern classification," IEEE Trans. Inf. Theory 13(1), 21–27 (1967).
Li T., Zhu S., and Ogihara M., "Using discriminant analysis for multi-class classification: an experimental investigation," Knowl. Inf. Syst. 10(4), 453–472 (2006).
Fu J. C. et al., "Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging," Comput. Med. Imaging Graph. 34(4), 308–320 (2010).
Hassanien A. E. and Kim T., "Breast cancer MRI diagnosis approach using support vector machine and pulse coupled neural network," J. Appl. Logic 10(4), 277–284 (2012).
Huang G.-B., Zhu Q.-Y., and Siew C.-K., "Extreme learning machine: theory and applications," Neurocomputing 70(1–3), 489–501 (2006).
Daugman J. G., "Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters," J. Opt. Soc. Am. A 2(7), 1160–1169 (1985).
Malik J. and Perona P., "Preattentive texture discrimination with early vision mechanisms," J. Opt. Soc. Am. A 7(5), 923–932 (1990).
Kimura H., "Calibration of polarimetric PALSAR imagery affected by Faraday rotation using polarization orientation," IEEE Trans. Geosci. Remote Sens. 47(12), 3943–3950 (2009).
Krogh A. and Vedelsby J., "Neural network ensembles, cross validation, and active learning," in Advances in Neural Information Processing Systems, G. Tesauro, D. S. Touretzky, and T. K. Leen, Eds., pp. 231–238, MIT Press, Cambridge, Massachusetts (1995).
Zhang Y. et al., "A weighted voting classifier based on differential evolution," Abstr. Appl. Anal. 28, 36–42 (2014).
