Research Papers

Improved color texture descriptors for remote sensing image retrieval

Author Affiliations
Zhenfeng Shao, Weixun Zhou, Lei Zhang, Jihu Hou

Wuhan University, State Key Laboratory for Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan 430079, China

J. Appl. Remote Sens. 8(1), 083584 (Jul 24, 2014). doi:10.1117/1.JRS.8.083584
History: Received January 16, 2014; Revised May 26, 2014; Accepted June 27, 2014

Open Access

Abstract.  Texture features are widely used in the image retrieval literature. However, conventional texture features are extracted from grayscale images without taking color information into consideration. We present two improved texture descriptors, named color Gabor wavelet texture (CGWT) and color Gabor opponent texture (CGOT), for remote sensing image retrieval. The former consists of unichrome features computed from color channels independently and opponent features computed across different color channels at different scales, while the latter consists of Gabor texture features and the opponent features mentioned above. The two representations incorporate discriminative information among color bands and thus describe well remote sensing images that contain multiple objects. Experimental results demonstrate that CGWT yields better performance than other state-of-the-art texture features, and that CGOT not only improves the retrieval results of image classes that perform unsatisfactorily under the CGWT representation, but also further increases the average precision over all queried images. In addition, a similarity measure for the proposed CGOT representation is defined to give a convincing evaluation.


Texture features have shown significant advantages in image classification,1 image segmentation,2 and content-based image retrieval (CBIR),3,4 among other fields. In particular, texture features are low-level features that have been widely used in the CBIR community because they are largely independent of image color and intensity.

Some popular textural descriptors, such as the gray level co-occurrence matrix (GLCM),5 Gabor filter,6 wavelet transform,7 and local binary pattern (LBP),8 have been extensively used in the CBIR community. Unfortunately, these conventional texture features are extracted directly from grayscale images and leave out the discriminative information carried by different color channels, which can be regarded as complementary information for distinguishing texture patterns. With the intention of fully exploiting this discriminative information to improve the retrieval of remote sensing images, many studies have been conducted on this topic.

Strategies of such research can be roughly divided into two categories: (1) combination of color and texture features and (2) texture features integrating opponent process theory. Some works based on the former strategy are as follows. Lin et al.9 proposed a smart CBIR system based on color and texture features. Chun et al.10 presented a CBIR method based on a combination of color and texture features extracted in the multiresolution wavelet domain. Liapis and Tziritas11 illustrated a new image retrieval mechanism based on a combination of texture and color features using discrete wavelet frames analysis and one-dimensional histograms of CIELab chromaticity coordinates, respectively. This strategy has also been adopted as one of the important retrieval mechanisms in some famous image retrieval systems, such as query by image content (QBIC).12,13 Other similar works14–17 can be found in this line of research. Although these works take both discriminative information and texture features into consideration, problems such as computational complexity and the definition of weight parameters for combined features remain open questions. In the 1950s, Hurvich and Jameson18 proposed an opponent process theory of human color vision, and texture features integrating opponent process theory have drawn substantial attention in recent years. Jain and Healey19 proposed a multiscale representation based on the opponent process theory for texture recognition, and this method was later applied to hyperspectral image texture recognition.20 In one recent work by Choi et al.,21 two features, namely color local Gabor wavelets and color LBP, were proposed for face recognition; they share similar principles and can be treated as an extended application of the theory proposed in Ref. 19. The opponent process theory provides complementary information among color channels and generates a simple but effective feature representation.

Motivated by the aforementioned applications of opponent process theory, in this study we propose a descriptor named color Gabor wavelet texture (CGWT) for remote sensing image retrieval. Meanwhile, a color Gabor opponent texture (CGOT) descriptor based on Gabor wavelets is also presented to improve the retrieval results of image classes that show inferior precision under the CGWT representation.

The rest of this study is organized as follows. Section 2 shows the framework of remote sensing image retrieval based on the proposed descriptors and illustrates the details of the proposed features, parameters used, and similarity measure defined for CGOT descriptor. In Sec. 3, comparative experimental results and discussions are presented. Conclusions and future work constitute Sec. 4.

Framework of Remote Sensing Image Retrieval Based on the Proposed Descriptors

Generally, an image retrieval system contains an image database, a feature database, and several important functional modules, such as feature extraction, an indexing mechanism, and a similarity measure. Figure 1 illustrates the framework of the improved color texture descriptors for remote sensing image retrieval in this study, which mainly contains two parts: feature extraction and image retrieval.

Figure 1: Architecture of remote sensing image retrieval based on the proposed descriptors.

The feature extraction part shows the extraction procedure of the proposed features. Given an RGB remote sensing image, the three color channel images R, G, and B are obtained first. Then the unichrome feature corresponding to each color channel is extracted with a Gabor filter of orientation u and scale v. Finally, the R, G, and B unichrome features are combined to form the unichrome feature. For the opponent feature, two Gabor filters with orientation u and scales v and v' are applied to two color channel images, respectively. As with the unichrome feature, the RG, RB, and GB opponent features are combined to form the opponent features.

The image retrieval part illustrates a simple procedure of remote sensing image retrieval. All images and features are stored in the image database and feature database, respectively, and images are associated with their corresponding features through an indexing mechanism. Given a query image, distances between the query image and the images in the database are calculated using a predefined similarity measure, and the first k most similar images are returned in order of decreasing similarity.

Feature extraction is an important and indispensable part of an image retrieval system. Section 2.2 details the extraction of the proposed representations. In addition, as the most important procedure of the image retrieval part, the similarity measures used in this study are discussed in Sec. 2.3.

Feature Extraction

In our methodology, all images are represented in the RGB color space for convenience. Both CGWT and CGOT features are based on the Gabor filter, given by

\psi_{u,v}(z) = \frac{\|k_{u,v}\|^2}{\sigma^2} \exp\left(-\frac{\|k_{u,v}\|^2 \|z\|^2}{2\sigma^2}\right) \left[e^{i k_{u,v} \cdot z} - e^{-\sigma^2/2}\right], (1)

where u and v are the orientation and scale parameters of the Gabor kernels, respectively, z = (x, y), \|\cdot\| denotes the norm operator, and k_{u,v} is defined as

k_{u,v} = k_v e^{i\phi_u}, (2)

where k_v = k_{max}/f^v and \phi_u = \pi u/8. k_{max} is the maximum frequency, f is the spacing factor, and \sigma is the standard deviation.

Note that the Gabor filter has many formulations; Eq. (1), taken from Ref. 22, is chosen for its conciseness and the convenience of setting parameters such as orientation and scale in our algorithm.
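As a concrete illustration, the kernel of Eqs. (1) and (2) can be sketched in NumPy as follows. The 31 x 31 window is illustrative only (the paper uses a 128 x 128 window), the default parameter values (sigma = 2*pi, k_max = pi/2, f = sqrt(2)) are the common choices for this kernel family, and the function name is ours.

```python
import numpy as np

def gabor_kernel(u, v, size=31, sigma=2 * np.pi, kmax=np.pi / 2, f=np.sqrt(2)):
    """Gabor kernel of Eq. (1) with wave vector k_{u,v} of Eq. (2).

    u : orientation index (phi_u = pi * u / 8)
    v : scale index (k_v = kmax / f**v)
    """
    k_v = kmax / f ** v
    phi_u = np.pi * u / 8.0
    kx, ky = k_v * np.cos(phi_u), k_v * np.sin(phi_u)  # k_{u,v} as a 2-D vector
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    sq = x ** 2 + y ** 2                                # ||z||^2
    norm_k2 = k_v ** 2                                  # ||k_{u,v}||^2
    envelope = (norm_k2 / sigma ** 2) * np.exp(-norm_k2 * sq / (2 * sigma ** 2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

kern = gabor_kernel(u=0, v=0)
print(kern.shape, kern.dtype)  # (31, 31) complex128
```

The DC-compensation term exp(-sigma^2/2) makes the kernel approximately zero-mean, so flat image regions produce near-zero responses.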

Extraction of CGWT descriptor

As illustrated in Fig. 1, the CGWT representation consists of two parts, the unichrome feature and the opponent feature. The terms "unichrome feature" and "opponent feature" follow the definitions in Ref. 19, where detailed information about the two features can be found. Let R, G, and B be the three grayscale images of the corresponding color channels of an RGB image. The convolution results of the three grayscale images with the Gabor kernel \psi_{u,v} are denoted as follows:

R_{u,v}(z) = R(z) * \psi_{u,v}(z),  G_{u,v}(z) = G(z) * \psi_{u,v}(z),  B_{u,v}(z) = B(z) * \psi_{u,v}(z), (3)

where z has the same meaning as in Eq. (1) and * denotes the convolution operator. R_{u,v}(z), G_{u,v}(z), and B_{u,v}(z) are the convolution results of the three grayscale images at orientation u and scale v. The unichrome feature representation of a color image is then

uni = [\|R_{u,v}(z)\|_2, \|G_{u,v}(z)\|_2, \|B_{u,v}(z)\|_2], (4)

where the three components are the R, G, and B unichrome features, respectively. It is clear that unichrome features are values extracted from a single image band.

The difference of the normalized R_{u,v}(z), G_{u,v}(z), and B_{u,v}(z) at adjacent scales is defined by

RG_{u,v,v'}(z) = R_{u,v}(z)/\|R_{u,v}(z)\|_2 - G_{u,v'}(z)/\|G_{u,v'}(z)\|_2,
RB_{u,v,v'}(z) = R_{u,v}(z)/\|R_{u,v}(z)\|_2 - B_{u,v'}(z)/\|B_{u,v'}(z)\|_2,
GB_{u,v,v'}(z) = G_{u,v}(z)/\|G_{u,v}(z)\|_2 - B_{u,v'}(z)/\|B_{u,v'}(z)\|_2, (5)

where u denotes the orientation and v and v' the scales of the Gabor filters used. Note that, following the Gabor kernels in Eq. (1), we choose v and v' as adjacent scales, which means they must satisfy the restriction |v - v'| <= 1. The opponent features can then be defined by

opp = [\|RG_{u,v,v'}(z)\|_2, \|RB_{u,v,v'}(z)\|_2, \|GB_{u,v,v'}(z)\|_2], (6)

where the three components are the RG, RB, and GB opponent features, respectively. It is clear that opponent features are values extracted from the difference of two image bands.

According to Eqs. (5) and (6), we can obtain the three identities in Eq. (7). During the feature extraction procedure, dimension and efficiency are two factors that need to be considered, which the works by Jain and Healey19 and Choi et al.21 do not take into account. In our study, we therefore choose only three of the six possible opponent pairs, exploiting the symmetries in Eq. (7), so as to decrease the feature dimension and increase efficiency. Finally, the CGWT representation of an image is given by Eq. (8):

\|RG_{u,v,v'}(z)\|_2 = \|GR_{u,v',v}(z)\|_2,
\|RB_{u,v,v'}(z)\|_2 = \|BR_{u,v',v}(z)\|_2,
\|GB_{u,v,v'}(z)\|_2 = \|BG_{u,v',v}(z)\|_2, (7)

CGWT = {uni, opp}. (8)
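A minimal NumPy sketch of the CGWT pipeline of Eqs. (3)-(8) follows. It uses circular FFT convolution as a simple stand-in for the linear convolution of Eq. (3), and deterministic random complex kernels in place of the Gabor kernels of Eq. (1); sizes and names are illustrative assumptions.

```python
import numpy as np

def filter_band(band, kernel):
    """Magnitude of a band filtered with a complex kernel.

    Circular convolution via FFT stands in for the linear convolution
    of Eq. (3)."""
    H, W = band.shape
    K = np.fft.fft2(kernel, s=(H, W))
    return np.abs(np.fft.ifft2(np.fft.fft2(band) * K))

def cgwt_features(rgb, kernels, scale_pairs):
    """CGWT = {uni, opp} of Eq. (8).

    rgb         : (H, W, 3) float array
    kernels     : dict mapping (u, v) -> complex kernel
    scale_pairs : scale pairs (v, v') with |v - v'| <= 1
    """
    resp = {(c, u, v): filter_band(rgb[..., c], k)
            for (u, v), k in kernels.items() for c in range(3)}
    orientations = sorted({u for (u, v) in kernels})

    # unichrome features (Eq. 4): L2 norm of each filtered band
    uni = [np.linalg.norm(resp[(c, u, v)])
           for (u, v) in kernels for c in range(3)]

    # opponent features (Eqs. 5-6): norms of normalized cross-band
    # differences, keeping only RG, RB, GB thanks to Eq. (7)
    opp = []
    for u in orientations:
        for (v, vp) in scale_pairs:
            for a, b in [(0, 1), (0, 2), (1, 2)]:
                ra, rb = resp[(a, u, v)], resp[(b, u, vp)]
                opp.append(np.linalg.norm(ra / np.linalg.norm(ra)
                                          - rb / np.linalg.norm(rb)))
    return np.array(uni + opp)

# toy run: 2 orientations x 2 scales with stand-in random complex kernels
rng = np.random.default_rng(0)
ks = {(u, v): rng.standard_normal((7, 7)) + 1j * rng.standard_normal((7, 7))
      for u in range(2) for v in range(2)}
pairs = [(0, 0), (1, 1), (0, 1), (1, 0)]
feat = cgwt_features(rng.random((32, 32, 3)), ks, pairs)
print(feat.shape)  # (36,)
```

With the paper's full configuration (5 scales, 8 orientations, 13 scale pairs) the same function would yield the 432-dimensional CGWT vector.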

Extraction of CGOT descriptor

The CGOT representation combines the Gabor texture6 and the opponent feature, which substantially decreases the feature dimension compared with the CGWT representation. Given a grayscale image I, the convolution of I with the Gabor kernel \psi_{u,v} of orientation u and scale v is given by

g_{u,v}(z) = I(z) * \psi_{u,v}(z). (9)

The mean \mu_{u,v} and standard deviation \sigma_{u,v} of the transform coefficients are defined by

\mu_{u,v} = \iint |g_{u,v}(x,y)| \, dx \, dy,  \sigma_{u,v} = \sqrt{\iint \left(|g_{u,v}(x,y)| - \mu_{u,v}\right)^2 dx \, dy}. (10)

The Gabor texture feature composed of \mu_{u,v} and \sigma_{u,v} is denoted T = {\mu_{u,v}, \sigma_{u,v}}. The CGOT representation of an image is then

CGOT = {\mu_{u,v}, \sigma_{u,v}, opp}. (11)
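The Gabor texture statistics of Eq. (10), i.e., the texture part of CGOT in Eq. (11), can be sketched as follows, again with circular FFT convolution and stand-in random complex kernels as illustrative assumptions; the full CGOT vector is T concatenated with the opponent features opp.

```python
import numpy as np

def gabor_texture(image, kernels):
    """Gabor texture T = {mu_{u,v}, sigma_{u,v}} of Eq. (10): mean and
    standard deviation of |g_{u,v}| over the image, with circular FFT
    convolution standing in for Eq. (9)."""
    H, W = image.shape
    F = np.fft.fft2(image)
    feats = []
    for k in kernels:
        mag = np.abs(np.fft.ifft2(F * np.fft.fft2(k, s=(H, W))))
        feats += [mag.mean(), mag.std()]    # mu_{u,v}, sigma_{u,v}
    return np.array(feats)

rng = np.random.default_rng(1)
kernels = [rng.standard_normal((7, 7)) + 1j * rng.standard_normal((7, 7))
           for _ in range(3)]
T = gabor_texture(rng.random((32, 32)), kernels)
print(T.shape)  # (6,)
```

With the paper's 40 Gabor kernels (5 scales x 8 orientations) this yields the 80-dimensional Gabor texture part of CGOT.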

Extraction of comparative texture features

Some widely used traditional texture features, namely wavelet texture, LBP, and GLCM, are introduced as comparative methods to give a quantitative analysis. Before extracting these features, the color images are converted into intensity images using gray = 0.299r + 0.587g + 0.114b, where r, g, and b denote the red, green, and blue channels, respectively. Details of these comparative methods follow.

The wavelet transform plays an important role in texture analysis. Let I be an original image; the extraction procedure is as follows. First, the "haar" wavelet is used to construct two decomposition filters, one low-pass and one high-pass. Then a 2-level two-dimensional wavelet decomposition is applied to I using these filters, yielding six subband images. Note that the decomposition level is an important parameter, and the size of the smallest subimage should not be less than 16 x 16.7 Finally, the energy of each subband image is calculated as

E = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} |x(m,n)|, (12)

where x(m,n) is a subband image, M x N is the size of the subband, 1 <= m <= M, and 1 <= n <= N. The wavelet texture feature of image I is then defined by the mean and standard deviation of the energy of each subband image, T = {\mu_1, \sigma_1, \mu_2, \sigma_2, ..., \mu_6, \sigma_6}.
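Under the assumption that the decomposition keeps the three detail subbands at each of the two levels, the subband energies of Eq. (12) can be sketched with a hand-rolled Haar transform (the helper names are ours):

```python
import numpy as np

def haar_dwt2(a):
    """One level of the 2-D Haar transform: approximation + three details."""
    a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2]   # crop to even size
    lo = (a[0::2] + a[1::2]) / 2.0                      # row low-pass
    hi = (a[0::2] - a[1::2]) / 2.0                      # row high-pass
    ca = (lo[:, 0::2] + lo[:, 1::2]) / 2.0              # approximation
    ch = (lo[:, 0::2] - lo[:, 1::2]) / 2.0              # horizontal detail
    cv = (hi[:, 0::2] + hi[:, 1::2]) / 2.0              # vertical detail
    cd = (hi[:, 0::2] - hi[:, 1::2]) / 2.0              # diagonal detail
    return ca, (ch, cv, cd)

def wavelet_energies(image, levels=2):
    """Energy (Eq. 12) of each detail subband of a Haar decomposition."""
    energies, approx = [], np.asarray(image, dtype=float)
    for _ in range(levels):
        approx, details = haar_dwt2(approx)
        energies += [np.abs(d).mean() for d in details]
    return energies

e = wavelet_energies(np.ones((64, 64)))
print(len(e), max(e))  # 6 0.0 -- a flat image has no detail energy
```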

LBP describes the local structure of image texture by calculating the differences between each image pixel and its neighboring pixels. Ojala et al.8 improved the original LBP operator and developed a generalized grayscale and rotation invariant operator LBP_{P,R}^{riu2}, which detects "uniform" patterns and is defined by

LBP_{P,R}^{riu2} = \begin{cases} \sum_{p=0}^{P-1} s(g_p - g_c) & U(LBP_{P,R}) \le 2 \\ P + 1 & U(LBP_{P,R}) > 2, \end{cases} (13)

where U(LBP_{P,R}) = |s(g_{P-1} - g_c) - s(g_0 - g_c)| + \sum_{p=1}^{P-1} |s(g_p - g_c) - s(g_{p-1} - g_c)|, and s(x) is defined by

s(x) = \begin{cases} 1 & x \ge 0 \\ 0 & x < 0. \end{cases} (14)
R is the radius of the circularly symmetric neighbor set and P is the number of equally spaced pixels on the circle. g_c is the gray value of the center pixel and g_p (p = 0, 1, ..., P-1) are the gray values of the neighbor pixels on the circle. U is a uniformity measure corresponding to the number of spatial transitions in the pattern, and riu2 stands for rotation invariant "uniform" patterns having a U value of at most 2.

In our study, an 8-pixel circular neighborhood of radius 1, i.e., the LBP_{8,1}^{riu2} operator, is used, and a 59-bin grayscale and rotation invariant LBP histogram is adopted.
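The operator of Eqs. (13) and (14) can be sketched as follows; as a simplification of the operator in Ref. 8, neighbor positions are rounded to the 8-connected pixel grid rather than bilinearly interpolated, and the function name is ours.

```python
import numpy as np

def lbp_riu2(img, P=8, R=1):
    """LBP_{P,R}^{riu2} of Eqs. (13)-(14) over the interior pixels.

    Labels are 0..P for 'uniform' patterns and P+1 otherwise."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    gc = img[1:-1, 1:-1]                                   # centre pixels
    bits = []
    for p in range(P):
        a = 2.0 * np.pi * p / P
        dy = int(round(float(-R * np.sin(a))))             # grid-rounded offsets
        dx = int(round(float(R * np.cos(a))))
        gp = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]     # shifted neighbours
        bits.append((gp - gc >= 0).astype(int))            # s(g_p - g_c), Eq. (14)
    bits = np.stack(bits)                                  # (P, H-2, W-2)
    # U: number of 0/1 transitions around the circle
    u = np.abs(bits[0] - bits[-1]) + np.abs(np.diff(bits, axis=0)).sum(axis=0)
    return np.where(u <= 2, bits.sum(axis=0), P + 1)       # Eq. (13)

print(lbp_riu2(np.ones((5, 5))))  # flat patch: every label is the uniform value 8
```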

GLCM is a widely used texture analysis method that considers the spatial dependencies of gray levels from a statistical perspective. In the work by Haralick et al.,5 14 statistical measures extracted from the GLCM were introduced. Nevertheless, many of them are strongly correlated with each other, and there is no definitive conclusion about which features are more important and discriminative than others; choosing appropriate features from the 14 measures is still an open research question. Haralick et al. selected four features, energy, entropy, correlation, and contrast, as texture features and conducted classification experiments on a satellite imagery data set, obtaining good classification results.5 Considering the good performance of these four features on remote sensing images, energy, entropy, correlation, and contrast are used in our study. They are defined by

f_1 = \sum_i \sum_j p_{d,\theta}^2(i,j),
f_2 = -\sum_i \sum_j p_{d,\theta}(i,j) \log p_{d,\theta}(i,j),
f_3 = \frac{\sum_i \sum_j ij \, p_{d,\theta}(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y},
f_4 = \sum_i \sum_j (i-j)^2 p_{d,\theta}(i,j), (15)

where f_1, f_2, f_3, and f_4 stand for energy, entropy, correlation, and contrast, respectively. Here \mu_x = \sum_i i \sum_j p_{d,\theta}(i,j), \mu_y = \sum_j j \sum_i p_{d,\theta}(i,j), \sigma_x^2 = \sum_i (i - \mu_x)^2 \sum_j p_{d,\theta}(i,j), and \sigma_y^2 = \sum_j (j - \mu_y)^2 \sum_i p_{d,\theta}(i,j). (i,j) \in [1, N_g] indexes the entries of the GLCM, where N_g is the number of distinct gray levels in the quantized image, and p_{d,\theta}(i,j) is the normalized GLCM for pixel distance d and direction \theta. In our study, the pixel distance is set to 1 and four directions \theta \in {0, 45, 90, 135 deg} are chosen. In addition, because the images have 256 gray levels and excessive gray levels would drastically increase the cost of computing GLCMs, we quantize the images to eight gray levels, so each GLCM is an 8 x 8 symmetric matrix and N_g = 8. Consequently, a total of eight texture features, composed of the mean and standard deviation of the four features in Eq. (15), is obtained.
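The four measures of Eq. (15) for a single displacement can be sketched as follows (0-based gray levels are used here for simplicity; the function name and toy image are ours):

```python
import numpy as np

def glcm_features(img, offset=(0, 1), levels=8):
    """Energy, entropy, correlation, and contrast (Eq. 15) from one
    normalized symmetric GLCM.

    img    : 2-D integer image already quantized to `levels` gray levels
    offset : displacement (dy, dx) encoding pixel distance d and direction theta
    """
    img = np.asarray(img)
    dy, dx = offset
    H, W = img.shape
    p = np.zeros((levels, levels), dtype=float)
    for i in range(H):
        for j in range(W):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < H and 0 <= j2 < W:
                p[img[i, j], img[i2, j2]] += 1
                p[img[i2, j2], img[i, j]] += 1   # symmetric counts
    p /= p.sum()

    i_idx, j_idx = np.indices((levels, levels))
    energy = (p ** 2).sum()
    nz = p > 0
    entropy = -(p[nz] * np.log(p[nz])).sum()
    mu_x, mu_y = (i_idx * p).sum(), (j_idx * p).sum()
    sig_x = np.sqrt((((i_idx - mu_x) ** 2) * p).sum())
    sig_y = np.sqrt((((j_idx - mu_y) ** 2) * p).sum())
    corr = ((i_idx * j_idx * p).sum() - mu_x * mu_y) / (sig_x * sig_y)
    contrast = (((i_idx - j_idx) ** 2) * p).sum()
    return energy, entropy, corr, contrast

# vertical stripes: horizontal neighbours are perfectly anti-correlated
f1, f2, f3, f4 = glcm_features(np.tile([0, 1], (4, 2)), offset=(0, 1), levels=2)
print(round(float(f1), 2), round(float(f3), 2), round(float(f4), 2))  # 0.5 -1.0 1.0
```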

Parameters setting

How to choose optimal parameters for Gabor wavelets is still an open question, because different parameters may produce different experimental results even for the same problem. In this study, we adopt the default parameters used in Ref. 22, described as follows. Gabor wavelets of five scales v \in {0, 1, 2, 3, 4} and eight orientations u \in {0, 1, ..., 7}, which are used in most cases, are adopted because they extract texture features from more scales and orientations. For the remaining parameters, \sigma = 2\pi, k_{max} = \pi/2, and f = \sqrt{2} are adopted as empirical values. In addition, the size of the Gabor window is also an important parameter, set to 128 x 128 in this study. A total of 80 Gabor texture features is then obtained.

According to Eq. (5) and the restriction |v - v'| <= 1, we obtain 13 scale pairs (v, v') \in {(0,0), (1,1), (2,2), (3,3), (4,4), (0,1), (1,0), (1,2), (2,1), (2,3), (3,2), (3,4), (4,3)} and eight orientations u. Thus, the CGWT and CGOT representations have a total of 432 (120 + 312) and 392 (80 + 312) feature dimensions, respectively.
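The dimension counts quoted above can be checked in a few lines:

```python
# scale pairs (v, v') with |v - v'| <= 1 over the five scales
pairs = [(v, vp) for v in range(5) for vp in range(5) if abs(v - vp) <= 1]
n_orient, n_scales = 8, 5
uni_dim = n_scales * n_orient * 3        # R/G/B unichrome components
opp_dim = len(pairs) * n_orient * 3      # RG/RB/GB opponent components
gabor_dim = n_scales * n_orient * 2      # mean + std per Gabor filter
print(len(pairs), uni_dim + opp_dim, gabor_dim + opp_dim)  # 13 432 392
```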

Similarity Measure

Similarity measurement is an indispensable and important step in image retrieval systems, and different methods may produce very different results even for identical query images. Widely used similarity measures, such as the Minkowski distance, histogram intersection, K-L divergence, and Jeffrey divergence, tend to have their own scopes of application. Accordingly, specific similarity measures are defined for certain features in this study.

Given two images I_i and I_j with corresponding CGWT representations f_i^{CGWT} and f_j^{CGWT}, the distance measure for CGWT is defined as in Ref. 19:

d_{ij}^{CGWT} = \sum_k \left( \frac{f_{i,k}^{CGWT} - f_{j,k}^{CGWT}}{\sigma_k^{CGWT}} \right)^2, (16)

where \sigma_k^{CGWT} is the standard deviation of the k-th component of the CGWT representation over the entire image database. This distance measure has been used to classify textural images, with ideal classification results.19

For the CGOT representation, considering that it is the combination of the Gabor texture feature and the opponent feature, we integrate the distance measure of Eq. (16) and the distance measure for Gabor texture features in Ref. 6, and define a much simpler distance measure:

d_{ij}^{CGOT} = \frac{1}{\sigma^{CGOT}} \sum_k \left| f_{i,k}^{CGOT} - f_{j,k}^{CGOT} \right|, (17)

where \sigma^{CGOT} is the standard deviation of the CGOT representation over the entire image database, and f_i^{CGOT} and f_j^{CGOT} are the CGOT representations of images I_i and I_j, respectively.

Note that similarity measure Eq. (17) has a similar form to, but a different meaning from, similarity measure Eq. (16). Since the Gabor texture and the opponent feature together constitute the CGOT representation, a distance measure that takes both into consideration is appropriate. In this similarity measure, the CGOT representation is regarded as a unitary feature, so it is unnecessary to consider each component separately when calculating the standard deviation \sigma^{CGOT}.
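The two measures can be sketched as follows, under the reading above that Eq. (16) normalizes per component while Eq. (17) uses one scalar standard deviation; the toy database is illustrative.

```python
import numpy as np

def d_cgwt(fi, fj, sigma):
    """Eq. (16): sum of squared component differences, each normalized by
    that component's standard deviation over the database."""
    return (((fi - fj) / sigma) ** 2).sum()

def d_cgot(fi, fj, sigma_scalar):
    """Eq. (17): sum of absolute differences normalized by a single scalar
    standard deviation of the whole representation."""
    return np.abs(fi - fj).sum() / sigma_scalar

db = np.array([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0]])  # toy feature database
print(round(float(d_cgwt(db[0], db[1], db.std(axis=0))), 6))  # 3.0
print(float(d_cgot(db[0], db[1], db.std())))
```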

Data Set

To evaluate the performance of the proposed descriptors, eight land-use/land-cover (LULC) classes from the UC Merced LULC data set are chosen as the retrieval image database. The original LULC data set is a manually constructed collection of 21 image classes; the 100 images in each class are 256 x 256 tiles cropped from large aerial images of various US regions with a spatial resolution of 30 cm.23 The LULC data set has been used in many similar studies24,25 and is publicly available to other researchers. Some image patches of the eight LULC classes used in our experiments are shown in Fig. 2; from left to right, they are agricultural, airplane, beach, buildings, chaparral, residential, forest, and harbor.

Figure 2: Image patches of eight land-use/land-cover classes used in our experiments, from left to right: (a) agricultural, (b) airplane, (c) beach, (d) buildings, (e) chaparral, (f) residential, (g) forest, and (h) harbor.

Performance of Proposed Descriptors

Accurate and objective evaluation criteria have long been a hot topic in the CBIR community. Precision, recall, precision-recall curves, and the average normalized modified retrieval rank (ANMRR) are commonly accepted evaluation criteria. However, owing to the semantic gap, evaluating CBIR is not effortless, and it is possible to obtain different performance figures with different evaluation methods even on the same data set.26 To avoid such problems, precision and precision-recall curves are chosen as the evaluation methods in this study, since they can be treated as similar evaluations from different perspectives. Precision is the fraction of retrieved images that are correct, and recall is the fraction of ground-truth items that are retrieved for a given result set.23
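Precision and recall at a cutoff k can be computed as follows; with each LULC class containing 100 images, class_size would be 100, and the toy labels below are illustrative.

```python
def precision_recall_at_k(ranked_labels, query_label, k, class_size):
    """Precision and recall over the first k retrieved images.

    ranked_labels : class labels of database images, most similar first
    class_size    : number of ground-truth images in the query's class
    """
    hits = sum(1 for lbl in ranked_labels[:k] if lbl == query_label)
    return hits / k, hits / class_size

print(precision_recall_at_k(list('aabab'), 'a', k=4, class_size=3))  # (0.75, 1.0)
```

Sweeping k over the database size and plotting the resulting (recall, precision) pairs yields the precision-recall curves of Fig. 4.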

Figure 3 shows the performance of the proposed features and the conventional texture features. The last bin of the histogram, labeled "average," gives the average precision of the corresponding features. The chart indicates that, compared with wavelet texture, the CGOT and CGWT representations perform better on six classes, i.e., airplane, beach, chaparral, residential, forest, and harbor, and worse on the other two classes, i.e., agricultural and buildings. Nevertheless, the two proposed features achieve the highest average precision over all image classes. Meanwhile, CGOT further increases the average precision of agricultural, airplane, beach, buildings, residential, and harbor over CGWT, which is particularly obvious for agricultural and harbor owing to the abundant texture information in these image classes.

Figure 3: Average precision of each image class with the proposed features and other texture features.

To demonstrate the superiority of the proposed representations, precision-recall curves for the different features are presented in Fig. 4 by varying the number of returned images. As the number of returned images increases, the precision of the conventional texture features decreases rapidly, particularly for GLCM and LBP. Among the remaining three features, CGOT clearly gives the best performance. For the CGWT representation and wavelet texture, a recall of 0.5 can be treated as a marginal value: when recall is less than 0.5, the CGWT representation performs better, and the two perform comparably when recall exceeds 0.5. The experimental results here are in accordance with those in Fig. 3, and both validate the effectiveness and good performance of the proposed color texture descriptors.

Figure 4: Precision-recall curves for different features.

Comparisons of Used Similarity Measures

As mentioned above, an appropriate similarity measure is necessary in CBIR. For the conventional texture features, i.e., GLCM, LBP, and wavelet texture, we choose the L2 distance as the similarity measure. For the CGWT representation, the distance measure presented in Ref. 19 is used. For the CGOT representation, the characteristics of the existing distance measures for Gabor texture, unichrome, and opponent features are considered, and a simpler distance measure is defined. Table 1 compares the performance of the CGOT representation using the proposed similarity measure of Eq. (17) with some other similarity measures, namely the L1 distance, the L2 distance, Jeffrey divergence,27 and the distance measure of Ref. 19.

Table 1: Comparisons of CGOT using different distance measures.

For each number of returned images, the proposed similarity measure achieves the highest precision, and its average performance is the best as well. Table 1 demonstrates that the proposed distance measure is an appropriate and effective similarity measure.

Examples of Remote Sensing Image Retrieval

Figure 5 shows a remote sensing image retrieval example using the two proposed descriptors. Figure 5(a) is the query image from the agricultural class, and Figs. 5(b) and 5(c) are the first 30 retrieved images for CGOT and CGWT, respectively. Note that these images are returned in descending order of similarity, so higher-ranked images are more similar to the query image.

Figure 5: One retrieval example of an agricultural image: (a) query image, (b) performance of color Gabor opponent texture, (c) performance of color Gabor wavelet texture.

According to the retrieval results of two descriptors, CGOT retrieves more similar images than CGWT. In addition, among the first 12 retrieved images, CGOT returns two irrelevant images, while CGWT returns five irrelevant images, which also indicates the better performance of CGOT descriptor.

Discussion

From the previous remote sensing image retrieval experiments, some interesting conclusions can be drawn.

  1. The proposed color texture descriptors, CGWT and CGOT, describe the content of remote sensing images well and achieve good performance compared with wavelet texture, LBP texture, and GLCM texture, because they take the discriminative information among color bands into consideration.
  2. As shown in Fig. 3, CGOT improves on the performance of CGWT and achieves the highest average precision over the entire image database, and similar behavior is observed in Fig. 4. These results indicate that the Gabor texture has better descriptive power than the unichrome feature in terms of image texture.
  3. The similarity measure defined for CGOT is appropriate. This reveals that the characteristics of a feature should be taken into consideration when defining a similarity measure, because the measure plays an important role in the performance of the proposed representations.

In this study, all experiments were conducted on aerial remote sensing images from one public image database. However, not all the selected images have a regular texture structure, which affects the performance of the proposed descriptors. In addition, the proposed descriptors are likely to be suitable for hyperspectral image retrieval, because hyperspectral images have high spectral resolution and more discriminative information can be extracted from their bands.

With the rapid development of remote sensing technology, the amount of accessible remote sensing data has been increasing at an incredible rate, which not only provides researchers more choices for various applications, but also brings more challenges. Under these circumstances, CBIR is a good choice for the effective organization and management of massive remote sensing data.

Traditionally, low-level features, particularly texture features, are widely used in the CBIR community for their special characteristics. Nevertheless, conventional texture features tend to be extracted directly from grayscale images, ignoring the complementary information between color bands that is of great importance.

To exploit this complementary information for remote sensing image retrieval, the CGWT and CGOT representations have been proposed based on the Gabor filter and opponent process theory. The images are first filtered with Gabor filters of five scales and eight orientations, and then the unichrome, opponent, and Gabor texture features are extracted. Finally, the CGWT and CGOT representations are constructed and used for remote sensing image retrieval.

Considering the semantic gap and some other difficulties, two similar evaluations, i.e., precision and precision-recall curves, are chosen to evaluate the performance of all texture features. Results demonstrate that CGWT and CGOT perform better than GLCM, LBP, and wavelet texture, and that CGOT not only improves the performance of some image classes over CGWT but also increases the overall precision of all queried remote sensing images. In addition, a similarity measure for CGOT based on two existing distance measures has been defined; compared with some widely used distance measures, the proposed similarity measure shows better performance.

In the future, the fusion mechanism of unichrome features and opponent features, Gabor texture and opponent features, as well as the influence of color space on proposed descriptors will be considered.

The authors would like to thank Shawn Newsam for the LULC data set and the anonymous reviewers for their comments and corrections. This work was supported in part by National Science and Technology Specific Projects under Grant No. 2012YQ16018505 and the National Natural Science Foundation of China under Grant No. 61172174.

Ojala  T., Pietikäinen  M., Harwood  D., “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognit.. 29, (1 ), 51 –59 (1996). 0031-3203 CrossRef
du Buf  J. H., Kardan  M., Spann  M., “Texture feature performance for image segmentation,” Pattern Recognit.. 23, (3 ), 291 –309 (1990). 0031-3203 CrossRef
Smeulders  A. W. et al., “Content-based image retrieval at the end of the early years,” IEEE Trans. Pattern Anal. Mach. Intell.. 22, (12 ), 1349 –1380 (2000). 0162-8828 CrossRef
Datta  R. et al., “Image retrieval: ideas, influences, and trends of the new age,” ACM Comput. Surv.. 40, (2 ), 5, :1 –5:60 (2008). 0360-0300 CrossRef
Haralick  R. M., Shanmugam  K., Dinstein  I. H., “Textural features for image classification,” IEEE Trans. Syst. Man Cybern.. SMC-3, (6 ), 610 –621 (1973). 1083-4427 CrossRef
Manjunath  B. S., Ma  W. Y., “Texture features for browsing and retrieval of image data,” IEEE Trans. Pattern Anal. Mach. Intell.. 18, (8 ), 837 –842 (1996). 0162-8828 CrossRef
Chang  T., Kuo  C. C., “Texture analysis and classification with tree-structured wavelet transform,” IEEE Trans. Image Process.. 2, (4 ), 429 –441 (1993). 1057-7149 CrossRef
Zhenfeng Shao received his PhD degree from Wuhan University, China, in 2004. He is currently a professor at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are image retrieval, image fusion, and urban remote sensing applications.

Weixun Zhou received his BS degree from Anhui University of Science and Technology, Anhui, China, in 2012. He is now working toward his master’s degree at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are remote sensing image retrieval and image processing.

Lei Zhang received her BS degree from Xinyang Normal University, Henan, China, in 2011. She is currently working toward her PhD degree at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. Her research interests include dimensionality reduction, hyperspectral classification, sparse representation, and pattern recognition in remote sensing images.

Jihu Hou received his BS degree from Hubei University, China, in 2012. He is now working toward his master’s degree at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are remote sensing image retrieval, image processing, and GIS applications.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.

Citation

Zhenfeng Shao, Weixun Zhou, Lei Zhang, and Jihu Hou, "Improved color texture descriptors for remote sensing image retrieval," J. Appl. Remote Sens. 8(1), 083584 (Jul 24, 2014). http://dx.doi.org/10.1117/1.JRS.8.083584


Figures

Fig. 1: Architecture of remote sensing image retrieval based on the proposed descriptors.

Fig. 2: Image patches of the eight land-use/land-cover classes used in our experiments, from left to right: (a) agricultural, (b) airplane, (c) beach, (d) buildings, (e) chaparral, (f) residential, (g) forest, and (h) harbor.

Fig. 3: Average precision of each image class with the proposed features and other texture features.

Fig. 4: Precision-recall curves for different features.

Fig. 5: A retrieval example for an agricultural image: (a) query image, (b) results of color Gabor opponent texture, (c) results of color Gabor wavelet texture.

Tables

Table 1: Comparisons of CGOT using different distance measures.
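As context for the distance-measure comparison above, the sketch below shows how candidate images are typically ranked against a query by a distance between feature vectors. This is a generic illustration, not the paper's own similarity function for CGOT; the three measures shown (L1, L2, and chi-square) are common choices for descriptor matching.

```python
import numpy as np

def l1_distance(a, b):
    """Manhattan (city-block) distance."""
    return np.sum(np.abs(a - b))

def l2_distance(a, b):
    """Euclidean distance."""
    return np.sqrt(np.sum((a - b) ** 2))

def chi_square_distance(a, b, eps=1e-10):
    """Chi-square distance, often used for histogram-like features."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

# Toy example: a query feature vector and one candidate from the database.
query = np.array([0.2, 0.4, 0.1, 0.3])
candidate = np.array([0.1, 0.5, 0.2, 0.2])

for name, fn in [("L1", l1_distance), ("L2", l2_distance),
                 ("chi-square", chi_square_distance)]:
    print(f"{name}: {fn(query, candidate):.4f}")
```

In a retrieval loop, the database images would be sorted by ascending distance to the query; which measure works best depends on the descriptor, which is exactly what a comparison such as Table 1 evaluates.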

References

Ojala T., Pietikäinen M., Harwood D., "A comparative study of texture measures with classification based on featured distributions," Pattern Recognit. 29(1), 51–59 (1996).
du Buf J. H., Kardan M., Spann M., "Texture feature performance for image segmentation," Pattern Recognit. 23(3), 291–309 (1990).
Smeulders A. W. et al., "Content-based image retrieval at the end of the early years," IEEE Trans. Pattern Anal. Mach. Intell. 22(12), 1349–1380 (2000).
Datta R. et al., "Image retrieval: ideas, influences, and trends of the new age," ACM Comput. Surv. 40(2), 5:1–5:60 (2008).
Haralick R. M., Shanmugam K., Dinstein I. H., "Textural features for image classification," IEEE Trans. Syst. Man Cybern. SMC-3(6), 610–621 (1973).
Manjunath B. S., Ma W. Y., "Texture features for browsing and retrieval of image data," IEEE Trans. Pattern Anal. Mach. Intell. 18(8), 837–842 (1996).
Chang T., Kuo C. C., "Texture analysis and classification with tree-structured wavelet transform," IEEE Trans. Image Process. 2(4), 429–441 (1993).
Ojala T., Pietikäinen M., Mäenpää T., "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002).
Lin C. H., Chen R. T., Chan Y. K., "A smart content-based image retrieval system based on color and texture feature," Image Vision Comput. 27(6), 658–665 (2009).
Chun Y. D., Kim N. C., Jang I. H., "Content-based image retrieval using multiresolution color and texture features," IEEE Trans. Multimedia 10(6), 1073–1084 (2008).
Liapis S., Tziritas G., "Color and texture image retrieval using chromaticity histograms and wavelet frames," IEEE Trans. Multimedia 6(5), 676–686 (2004).
Flickner M. et al., "Query by image and video content: the QBIC system," Computer 28(9), 23–32 (1995).
Niblack C. W. et al., "QBIC project: querying images by content, using color, texture, and shape," Proc. SPIE 1908, 173–187 (1993).
Yue J. et al., "Content-based image retrieval using color and texture fused features," Math. Comput. Modell. 54(3), 1121–1127 (2011).
Singha M., Hemachandran K., "Content based image retrieval using color and texture," SIPIJ 3(1), 39–57 (2012).
Tsai M. H. et al., "Color-texture-based image retrieval system using Gaussian Markov random field model," Math. Prob. Eng. 2009, 1–17 (2010).
Mäenpää T., Pietikäinen M., "Classification with color and texture: jointly or separately," Pattern Recognit. 37(8), 1629–1640 (2004).
Hurvich L. M., Jameson D., "An opponent-process theory of color vision," Psychol. Rev. 64(6), 384–404 (1957).
Jain A., Healey G., "A multiscale representation including opponent color features for texture recognition," IEEE Trans. Image Process. 7(1), 124–128 (1998).
Shi M., Healey G., "Hyperspectral texture recognition using a multiscale opponent representation," IEEE Trans. Geosci. Remote Sens. 41(5), 1090–1095 (2003).
Choi J. Y., Ro Y. M., Plataniotis K. N., "Color local texture features for color face recognition," IEEE Trans. Image Process. 21(3), 1366–1380 (2012).
Liu C., Wechsler H., "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Trans. Image Process. 11(4), 467–476 (2002).
Yang Y., Newsam S., "Geographic image retrieval using local invariant features," IEEE Trans. Geosci. Remote Sens. 51(2), 818–832 (2013).
Risojević V., Babić Z., "Fusion of global and local descriptors for remote sensing image classification," IEEE Geosci. Remote Sens. Lett. 10(4), 836–840 (2013).
Aptoula E., "Remote sensing image retrieval with global morphological texture descriptors," IEEE Trans. Geosci. Remote Sens. 52(5), 3023–3034 (2013).
Müller H., Marchand-Maillet S., Pun T., "The truth about Corel: evaluation in image retrieval," in Image and Video Retrieval, pp. 38–49, Springer, Berlin Heidelberg (2002).
Rubner Y., Tomasi C., Guibas L. J., "The earth mover's distance as a metric for image retrieval," Int. J. Comput. Vision 40(2), 99–121 (2000).
