Open Access
28 August 2013 Comparison between pixel- and object-based image classification of a tropical landscape using Système Pour l’Observation de la Terre-5 imagery
Hadi Memarian, Siva K. Balasundram, Raj Khosla
Abstract
Based on the Système Pour l’Observation de la Terre-5 imagery, two main techniques for classifying land-use categories in a tropical landscape are compared using two supervised algorithms: the maximum likelihood classifier (MLC) and the K-nearest neighbor object-based classifier. Nine combinations of scale level (SL10, SL30, and SL50) and nearest neighbor (NN3, NN5, and NN7) are investigated in the object-based classification. Accuracy assessment is performed using two main disagreement components, i.e., quantity disagreement and allocation disagreement. The MLC results in a higher total disagreement in the total landscape than object-based image classification. Relative to the SL30-NN5 object-based classifier, the MLC produces about 250% more allocation error. Therefore, this classifier shows a higher performance in land-use classification of the Langat basin.

1.

Introduction

Decision making in each country or region needs adequate information on many complex interrelated aspects of its activities. Land use is one such aspect, and knowledge about land use and land cover has become increasingly important.1 Classification of land use and land cover based on remotely sensed imagery can be partitioned into two general image analysis methods. The first approach is based on pixels, which has long been employed for classifying remotely sensed imagery. The second approach is based on objects, which has become increasingly common over the last decade.2,3

Conventional pixel-based classification techniques, such as the maximum likelihood classifier (MLC), have been used extensively for the extraction of thematic information since the 1980s.4,5 MLC, the most established approach to image classification,6,7 assumes a normal (Gaussian) distribution of multivariate data. In this method, each pixel is allocated to the class with the highest posterior probability of membership, computed in a feature space whose dimensionality equals the number of bands in the original image.8 This requires users to determine the classification scheme carefully, so that each class follows a Gaussian distribution, and MLC ideally has to be performed at the spectral class level.7 Some examples of MLC application for land-use and land-cover classification include a comparison of MLC and an artificial neural network in the USA using Landsat Thematic Mapper (TM) data,9 the same comparison in Turkey using Landsat TM data,10 an evaluation of a fuzzy classifier and MLC using Landsat Enhanced TM+ (ETM+) data in Iran,11 and a comparison between object-oriented classification and MLC using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data in China.12

In terms of classification results, MLC is more suitable for remote-sensing imagery with medium and low spatial resolution, but it cannot exploit the full advantages of the ground geometric structure and texture information contained in high-spatial-resolution imagery.12 The object-oriented classification method relies on the spectral characteristics of ground objects and makes further use of geometric and structural information.12,13 In this method, an object is a region of interest with spatial, spectral (brightness and color), and/or textural characteristics that define the region.14 Several studies have compared object-based classification with pixel-based techniques. For example, Yan et al.13 compared MLC with a K-nearest neighbor (K-NN) object-based analysis using ASTER imagery. They showed that the object-based K-NN classification considerably outperformed the pixel-based MLC classification in Wuda, China (overall accuracies of 83.25% and 46.48%, respectively). A comparison between MLC and a K-NN object-based classification method was performed using a decision tree approach based on high-spatial-resolution digital airborne imagery.15 This study in northern California showed that K-NN object-based classification with one nearest neighbor (NN) outperformed MLC by 17%. Another comparison, in Gettysburg, Pennsylvania, between MLC and a K-NN object-based classifier was carried out by Platt and Rapoza16 using multispectral IKONOS imagery. They showed that, using expert knowledge, the object-based K-NN classifier had the best overall accuracy of 78%. The application of an object-based classifier to pan-sharpened Quickbird imagery in agricultural environments led to a higher accuracy (93.69%) than MLC (89.6%) in southern Spain.17 Myint et al.18 used Quickbird imagery to classify urban land cover in Phoenix, Arizona. They compared MLC with a K-NN object-based classifier and concluded that the object-based classifier, with an overall accuracy of 90.4%, outperformed MLC, with an overall accuracy of 67.6%. In another study, application of an object-based classifier to Système Pour l’Observation de la Terre (SPOT)-5 Panchromatic (PAN) imagery proved more practicable for the study site (Beijing Olympic Games Cottage) than conventional pixel-based approaches.5

The use of the kappa index of agreement19 along with “proportion correct” has become customary in the remote-sensing literature for the purpose of accuracy assessment. Pontius20 and Pontius and Millones21 exposed some of the conceptual problems with the standard kappa and proposed a suite of variations on kappa to remedy its flaws. Typically, the kappa statistic compares accuracy with a random baseline. According to Pontius and Millones,21 however, randomness is not a logical option for mapping. In addition, several kappa indices suffer from basic theoretical errors. The standard kappa and its variants are therefore often complicated to compute, difficult to understand, and unhelpful to interpret.22,21 As such, in this study, two components of disagreement between classified and ground-truthed maps, in terms of the quantity and spatial allocation of the categories as suggested by Pontius and Millones,21 were employed.

This work aimed to evaluate the capability of the MLC and the K-NN object-based classifier for land-use classification of the Langat basin using SPOT-5 imagery.

2.

Materials and Methods

2.1.

Study Area

The Langat basin is located at the southern part of Klang Valley, which is the most urbanized river basin in Malaysia. In recent decades, the Langat basin has undergone rapid urbanization, industrialization, and agricultural development.23 The Langat basin is also a main source of drinking water for the surrounding areas, is a source of hydropower, and plays an important role in flood mitigation. Over the past four decades, the Langat basin has served 50% of the Selangor State population. Its average annual rainfall is 2400 mm. The basin has a rich diversity of landforms, surface features, and land cover.24,25

Due to the national importance of the Langat basin, a pilot region (upstream of the Langat river) with a total area of 111.17 km² was selected for land-use classification using SPOT-5 imagery (Fig. 1).

Fig. 1

Study area represented by a false-color composite.


2.2.

Data Set

A system/map-corrected and pan-sharpened SPOT-5 image of the upstream area of the Langat basin, acquired on September 20, 2006, was used. SPOT-5 offers a resolution of 2.5 m in PAN mode and 10 to 20 m in multispectral mode. The multispectral mode comprises four bands: B1 (0.50 to 0.59 μm), B2 (0.61 to 0.68 μm), B3 (0.78 to 0.89 μm), and B4 (1.58 to 1.75 μm).26,27 The pan-sharpening procedure28 combines the system/map-corrected multispectral image with the PAN image to produce a high-resolution color image. The dark subtraction technique29 was applied for atmospheric scattering correction over the entire scene. Cloud cover was masked out during the classification process. Due to the higher spectral separability of signatures in the 1-3-4 band combination, these layers were selected for further processing. The 2006 land-use map, obtained from the Department of Agriculture, Malaysia, was used as a reference for the definition of land-use classes and the preparation of ground-truth maps. The study site comprises 10 land-use/cover types: (1) scrubland, (2) water bodies, (3) orchard, (4) urbanized/residential area, (5) rubber, (6) forest, (7) cleared lands, (8) grassland, (9) oil palm, and (10) paddy.

2.3.

Maximum Likelihood Classification

MLC uses the following discriminant function, which is maximized for the most likely class14,30,8:

Eq. (1)

g_c(x) = \ln(a_c) - \tfrac{1}{2}\ln(|\mathrm{cov}_c|) - \tfrac{1}{2}(x - M_c)^T \,\mathrm{cov}_c^{-1}\,(x - M_c),
where c is the class, x is the n-dimensional data vector (n being the number of bands), a_c is the prior probability with which class c occurs in the image (assumed equal for all classes), cov_c is the covariance matrix of the data in class c, |cov_c| is its determinant, cov_c^{-1} is its inverse, T denotes the vector transpose, and M_c is the mean vector of class c.
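
As a concrete illustration, Eq. (1) can be evaluated for each class and the maximum taken. The sketch below (Python with NumPy, not from the paper; the two-band class statistics are hypothetical) assumes equal priors, as stated above:

```python
import numpy as np

def mlc_discriminant(x, mean, cov, prior):
    """Gaussian maximum-likelihood discriminant g_c(x) for one class, Eq. (1)."""
    diff = x - mean
    return (np.log(prior)
            - 0.5 * np.log(np.linalg.det(cov))
            - 0.5 * diff @ np.linalg.inv(cov) @ diff)

def mlc_classify(x, means, covs, priors):
    """Assign the pixel vector x to the class whose discriminant is largest."""
    scores = [mlc_discriminant(x, m, c, p)
              for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))

# Two hypothetical classes in a two-band feature space, equal priors.
means = [np.array([10.0, 10.0]), np.array([50.0, 50.0])]
covs = [4.0 * np.eye(2), 4.0 * np.eye(2)]
priors = [0.5, 0.5]
print(mlc_classify(np.array([11.0, 9.0]), means, covs, priors))  # prints 0
```

With equal priors and equal covariances, the decision reduces to choosing the class with the nearest mean, which the example reflects.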

The Jeffries–Matusita distance8 was applied to compute spectral separability between pairs of training sites with different land uses. This measure ranges from 0 to 2.0; pairs with a distance <1 indicate low separability.14
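
For Gaussian class signatures, the Jeffries–Matusita distance is commonly computed from the Bhattacharyya distance as JM = 2(1 − e^(−B)); the sketch below (an illustration using that standard formulation, with made-up signatures) shows the 0-to-2 range in practice:

```python
import numpy as np

def jeffries_matusita(m1, cov1, m2, cov2):
    """Jeffries-Matusita distance between two Gaussian signatures (range 0 to 2)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    cov_avg = (cov1 + cov2) / 2.0
    diff = m1 - m2
    # Bhattacharyya distance between the two class distributions
    b = (0.125 * diff @ np.linalg.inv(cov_avg) @ diff
         + 0.5 * np.log(np.linalg.det(cov_avg)
                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return 2.0 * (1.0 - np.exp(-b))

# Identical signatures -> 0 (inseparable); well-separated means -> close to 2.
print(jeffries_matusita([0, 0], np.eye(2), [0, 0], np.eye(2)))   # 0.0
print(jeffries_matusita([0, 0], np.eye(2), [50, 50], np.eye(2)))  # ~2.0
```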

2.4.

Image Segmentation

Segmentation, a fundamental first step in object-based image analysis,3 is the process of partitioning an image into segments by grouping neighboring pixels with similar feature values such as brightness, texture, and color. These segments ideally correspond to real-world objects.14 ENVI EX (Environment for Visualizing Images EX) employs an edge-based segmentation algorithm that is very fast and requires only one input parameter, the scale level (SL). By suppressing weak edges to different levels, the algorithm can yield multiscale segmentation results, from finer to coarser.14 The selection of an appropriate value for the SL is considered the most important stage in object-based image analysis.3 The SL is a measure of the greatest heterogeneity change when two objects are merged; after calculation it is used as a threshold to terminate the segmentation algorithm.31,5 This value controls the relative size of the image objects, which has a direct impact on the classification accuracy of the final map.3,18 Generally, choosing a high SL causes fewer segments to be defined and choosing a low SL causes more segments to be defined.14 In this study, based on previous experiences and literature recommendations,32,5,15 three levels of scale factor, i.e., 10, 30, and 50, were used in image segmentation (Fig. 2).
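
ENVI EX's edge-based algorithm itself is proprietary, but the effect described above (suppressing weak edges yields coarser segmentations) can be sketched with a toy example in Python with NumPy/SciPy. The function and thresholding scheme here are illustrative assumptions, not the actual ENVI implementation:

```python
import numpy as np
from scipy import ndimage

def edge_scale_segments(image, scale):
    """Toy edge-based segmentation: treat gradient magnitudes below `scale`
    as suppressed (weak) edges and label the connected non-edge regions."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    interior = grad <= scale          # weak edges are absorbed into regions
    labels, n_segments = ndimage.label(interior)
    return labels, n_segments

# Three vertical bands: a weak step (0 -> 10) and a strong step (10 -> 100).
img = np.zeros((10, 10))
img[:, 4:7] = 10
img[:, 7:] = 100
print(edge_scale_segments(img, scale=2)[1])   # low scale: finer segmentation (3)
print(edge_scale_segments(img, scale=20)[1])  # high scale: coarser segmentation (2)
```

At the low scale both steps act as segment boundaries; at the high scale the weak step is suppressed and its two sides merge, mirroring the finer-to-coarser behavior in Fig. 2.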

Fig. 2

Image segmentation using different scale levels: (a) SL=10, (b) SL=30, and (c) SL=50.


2.5.

K-Nearest Neighbor Classification

The K-NN classifier considers the Euclidean distance, in n-dimensional space, from the target to the elements in the training data, where n is the number of object attributes (i.e., spatial, spectral, or textural properties of a vector object) used during classification.14,33 The K-NN is generally more robust than a traditional nearest-neighbor classifier, since the K nearest distances are used in a majority vote to determine the class of the target.34,14,33 The K-NN is also much less sensitive to outliers and noise in the dataset, and generally produces a more accurate classification outcome than traditional nearest-neighbor methods.14 The K parameter is the number of neighbors considered during classification. The ideal choice of K depends on the dataset and the training data. Larger values tend to reduce the effect of noise and outliers, but they may cause inaccurate classification.14,33 In this article, K values of 3, 5, and 7 were examined at each SL. As such, the following nine combinations of SL and NN were investigated: SL10-NN3, SL10-NN5, SL10-NN7, SL30-NN3, SL30-NN5, SL30-NN7, SL50-NN3, SL50-NN5, and SL50-NN7.
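
A minimal majority-vote K-NN in plain NumPy illustrates the mechanism; the two-attribute training objects and class names below are hypothetical stand-ins for the spatial/spectral/textural attributes of segmented image objects:

```python
import numpy as np
from collections import Counter

def knn_classify(target, train_attrs, train_labels, k=5):
    """Classify `target` by majority vote among its K nearest training
    objects in n-dimensional attribute space (Euclidean distance)."""
    dists = np.linalg.norm(np.asarray(train_attrs, float) - target, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical two-attribute training objects for two classes.
attrs = [[0, 0], [1, 0], [0, 1], [10, 10], [11, 10], [10, 11]]
labels = ["forest", "forest", "forest", "paddy", "paddy", "paddy"]
print(knn_classify(np.array([0.5, 0.5]), attrs, labels, k=3))  # prints forest
```

Increasing k here pulls more distant objects into the vote, which is why large K values smooth out noise but can misclassify targets near class boundaries.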

2.6.

Accuracy Assessment

The ground-truth map was prepared based on observed data (the 2006 land-use map) and a field survey covering about 10% of the total area. Disagreement parameters quantify the disagreement between simulated and observed maps.22,21,35,36 Quantity disagreement (QD), or quantification error, occurs when the number of cells of a category in the simulated map differs from the number of cells of that category in the reference map. Allocation disagreement (AD), or location error, occurs when the location of a class in the simulated map differs from the location of that class in the reference map.21

2.6.1.

Disagreement Components

Table 1

Format of estimated population matrix (adapted from Ref. 21).


In reference to Table 1, J refers to the number of categories, which equals the number of strata in a typical stratified sampling design. Each category in the comparison map is indexed by i, which ranges from 1 to J. The number of pixels in each stratum is denoted by N_i. Each observation is recorded based on its category in the comparison map (i) and the reference map (j). The number of such observations is summed as the entry n_ij in row i and column j of the contingency matrix. The proportion of the study area (p_ij) in category i in the simulated map and category j in the observed map is estimated by the following equation22,21:

Eq. (2)

p_{ij} = \left(\frac{n_{ij}}{\sum_{j=1}^{J} n_{ij}}\right)\left(\frac{N_i}{\sum_{i=1}^{J} N_i}\right).

QD (qg) for an arbitrary category g is calculated as follows:

Eq. (3)

q_g = \left|\left(\sum_{i=1}^{J} p_{ig}\right) - \left(\sum_{j=1}^{J} p_{gj}\right)\right|.

Overall QD, which incorporates all J categories, is calculated as follows:

Eq. (4)

\mathrm{QD} = \frac{\sum_{g=1}^{J} q_g}{2}.

Calculation of AD (a_g) for an arbitrary category g is shown in Eq. (5). The first argument within the minimum function is the omission of category g, while the second argument is the commission of category g.

Eq. (5)

a_g = 2\min\left[\left(\sum_{i=1}^{J} p_{ig}\right) - p_{gg},\ \left(\sum_{j=1}^{J} p_{gj}\right) - p_{gg}\right].

Overall AD is calculated as follows:

Eq. (6)

\mathrm{AD} = \frac{\sum_{g=1}^{J} a_g}{2}.

Proportion of agreement (C) is calculated as follows:

Eq. (7)

C = \sum_{j=1}^{J} p_{jj}.

Total disagreement (D), the sum of overall quantity of disagreement and overall allocation of disagreement, is computed as follows:

Eq. (8)

D = 1 - C = \mathrm{QD} + \mathrm{AD}.
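
Equations (2) through (8) combine into one short routine. The sketch below (Python/NumPy, with a hypothetical 2x2 population matrix) computes QD, AD, and C from a matrix of proportions p and confirms that D = 1 - C = QD + AD:

```python
import numpy as np

def disagreement_components(p):
    """Quantity and allocation disagreement (Pontius and Millones) from a
    population matrix p, where p[i, j] is the proportion of the study area
    mapped as category i in the comparison map and j in the reference map."""
    p = np.asarray(p, float)
    row = p.sum(axis=1)                     # comparison-map totals per category
    col = p.sum(axis=0)                     # reference-map totals per category
    q = np.abs(col - row)                   # q_g, Eq. (3)
    a = 2 * np.minimum(col - np.diag(p),    # omission of g
                       row - np.diag(p))    # commission of g, Eq. (5)
    QD, AD = q.sum() / 2, a.sum() / 2       # Eqs. (4) and (6)
    C = np.trace(p)                         # proportion of agreement, Eq. (7)
    return QD, AD, C

p = [[0.5, 0.1],
     [0.0, 0.4]]                            # hypothetical proportions, sum = 1
QD, AD, C = disagreement_components(p)
print(QD, AD, 1 - C)                        # D = 1 - C = QD + AD, Eq. (8)
```

In this example the marginals differ (0.6 vs. 0.5 for the first category), so all of the disagreement is quantity disagreement; swapping off-diagonal mass symmetrically would instead produce pure allocation disagreement.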

3.

Results and Discussion

Figure 3 illustrates the maps classified using MLC and SL30-NN5 (object-based classifier). Table 2 gives the disagreement components, calculated for each land-use category and the total landscape, based on MLC and object-based classification. QD and AD in the total landscape using MLC were 11.66% and 22.38%, respectively. In comparison with the object-based image classifiers, MLC resulted in the lowest QD. Nevertheless, owing to its highest AD, MLC resulted in a higher total disagreement in the total landscape. The ratios of QD to areal proportion (AP) and of AD to AP for each land-use category give better insight into the contribution of each unit of land area to error production. From Table 2, paddy, oil palm, and grassland yielded the highest QD/AP using the ML classifier, which indicates the lowest accuracy in terms of the quantity of classified pixels. Scrubland, orchard, and oil palm yielded the highest AD/AP, which indicates the lowest accuracy in terms of the location of classified pixels. Among paired land-use categories, orchard/oil palm showed the lowest spectral separability, with a Jeffries–Matusita distance of 0.1. As such, it would be challenging to discriminate between orchard and oil palm stands using MLC. The spectral separability between scrubland and paddy was also comparatively low, i.e., 0.7. Paired land-use categories with low spectral separability can be expected to demonstrate higher QDs and ADs.

Fig. 3

Maps classified using (a) maximum likelihood classifier (MLC) and (b) SL30-NN5.


Table 2

Accuracy assessments of different land-use categories derived using maximum likelihood classifier (MLC) and object-based classification.

Each cell gives QD (%) / QD/AP / AD (%) / AD/AP.

| Land use | MLC | SL10-NN3 | SL10-NN5 | SL10-NN7 | SL30-NN3 |
|---|---|---|---|---|---|
| Scrubland | 0.10 / 0.03 / 3.59 / 1.11 | 1.46 / 0.45 / 1.14 / 0.35 | 1.90 / 0.59 / 0.71 / 0.22 | 1.95 / 0.60 / 0.64 / 0.20 | 1.73 / 0.53 / 1.01 / 0.31 |
| Water bodies | 0.58 / 0.23 / 0.11 / 0.05 | 0.17 / 0.07 / 0.24 / 0.10 | 0.27 / 0.11 / 0.14 / 0.05 | 0.27 / 0.11 / 0.10 / 0.04 | 0.16 / 0.07 / 0.30 / 0.12 |
| Orchard | 0.10 / 0.03 / 4.73 / 1.33 | 0.01 / 0.00 / 4.59 / 1.29 | 1.32 / 0.37 / 2.58 / 0.72 | 1.61 / 0.45 / 2.17 / 0.61 | 0.04 / 0.01 / 4.39 / 1.23 |
| Urbanized/residential area | 0.78 / 0.13 / 2.22 / 0.36 | 1.24 / 0.20 / 1.14 / 0.18 | 1.44 / 0.23 / 0.96 / 0.16 | 1.52 / 0.25 / 0.93 / 0.15 | 1.58 / 0.26 / 1.09 / 0.18 |
| Rubber | 7.10 / 0.36 / 16.57 / 0.85 | 9.43 / 0.48 / 9.10 / 0.46 | 10.75 / 0.55 / 7.53 / 0.38 | 12.33 / 0.63 / 5.74 / 0.29 | 8.59 / 0.44 / 5.33 / 0.27 |
| Forest | 10.20 / 0.17 / 14.47 / 0.24 | 10.71 / 0.18 / 10.12 / 0.17 | 13.68 / 0.23 / 7.20 / 0.12 | 15.59 / 0.26 / 5.41 / 0.09 | 9.54 / 0.16 / 6.52 / 0.11 |
| Cleared lands | 0.17 / 0.05 / 1.43 / 0.41 | 0.16 / 0.05 / 0.93 / 0.27 | 0.19 / 0.06 / 0.88 / 0.25 | 0.20 / 0.06 / 0.86 / 0.25 | 0.61 / 0.18 / 1.00 / 0.29 |
| Grassland | 1.37 / 1.27 / 0.43 / 0.40 | 0.36 / 0.34 / 0.35 / 0.32 | 0.34 / 0.31 / 0.40 / 0.37 | 0.33 / 0.31 / 0.40 / 0.37 | 0.49 / 0.46 / 0.21 / 0.20 |
| Oil palm | 1.56 / 2.85 / 0.91 / 1.66 | 0.44 / 0.79 / 0.13 / 0.23 | 0.44 / 0.81 / 0.15 / 0.27 | 0.49 / 0.89 / 0.08 / 0.15 | 0.43 / 0.78 / 0.15 / 0.27 |
| Paddy | 1.36 / 3.11 / 0.29 / 0.67 | 0.26 / 0.59 / 0.16 / 0.36 | 0.30 / 0.69 / 0.13 / 0.29 | 0.33 / 0.75 / 0.09 / 0.22 | 0.29 / 0.65 / 0.09 / 0.20 |
| Total (QD / AD) | 11.66 / 22.38 | 12.12 / 13.94 | 15.31 / 10.33 | 17.30 / 8.22 | 11.73 / 10.05 |

| Land use | SL30-NN5 | SL30-NN7 | SL50-NN3 | SL50-NN5 | SL50-NN7 |
|---|---|---|---|---|---|
| Scrubland | 2.34 / 0.72 / 0.40 / 0.12 | 2.44 / 0.75 / 0.32 / 0.10 | 1.62 / 0.50 / 1.39 / 0.43 | 2.49 / 0.77 / 0.34 / 0.10 | 2.47 / 0.76 / 0.56 / 0.17 |
| Water bodies | 0.24 / 0.10 / 0.17 / 0.07 | 0.30 / 0.12 / 0.11 / 0.04 | 1.67 / 0.68 / 0.63 / 0.25 | 0.33 / 0.14 / 0.09 / 0.04 | 0.19 / 0.08 / 0.40 / 0.16 |
| Orchard | 1.18 / 0.33 / 2.70 / 0.76 | 1.46 / 0.41 / 2.34 / 0.66 | 4.90 / 1.38 / 4.18 / 1.17 | 5.97 / 1.68 / 4.47 / 1.26 | 5.90 / 1.66 / 4.84 / 1.36 |
| Urbanized/residential area | 1.77 / 0.29 / 0.93 / 0.15 | 1.88 / 0.31 / 0.83 / 0.14 | 4.08 / 0.66 / 0.95 / 0.15 | 4.63 / 0.75 / 0.81 / 0.13 | 4.80 / 0.78 / 0.73 / 0.12 |
| Rubber | 10.12 / 0.52 / 3.45 / 0.18 | 11.35 / 0.58 / 2.17 / 0.11 | 17.27 / 0.88 / 0.67 / 0.03 | 17.53 / 0.89 / 0.87 / 0.04 | 17.60 / 0.90 / 0.71 / 0.04 |
| Forest | 12.65 / 0.21 / 3.44 / 0.06 | 14.23 / 0.24 / 1.96 / 0.03 | 4.99 / 0.08 / 7.88 / 0.13 | 5.76 / 0.10 / 6.71 / 0.11 | 10.61 / 0.18 / 2.95 / 0.05 |
| Cleared lands | 0.58 / 0.17 / 0.90 / 0.26 | 0.57 / 0.17 / 0.90 / 0.26 | 2.26 / 0.65 / 0.95 / 0.27 | 0.69 / 0.20 / 0.92 / 0.27 | 0.42 / 0.12 / 0.85 / 0.25 |
| Grassland | 0.33 / 0.31 / 0.47 / 0.44 | 0.28 / 0.26 / 0.58 / 0.54 | 1.70 / 1.58 / 1.46 / 1.36 | 4.04 / 3.76 / 1.77 / 1.65 | 0.74 / 0.69 / 0.09 / 0.09 |
| Oil palm | 0.44 / 0.80 / 0.14 / 0.25 | 0.48 / 0.88 / 0.07 / 0.13 | 0.34 / 0.63 / 0.18 / 0.34 | 0.35 / 0.64 / 0.27 / 0.50 | 0.33 / 0.59 / 0.30 / 0.55 |
| Paddy | 0.34 / 0.77 / 0.08 / 0.18 | 0.37 / 0.84 / 0.05 / 0.11 | 0.36 / 0.82 / 0.04 / 0.08 | 0.40 / 0.91 / 0.02 / 0.05 | 0.41 / 0.93 / 0.01 / 0.03 |
| Total (QD / AD) | 15.00 / 6.33 | 16.68 / 4.66 | 19.59 / 9.16 | 21.10 / 8.13 | 21.73 / 5.73 |

Among the object-based image classifiers, SL30-NN5 showed the highest accuracy, with a QD of 15% and an AD of 6.33% in the total landscape (Table 2). Using SL30-NN5, oil palm, paddy, and scrubland yielded the highest QD/AP ratios, i.e., 0.80, 0.77, and 0.72, respectively. Orchard and grassland, with AD/AP ratios of 0.76 and 0.44, respectively, yielded the highest allocation error.

Even with SL30-NN5, spatial and/or spectral similarity on the image made it difficult to accurately discriminate between rubber and forest, orchard and oil palm, and paddy and scrubland (Table 3). These results are supported by the Jeffries–Matusita distance values (Fig. 4). As indicated in Table 2, MLC produced about 250% more allocation error than SL30-NN5. However, MLC showed a 22% improvement in quantity accuracy as compared with SL30-NN5.
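
The 250% and 22% figures follow from the Table 2 totals (MLC: QD 11.66%, AD 22.38%; SL30-NN5: QD 15.00%, AD 6.33%), assuming they were computed as relative differences:

```python
# Table 2 totals, percent of the total landscape
mlc_qd, mlc_ad = 11.66, 22.38   # maximum likelihood classifier
obj_qd, obj_ad = 15.00, 6.33    # SL30-NN5 object-based classifier

# Allocation: MLC's AD relative to SL30-NN5's AD (~254%, reported as ~250%)
ad_excess = (mlc_ad - obj_ad) / obj_ad * 100

# Quantity: MLC's QD improvement relative to SL30-NN5's QD (~22%)
qd_improvement = (obj_qd - mlc_qd) / obj_qd * 100

print(round(ad_excess), round(qd_improvement))  # prints 254 22
```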

Table 3

Spatial, spectral, and textural properties used in object-based image classification (extracted from the SL30-NN5-classified image).

| Class | Convexity | Roundness | Elongation | Rect_Fit | Avgband_1 | Avgband_3 | Avgband_4 | Stdband_1 | Stdband_3 | Stdband_4 | Tx_Range | BandRatio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Scrubland | 1.04 | 0.61 | 1.60 | 0.69 | 101.71 | 135.99 | 105.84 | 6.04 | 7.97 | 6.24 | 32.16 | 0.12 |
| Water bodies | 1.08 | 0.54 | 1.63 | 0.63 | 82.03 | 95.16 | 92.41 | 7.74 | 11.16 | 8.64 | 33.38 | 0.01 |
| Orchard | 1.08 | 0.57 | 1.59 | 0.65 | 90.01 | 132.98 | 91.74 | 8.03 | 11.31 | 8.87 | 34.72 | 0.18 |
| Urbanized/residential area | 1.13 | 0.56 | 1.61 | 0.65 | 114.49 | 139.36 | 112.09 | 15.17 | 16.86 | 15.42 | 51.15 | 0.11 |
| Rubber | 1.05 | 0.66 | 1.57 | 0.74 | 67.41 | 105.91 | 71.73 | 3.56 | 5.75 | 4.13 | 23.68 | 0.19 |
| Forest | 1.07 | 0.64 | 1.63 | 0.74 | 75.62 | 113.66 | 77.60 | 4.76 | 7.71 | 5.25 | 27.14 | 0.18 |
| Cleared lands | 1.10 | 0.54 | 1.66 | 0.64 | 168.51 | 176.00 | 157.55 | 21.71 | 20.96 | 21.50 | 71.55 | 0.06 |
| Grassland | 1.08 | 0.58 | 1.59 | 0.66 | 101.56 | 169.19 | 103.76 | 5.68 | 9.01 | 6.28 | 28.15 | 0.24 |
| Oil palm | 1.03 | 0.62 | 1.60 | 0.70 | 99.20 | 146.48 | 98.42 | 7.61 | 10.02 | 8.17 | 39.57 | 0.20 |
| Paddy | 1.04 | 0.54 | 1.71 | 0.67 | 144.56 | 158.33 | 139.10 | 7.83 | 8.75 | 7.56 | 38.37 | 0.06 |

Notes: Rect_Fit: a shape measure that indicates how well the shape is described by a rectangle; Avgband_x: average value of pixels comprising the region in band x; Stdband_x: standard deviation of pixels comprising the region in band x; Tx_Range: average data range of pixels comprising the region inside the kernel; BandRatio: (B4 − B3)/(B4 + B3 + eps).

Fig. 4

Pair separation based on the Jeffries–Matusita distance.


Results suggest that object-based classification, in comparison with pixel-based classification, offered a more realistic and accurate land-use map. This finding is in conformity with previous reports documented by Wang et al.,5 Yan et al.,13 Chen et al.,37 Gao et al.,38 and Myint et al.18

Despite the higher capability of the object-oriented approach in image classification, differences in execution time between pixel- and object-based image analysis remain an issue, especially for large areas.3 Future development of more quantitative methods for selecting optimal image-segmentation parameters, especially the SL, as demonstrated by Costa et al.39 and Drăguţ et al.,40 will hopefully reduce the time required for object-oriented classification.3

This work demonstrated the utility of disagreement components in validating land-use classification approaches, as confirmed by Memarian et al.22 and Pontius and Millones.21

Based on the results obtained in this study and previous investigations on object-based image classification reported by Yu et al.,15 Platt and Rapoza,16 and Duro et al.,3 the following refinements are recommended for future work in obtaining a more precise land-use map:

  • 1. Use of a robust tool such as a support vector machine for object-oriented classification, together with an optimization algorithm (e.g., interval-based attribute ranking) for advanced attribute selection in object-based classification. Expert classifiers, such as artificial neural networks, have greater training capability than conventional classifiers.

  • 2. Use of ancillary data such as a digital elevation model, slope, and the normalized difference vegetation index in object-oriented classification. Ancillary data act as attributes added to the current characteristics, enabling a more sophisticated classification.

4.

Conclusion

In comparison with object-based image classification, the MLC resulted in a higher total disagreement in the total landscape. Image classification employing the MLC yielded a high ratio of QD to AP in land-use categories such as paddy, oil palm, and grassland, and consequently low accuracy in terms of the quantity of classified pixels. Meanwhile, categories such as scrubland, orchard, and oil palm, which showed a high ratio of AD to AP, registered low accuracy in terms of the location of classified pixels. These results were supported by the low separation distance between paired classes. The object-based image classifier with an SL of 30 and a K-value of 5 (SL30-NN5) showed the highest classification accuracy. Using SL30-NN5, oil palm, paddy, and scrubland yielded high QD/AP values, while orchard and grassland showed the highest allocation error. Nevertheless, SL30-NN5 resulted in spatial and/or spectral similarity that caused difficulty in discriminating between rubber and forest, orchard and oil palm, and paddy and scrubland. Evidently, MLC produced about 250% more allocation error than SL30-NN5. However, MLC showed a 22% improvement in quantity accuracy as compared with SL30-NN5.

This work has demonstrated higher performance and utility of object-based classification over the traditional pixel-based classification in a tropical landscape, i.e., Malaysia’s Langat basin.

Acknowledgments

The authors gratefully acknowledge Universiti Putra Malaysia for procuring land-use maps and satellite imagery and Mr. Hamdan Md Ali (ICT Unit, Faculty of Agriculture, Universiti Putra Malaysia) for hardware and software assistance.

References

1. 

J. R. Anderson et al., “A land use and land cover classification system for use with remote sensor data,” http://landcover.usgs.gov/pdf/anderson.pdf (accessed October 2012). Google Scholar

2. 

T. Blaschke, “Object based image analysis for remote sensing,” ISPRS J. Photogramm. Rem. Sens., 65 (1), 2–16 (2010). http://dx.doi.org/10.1016/j.isprsjprs.2009.06.004 IRSEE9 0924-2716 Google Scholar

3. 

D. C. Duro, S. E. Franklin, and M. G. Dube, “A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery,” Rem. Sens. Environ., 118, 259–272 (2012). http://dx.doi.org/10.1016/j.rse.2011.11.020 RSEEA7 0034-4257 Google Scholar

4. 

A. Singh, “Change detection in the tropical forest environment of northeastern India using Landsat,” in Remote Sensing and Land Management, M. J. Eden and J. T. Parry, Eds., 237–253, John Wiley & Sons, London (1986). Google Scholar

5. 

Z. Wang et al., “Object-oriented classification and application in land use classification using SPOT-5 PAN imagery,” in Geosci. Rem. Sens. Symp., 3158–3160 (2004). Google Scholar

6. 

J. Jensen, Introductory Digital Image Processing: A Remote Sensing Perspective, 3rd ed., Prentice Hall, Upper Saddle River, NJ (2005). Google Scholar

7. 

B. W. Szuster, Q. Chen, and M. Borger, “A comparison of classification techniques to support land cover and land use analysis in tropical coastal zones,” Appl. Geogr., 31 (2), 525–532 (2011). Google Scholar

8. 

J. A. Richards and X. Jia, Remote Sensing Digital Image Analysis, 4th ed., Springer-Verlag, Berlin, Heidelberg (2006). Google Scholar

9. 

J. D. Paola and R. A. Schowengerdt, “A detailed comparison of backpropagation neural network and maximum-likelihood classifiers for urban land use classification,” IEEE Trans. Geosci. Rem. Sens., 33 (4), 981–996 (1995). http://dx.doi.org/10.1109/36.406684 IGRSD2 0196-2892 Google Scholar

10. 

F. S. Erbek, C. Özkan, and M. Taberner, “Comparison of maximum likelihood classification method with supervised artificial neural network algorithms for land use activities,” Int. J. Rem. Sens., 25 (9), 1733–1748 (2004). http://dx.doi.org/10.1080/0143116031000150077 IJSEDK 0143-1161 Google Scholar

11. 

A. Akbarpour, M. B. Sharifi, and H. Memarian, “The comparison of fuzzy and maximum likelihood methods in preparing land use layer using ETM+ data (Case study: Kameh watershed),” Iran. J. Range Desert Res., 15 (3), 304–319 (2006). Google Scholar

12. 

J. Li, X. Li, and J. Chen, “The study of object-oriented classification method of remote sensing image,” in Proc. 1st Int. Conf. Information Science and Engineering (ICISE2009), 1495–1498 (2009). Google Scholar

13. 

G. Yan et al., “Comparison of pixel-based and object-oriented image classification approaches—a case study in a coal fire area, Wuda, Inner Mongolia, China,” Int. J. Rem. Sens., 27 (18), 4039–4055 (2006). http://dx.doi.org/10.1080/01431160600702632 IJSEDK 0143-1161 Google Scholar

14. 

ITT Visual Information Solutions, ENVI Help System, USA (2010). Google Scholar

15. 

Q. Yu et al., “Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery,” Photogramm. Eng. Rem. Sens., 72 (7), 799–811 (2006). Google Scholar

16. 

R. V. Platt and L. Rapoza, “An evaluation of an object-oriented paradigm for land use/land cover classification,” Prof. Geogr., 60 (1), 87–100 (2008). http://dx.doi.org/10.1080/00330120701724152 0033-0124 Google Scholar

17. 

I. J. Castillejo-González et al., “Object- and pixel-based analysis for mapping crops and their agro-environmental associated measures using QuickBird imagery,” Comput. Electron. Agric., 68 (2), 207–215 (2009). http://dx.doi.org/10.1016/j.compag.2009.06.004 CEAGE6 0168-1699 Google Scholar

18. 

S. W. Myint et al., “Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery,” Rem. Sens. Environ., 115 (5), 1145–1161 (2011). http://dx.doi.org/10.1016/j.rse.2010.12.017 RSEEA7 0034-4257 Google Scholar

19. 

J. Cohen, “A coefficient of agreement for nominal scales,” Educ. Psychol. Meas., 20 (1), 37–46 (1960). http://dx.doi.org/10.1177/001316446002000104 EPMEAJ 0013-1644 Google Scholar

20. 

R. G. Pontius Jr., “Quantification error versus location error in comparison of categorical maps,” Photogramm. Eng. Rem. Sens., 66 (8), 1011–1016 (2000). Google Scholar

21. 

R. G. Pontius Jr. and M. Millones, “Death to kappa: birth of quantity disagreement and allocation disagreement for accuracy assessment,” Int. J. Rem. Sens., 32 (15), 4407–4429 (2011). http://dx.doi.org/10.1080/01431161.2011.552923 IJSEDK 0143-1161 Google Scholar

22. 

H. Memarian et al., “Validation of CA-Markov for simulation of land use and cover change in the Langat Basin, Malaysia,” J. Geogr. Inf. Syst., 4 (6), 542–554 (2012). http://dx.doi.org/10.4236/jgis.2012.46059 IJGSE3 0269-3798 Google Scholar

23. 

H. Memarian et al., “Hydrologic analysis of a tropical watershed using KINEROS2,” Environ. Asia, 5 (1), 84–93 (2012). Google Scholar

24. 

H. Memarian et al., “Trend analysis of water discharge and sediment load during the past three decades of development in the Langat Basin, Malaysia,” Hydrol. Sci. J., 57 (6), 1207–1222 (2012). http://dx.doi.org/10.1080/02626667.2012.695073 HSJODN 0262-6667 Google Scholar

25. 

H. Memarian et al., “KINEROS2 application for LUCC impact analysis at the Hulu Langat Basin, Malaysia,” Water Environ. J. (2012). http://dx.doi.org/10.1111/wej.12002 WEJAAB 1747-6585 Google Scholar

26. 

R. A. Schowengerdt, Remote Sensing Models and Methods for Image Processing, 3rd ed., Elsevier, Oxford, UK (2007). Google Scholar

27. 

J. Yang and Y. Wang, “Classification of 10 m-resolution SPOT data using a combined Bayesian network classifier-shape adaptive neighborhood method,” ISPRS J. Photogramm. Rem. Sens., 72, 36–45 (2012). http://dx.doi.org/10.1016/j.isprsjprs.2012.05.011 IRSEE9 0924-2716 Google Scholar

28. 

C. A. Laben and B. V. Brower, “Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening,” Eastman Kodak Company, Rochester, New York, US Patent 6011875 (2000).

29. 

P. S. Chavez, “An improved dark-object subtraction technique for atmospheric scattering correction of multi-spectral data,” Rem. Sens. Environ., 24 (3), 459 –479 (1988). http://dx.doi.org/10.1016/0034-4257(88)90019-3 RSEEA7 0034-4257 Google Scholar

30. 

G. Rees, The Remote Sensing Data Book, Cambridge University Press, Cambridge (1999). Google Scholar

31. 

L. Durieux, E. Lagabrielle, and A. Nelson, “A method for monitoring building construction in urban sprawl areas using object-based analysis of Spot 5 images and existing GIS data,” ISPRS J. Photogramm. Rem. Sens., 63 (4), 399–408 (2008). http://dx.doi.org/10.1016/j.isprsjprs.2008.01.005 IRSEE9 0924-2716 Google Scholar

32. 

R. Mathieu, J. Aryal, and A. K. Chong, “Object-based classification of Ikonos imagery for mapping large-scale vegetation communities in urban areas,” Sensors, 7 (11), 2860–2880 (2007). http://dx.doi.org/10.3390/s7112860 SNSRES 0746-9462 Google Scholar

33. 

J. Kim, B. Kim, and S. Savarese, “Comparison of image classification methods: K-nearest neighbor and support vector machines,” in Proc. 6th WSEAS Int. Conf. Circuits, Systems, Signal and Telecommunications, 133–138 (2012). Google Scholar

34. 

T. Cover and P. Hart, “Nearest-neighbor pattern classification,” IEEE Trans. Inf. Theory, 13 (1), 21–27 (1967). Google Scholar

35. 

R. G. Pontius Jr. et al., “Comparing the input, output, and validation maps for several models of land change,” Ann. Reg. Sci., 42 (1), 11–37 (2008). http://dx.doi.org/10.1007/s00168-007-0138-2 0570-1864 Google Scholar

36. 

R. G. Pontius Jr., S. Peethambaram, and J. C. Castella, “Comparison of three maps at multiple resolutions: a case study of land change simulation in Cho Don District, Vietnam,” Ann. Assoc. Am. Geogr., 101 (1), 45–62 (2011). http://dx.doi.org/10.1080/00045608.2010.517742 AAAGAK 0004-5608 Google Scholar

37. 

M. Chen et al., “Comparison of pixel-based and object-oriented knowledge-based classification methods using SPOT5 imagery,” in Proc. WSEAS Transactions on Information Science and Applications, 477–489 (2009). Google Scholar

38. 

Y. Gao, J. F. Mas, and A. Navarrete, “The improvement of an object-oriented classification using multi-temporal MODIS EVI satellite data,” Int. J. Digit. Earth, 2 (3), 219–236 (2009). http://dx.doi.org/10.1080/17538940902818311 Google Scholar

39. 

P. G. A. O. Costa et al., “Genetic adaptation of segmentation parameters,” in Object-Based Image Analysis, 679–695, Springer, Berlin, Heidelberg (2008). Google Scholar

40. 

L. Drăguţ, D. Tiede, and S. R. Levick, “ESP: a tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data,” Int. J. Geogr. Inf. Sci., 24 (6), 859–871 (2010). http://dx.doi.org/10.1080/13658810903174803 1365-8816 Google Scholar
CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Hadi Memarian, Siva K. Balasundram, and Raj Khosla "Comparison between pixel- and object-based image classification of a tropical landscape using Système Pour l’Observation de la Terre-5 imagery," Journal of Applied Remote Sensing 7(1), 073512 (28 August 2013). https://doi.org/10.1117/1.JRS.7.073512
Published: 28 August 2013