Decomposition of mixed pixels in MODIS data using Bernstein basis functions
Yi Qin, Feng Guo, Yupeng Ren, Xin Wang, Juan Gu, Jingyu Ma, Lejun Zou, Xiaohua Shen
18 November 2019
Abstract

The decomposition of mixed pixels in Moderate Resolution Imaging Spectroradiometer (MODIS) images is essential for the application of MODIS data in many fields. Many existing methods for unmixing mixed pixels use principal component analysis to reduce the dimensionality of the image data and require the extraction of endmember spectra. We propose the pixel spectral unmixing index (PSUI) method for unmixing mixed pixels in MODIS images. In this method, a set of third-order Bernstein basis functions is applied to reduce the dimensionality of the image data and characterize the spectral curves of the mixed pixels in a MODIS image; the derived PSUIs (i.e., the coefficients of the basis functions) are then calibrated against the abundance values of the ground features from Landsat Enhanced Thematic Mapper Plus (ETM+)/Operational Land Imager (OLI) classification images corresponding to the date and region of the MODIS image. The proposed method was tested on MODIS and ETM+/OLI images and obtained satisfactory unmixing results. We compared the PSUI method with conventional methods, including the pixel purity index, the N-finder algorithm, the sequential maximum angle convex cone, and vertex component analysis, and found that the PSUI method outperformed all four.

1. Introduction

The Moderate Resolution Imaging Spectroradiometer (MODIS), like the hyperspectral sensors developed after it, represents a great breakthrough in spectral channel settings compared with earlier remote sensors. A MODIS image has 36 discrete channels, including 20 reflective spectral channels, and each pixel acquires many bands of light intensity data from the spectrum instead of just the three bands of the RGB color model. This makes it possible to accurately depict the spectral characteristics of typical ground features using not only the wavelengths, ranges, and intensities of the peaks and valleys but also the integral area enclosed by the spectral reflectance curves of the ground features and the x-axis (in Cartesian coordinates). MODIS views the globe once or twice per day at a coarse resolution of 250 to 1000 m. However, the spatial resolution of MODIS images is not high enough to clearly distinguish different ground features. In many cases, a MODIS pixel is a mixed pixel covered by multiple land cover types, which has a significant influence on the information that can be derived.1,2 Thus, the decomposition of mixed pixels in MODIS images is critically important for the application of MODIS data in many fields, such as mapping land cover distributions,3 evaluating vegetation/soil fractional cover,4–6 monitoring and evaluating karst rocky desertification,7 flood mapping,8,9 and retrieving fire temperature and area.10

The spectral characteristics of ground features are the basis not only for identifying them in remote sensing images but also for decomposing mixed pixels in images. The decomposition of mixed pixels is generally based on a linear spectral mixture model (LSMM) or a nonlinear spectral mixture model (NLSMM).11 Although the NLSMM is more applicable when the multiple scattering among distinct endmembers is not negligible,12 such as in intimate mineral mixtures and vegetation canopies,13 the LSMM is a mature and more widely used technique than the NLSMM.14,15

To apply existing methods for decomposing mixed pixels, the endmembers must be obtained. Endmember extraction is the process of selecting a collection of pure signature spectra of the ground features present in a remote sensing image.16–18 The corresponding abundance of each endmember is usually estimated using the fully constrained least squares (FCLS) method based on the LSMM.19 Endmember extraction is generally performed in two ways: (1) deriving endmembers directly from the remote sensing images, referred to as image endmember analysis;1 or (2) taking them from a spectral library that contains the spectra of known target features measured in the field or laboratory, referred to as library endmember analysis.20 Because of factors such as atmospheric interaction, sensor peculiarities, and noise, image endmember analysis is currently the most widely used. Two major approaches are used to extract endmembers based on the LSMM. One approach uses geometrical methods, including the pixel purity index (PPI),21 the N-finder algorithm (N-FINDR),22 the sequential maximum angle convex cone (SMACC),23 and vertex component analysis (VCA),24 of which the PPI and SMACC methods are widely used for decomposing mixed pixels in remote sensing images due to their public availability in the Environment for Visualizing Images (ENVI) software.25 The other approach uses statistical methods, such as independent component analysis.26
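As a brief illustration of the FCLS abundance estimation mentioned above, the following minimal Python sketch (our own illustration, not code from the cited works) unmixes one pixel under the LSMM with the nonnegativity and sum-to-one constraints. The sum-to-one constraint is enforced softly by appending a heavily weighted row, a common reformulation that reduces FCLS to plain nonnegative least squares.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E: np.ndarray, x: np.ndarray, delta: float = 1e3) -> np.ndarray:
    """Approximate FCLS unmixing of one pixel spectrum x (B bands) against an
    endmember matrix E (B x k): solve x ~ E a with a >= 0 and sum(a) = 1.
    `delta` weights the soft sum-to-one constraint (larger = stricter)."""
    _, k = E.shape
    A = np.vstack([E, delta * np.ones((1, k))])  # append sum-to-one row
    b = np.append(x, delta)                      # matching target value
    a, _ = nnls(A, b)                            # nonnegative least squares
    return a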

It is usually difficult to acquire pure pixels in a MODIS image because of its spatial resolution limit. Many researchers have suggested that there are no pure pixels in remote sensing images with low spatial resolution.17,27,28 Some authors have tried to use nonnegative matrix factorization (NMF) for hyperspectral data unmixing.29,30 Miao and Qi31 presented a minimum volume constrained NMF (MVC-NMF) method without the pure-pixel assumption for unsupervised endmember extraction from highly mixed image data.

The accuracy of the extracted endmembers has a great impact on the unmixing accuracy. To ensure unmixing accuracy, we therefore consider an unmixing method for MODIS data that does not resort to extracting endmember spectra.

Adjacent channels in multispectral/hyperspectral imagery are strongly correlated and often contain similar information, which produces redundancy in a multispectral/hyperspectral dataset.32,33 Thus, many conventional unmixing methods, e.g., PPI,21 the manual endmember selection tool,32 N-FINDR,22 spectral mixture analysis based on simulated annealing,34 VCA,24 the simplex growing algorithm,35 and the Gaussian elimination method,36 use statistical techniques such as principal component analysis (PCA) to reduce the dimensionality of the image data, both to save computational time and to improve the signal-to-noise ratio. A set of uncorrelated variables (principal components) is generated, and the components containing the most information from the original bands are selected to extract endmember spectra. Each endmember spectrum can then be constructed as a linear combination of the principal components.32 As a statistical technique, the PCA transformation is highly dependent on the numerical characteristics of the image. Hence, the principal components vary from image to image, and the difficulty of interpreting a priori the content of the principal components is an inherent problem of PCA.33,37

Basis functions, like principal components, are mutually independent, but they are purely analytic functions. In mathematics, a complex curve can be represented as a linear combination of a set of basis functions.38,39 Similarly, the spectral curve produced by mixing the spectra of more than one ground cover type can be represented as a linear combination of a set of basis functions, so basis functions can be employed to reduce the dimensionality of the image data and characterize the spectral curve of each pixel without redundant information. Comparing basis functions with the principal components generated by PCA: on one hand, the basis functions can depict each endmember spectrum with a linear combination just as the principal components do; on the other hand, the basis functions are invariant and independent of the image data. The coefficients of the basis functions for pixels in different images are therefore comparable, and the coefficients can be employed to depict the spectral curves of mixed pixels with various combinations of ground feature abundance fractions. Thus, to ensure unmixing accuracy, an unmixing method for MODIS data based on a set of basis functions, which does not resort to extracting endmember spectra, is proposed and tested in our study.

This study exploits a set of third-order Bernstein basis functions to construct the pixel spectral unmixing indexes (PSUIs), i.e., the coefficients of the basis functions, for a MODIS image without resorting to extracting endmember spectra. A higher spatial resolution image, such as a Landsat Enhanced Thematic Mapper Plus (ETM+)/Operational Land Imager (OLI) image from the same region and the same day as the MODIS image, is then utilized to calibrate these indexes, which creates a calibration model. The calibration model captures the relationship between the PSUIs and the component abundances and thus can be used for calculating the abundances of the components of mixed pixels in MODIS images. This method was tested on MODIS and ETM+/OLI images in different scenes or at different times and was compared with other methods, namely the PPI, N-FINDR, SMACC, and VCA.

2. Methodology

2.1. Bezier Curve and Bernstein Basis Functions

Given a set of control points $P_i$, $i = 0, 1, \ldots, n$, the $n$th-order Bezier curve is defined as

Eq. (1)

$$P(t) = \sum_{i=0}^{n} P_i B_{i,n}(t), \qquad t \in [0,1],$$
where $P_i$ is the $i$th control point, and $B_{i,n}(t)$ is known as the $n$th-order Bernstein basis function.40

The $n$th-order Bernstein basis functions are the expansion terms of the binomial expression $1 = [t + (1-t)]^n$ and are defined as

Eq. (2)

$$B_{i,n}(t) = \binom{n}{i} t^i (1-t)^{n-i}, \qquad t \in [0,1], \quad i = 0, 1, \ldots, n.$$

When $n = 3$, these are the Bernstein basis functions of order 3 (see Fig. 1), which may be written as

Eq. (3)

$$\begin{aligned}
B_{0,3}(t) &= \tfrac{3!}{0!\,(3-0)!}\, t^0 (1-t)^{3-0} = (1-t)^3,\\
B_{1,3}(t) &= \tfrac{3!}{1!\,(3-1)!}\, t^1 (1-t)^{3-1} = 3t(1-t)^2,\\
B_{2,3}(t) &= \tfrac{3!}{2!\,(3-2)!}\, t^2 (1-t)^{3-2} = 3t^2(1-t),\\
B_{3,3}(t) &= \tfrac{3!}{3!\,(3-3)!}\, t^3 (1-t)^{3-3} = t^3.
\end{aligned}$$

Fig. 1 Curves of the third-order Bernstein basis functions (see the text).

In a plane or in a higher-dimensional space, the explicit form of this cubic Bezier curve with four control points can be written as

Eq. (4)

$$P(t) = P_0 B_{0,3}(t) + P_1 B_{1,3}(t) + P_2 B_{2,3}(t) + P_3 B_{3,3}(t), \qquad t \in [0,1].$$
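As a quick numerical illustration of Eqs. (1) to (4), the sketch below (assuming NumPy; the function names are our own, not from the paper) evaluates the third-order Bernstein basis functions and a cubic Bezier curve, and checks the partition-of-unity property implied by Eq. (2).

```python
import numpy as np
from math import comb

def bernstein(i: int, n: int, t: np.ndarray) -> np.ndarray:
    """n'th-order Bernstein basis function B_{i,n}(t), Eq. (2)."""
    return comb(n, i) * t**i * (1.0 - t) ** (n - i)

def cubic_bezier(control_points: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Cubic Bezier curve P(t), Eq. (4); control_points has shape (4, d)."""
    basis = np.stack([bernstein(i, 3, t) for i in range(4)])  # shape (4, len(t))
    return basis.T @ control_points                           # shape (len(t), d)

t = np.linspace(0.0, 1.0, 101)
B = np.stack([bernstein(i, 3, t) for i in range(4)])
# Each B_{i,3} peaks at t = i/3, and the four functions sum to 1 everywhere.
assert np.allclose(B.sum(axis=0), 1.0)
```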

2.2. Calculation of Pixel Spectral Unmixing Indexes

2.2.1. Calculation principle

To calculate the PSUIs for a MODIS image, a set of samples needs to be taken from the image. Here, a MODIS image of the Pearl River Delta region of China was used [MOD021KM: level 1b calibrated, 1000×1000  m spatial resolution, date: 2001324, time: 03:10, derived from the Level-1 and Atmosphere Archive and Distribution System (LAADS) Distributed Active Archive Center (DAAC)]. Acquiring pure pixels containing only one ground object from a MODIS image is difficult because of its spatial resolution limit, but it is possible for each sampling pixel to be dominated by only one category of ground features. For the convenience of discussion, in this paper, such sampling points are called pseudo-MODIS pure pixels, and the ground features estimated from the pseudo-pure pixels are called quasiground features. There are four main kinds of spectral reflectance curves for quasiground features (e.g., water body, sediment-laden water, vegetation, and bare soil) derived from a MODIS image.

Figure 2 shows the spectral reflectance curves of the four types of quasiground features obtained from the sampling points for the above-mentioned MODIS image (a total of 280 samples, with each category accounting for a quarter of the total sampling points) and the spectral reflectance curve of a random mixed pixel in the MODIS image, where the original reflectance curve obtained from the MODIS data has been normalized with respect to the total area enclosed by the curve and the x-axis. Normalization offers the advantage that it can reduce statistical fluctuations without losing any information. Each curve in Fig. 2 contains 13-channel reflectance data points distributed over a wavelength range from 405 to 2155 nm. Channels 13 to 18 and 26 are not used in Fig. 2, because channels 13 to 16 and 26 are invalid on land and the wavelength ranges of channels 17 and 18 overlap with that of channel 19. According to the spectral reflectance curves of the quasiground features derived from MODIS data shown in Fig. 2(a), different quasiground features reach high reflectance in different channels, e.g., water body in the blue–green channels, sediment-laden water in the red channels, vegetation in the near-infrared channels with shorter wavelengths, and bare soil in the near-infrared channels with longer wavelengths. The peak feature of the spectral reflectance curves is important for identifying different ground features. On this basis, the reflectance data on each spectral curve can be divided into four groups [see Fig. 2(a)]: G0 at wavelengths from 405 to 565 nm, G1 at wavelengths from 620 to 876 nm, G2 at wavelengths from 915 to 1250 nm, and G3 at wavelengths from 1628 to 2155 nm. This way of grouping the reflectance data guarantees that the high reflectance of each quasiground feature appears in a different group. In addition, according to the property of Bernstein basis functions, $B_{i,n}(t)$ reaches a maximum when $t_i = i/n$, which means that the peaks of different basis functions appear at different $t$-values. Both the Bernstein basis function curves and the spectral curves of the ground features have evident peak features. Thus, the third-order Bernstein basis functions, with their four curves (Fig. 1), are used to characterize the spectral signatures of mixed pixels in MODIS data through their coefficients.

Fig. 2 (a) Spectral reflectance curves of four types of quasiground features derived from a MODIS image (280 samples, with each category accounting for a quarter of the total sampling points). G0, G1, G2, and G3 are groupings of the spectral reflectance data. (b) Spectral curve of a mixed pixel in a MODIS image. Gray-shaded areas labeled S0, S1, S2, and S3 show the spectral integral areas corresponding to the four groups for the mixed pixel. Red rectangles below the horizontal axis indicate the locations of the MODIS channels.

A cubic Bezier curve from a linear combination of the third-order Bernstein basis functions consists of innumerable data points, whereas a spectral reflectance curve derived from MODIS data consists of 13 data points. Consequently, the spectral reflectance curve of each mixed pixel should be mapped to a cubic Bezier curve before employing the third-order Bernstein basis functions to characterize the spectral curve with their coefficients. A cubic Bezier curve mapped to the mixed spectral curve can be expressed as

Eq. (5)

$$f:\; F(\lambda) \mapsto P(t),$$
where $F(\lambda)$ represents the spectral curve of a mixed pixel, and $P(t)$ represents the mapped cubic Bezier curve, which is determined by four control points.

Here, the spectral integral areas ($S_0$, $S_1$, $S_2$, and $S_3$), i.e., the areas enclosed by the spectral curve of each group and the x-axis [see Fig. 2(b)], and the $t$-values ($t_i = i/n$, $i = 0, 1, 2, 3$ and $n = 3$) are used together to generate four data points for the mapped cubic Bezier curve. The spectral integral area, which combines sequentially related channels, is employed to replace the single reflectance value. This is because data points generated from only 4 of the 15 valid channels of the MODIS sensor cannot fully reflect information about channel width and interrelation, whereas data points generated from the spectral integral areas can do so. Four control points can then be determined from these four data points, and thus the cubic Bezier curve is determined.

The components in the LSMM are endmembers with physical meaning, and the abundances are nonnegative. The components in Eq. (4) are the third-order Bernstein basis functions, namely $B_{0,3}(t)$, $B_{1,3}(t)$, $B_{2,3}(t)$, and $B_{3,3}(t)$, which have exact shapes. The coefficients in Eq. (4), namely $P_0$, $P_1$, $P_2$, and $P_3$, express the content of the four basis functions in the mixed spectrum and can be positive or negative. Because the geometric shapes of the four basis functions are invariant, $P_0$, $P_1$, $P_2$, and $P_3$ can objectively describe the complex spectral curves of mixed pixels in MODIS images. Here, these coefficients are called PSUIs.

2.2.2. Calculation process

There are four steps used to generate PSUIs (P0, P1, P2, and P3) for each mixed pixel in a MODIS image.

  • Step 1: Divide the spectral reflectance data of each pixel in the MODIS data into four groups (G0, G1, G2, and G3) according to the peak locations of the spectral curves of the four types of quasiground features [see Fig. 2(a)].

  • Step 2: Calculate the spectral integral areas corresponding to these four groups for each pixel as S0, S1, S2, and S3, respectively.

    Eq. (6)

    $$\begin{aligned}
    S_0 &= [(R_8+R_9)(\lambda_9-\lambda_8) + (R_9+R_3)(\lambda_3-\lambda_9) + (R_3+R_{10})(\lambda_{10}-\lambda_3)\\
        &\quad + (R_{10}+R_{11})(\lambda_{11}-\lambda_{10}) + (R_{11}+R_{12})(\lambda_{12}-\lambda_{11}) + (R_{12}+R_4)(\lambda_4-\lambda_{12})]/2,\\
    S_1 &= (R_1+R_2)(\lambda_2-\lambda_1)/2,\\
    S_2 &= (R_{19}+R_5)(\lambda_5-\lambda_{19})/2,\\
    S_3 &= (R_6+R_7)(\lambda_7-\lambda_6)/2,
    \end{aligned}$$
    where $R_i$ represents the reflectance (%) in the $i$th channel of a pixel, and $\lambda_i$ represents the central wavelength (nm) of the $i$th channel:

    Eq. (7)

    $$S = S_0 + S_1 + S_2 + S_3, \qquad S_0 = S_0/S, \quad S_1 = S_1/S, \quad S_2 = S_2/S, \quad S_3 = S_3/S.$$

    To reduce statistical fluctuations without losing any information, S0, S1, S2, and S3 are normalized to be dimensionless [Eq. (7)]. Hereinafter, S0, S1, S2, and S3 represent the normalized values of the spectral integral areas, respectively.

  • Step 3: According to the property of Bernstein basis functions that Bi,n(t) reaches a maximum when ti=i/n, and considering the importance of the peak feature of spectral curves, we set t0=0, t1=1/3, t2=2/3, and t3=1; then, four data points of a cubic Bezier curve are generated as (t0,S0), (t1,S1), (t2,S2), and (t3,S3).

  • Step 4: Substituting t=ti, P(t)=Si, i=0,1,2,3 into Eq. (4), we get

    Eq. (8)

    $$\begin{aligned}
    S_0 &= P_0 B_{0,3}(t_0) + P_1 B_{1,3}(t_0) + P_2 B_{2,3}(t_0) + P_3 B_{3,3}(t_0),\\
    S_1 &= P_0 B_{0,3}(t_1) + P_1 B_{1,3}(t_1) + P_2 B_{2,3}(t_1) + P_3 B_{3,3}(t_1),\\
    S_2 &= P_0 B_{0,3}(t_2) + P_1 B_{1,3}(t_2) + P_2 B_{2,3}(t_2) + P_3 B_{3,3}(t_2),\\
    S_3 &= P_0 B_{0,3}(t_3) + P_1 B_{1,3}(t_3) + P_2 B_{2,3}(t_3) + P_3 B_{3,3}(t_3).
    \end{aligned}$$

Solving Eq. (8) for P0, P1, P2, and P3, we have

Eq. (9)

$$\begin{aligned}
P_0 &= S_0,\\
P_1 &= (18 S_1 - 9 S_2 - 5 S_0 + 2 S_3)/6,\\
P_2 &= (18 S_2 - 9 S_1 - 5 S_3 + 2 S_0)/6,\\
P_3 &= S_3.
\end{aligned}$$
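Steps 1 to 4 can be condensed into a short routine. The sketch below is our own illustration: the channel grouping follows Eq. (6) and Fig. 2(a), while the central wavelengths are nominal MODIS band centers that may differ slightly from the values the authors used.

```python
import numpy as np

# Assumed central wavelengths (nm) of the MODIS channels used in Eq. (6);
# these are nominal band centers, given here only for illustration.
WAVELENGTH = {1: 645.0, 2: 858.5, 3: 469.0, 4: 555.0, 5: 1240.0,
              6: 1640.0, 7: 2130.0, 8: 412.5, 9: 443.0, 10: 488.0,
              11: 531.0, 12: 551.0, 19: 940.0}

# Channel order within each group, sorted by wavelength [Fig. 2(a)].
GROUPS = {0: [8, 9, 3, 10, 11, 12, 4],  # G0: 405-565 nm
          1: [1, 2],                    # G1: 620-876 nm
          2: [19, 5],                   # G2: 915-1250 nm
          3: [6, 7]}                    # G3: 1628-2155 nm

def psui(reflectance: dict) -> np.ndarray:
    """Compute (P0, P1, P2, P3) for one pixel.
    reflectance: {MODIS channel number: reflectance (%)}."""
    # Steps 1-2: trapezoidal spectral integral areas S0..S3, Eq. (6).
    S = np.empty(4)
    for g, chans in GROUPS.items():
        S[g] = sum((reflectance[a] + reflectance[b]) * (WAVELENGTH[b] - WAVELENGTH[a])
                   for a, b in zip(chans, chans[1:])) / 2.0
    S /= S.sum()  # normalization, Eq. (7)
    # Steps 3-4: closed-form solution of Eq. (8), i.e., Eq. (9).
    P0, P3 = S[0], S[3]
    P1 = (18 * S[1] - 9 * S[2] - 5 * S[0] + 2 * S[3]) / 6.0
    P2 = (18 * S[2] - 9 * S[1] - 5 * S[3] + 2 * S[0]) / 6.0
    return np.array([P0, P1, P2, P3])
```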

Figure 3 shows the flowchart for employing the third-order Bernstein basis functions to characterize the spectral signatures of mixed pixels in MODIS data by using their coefficients (PSUIs).

Fig. 3 Flowchart for characterizing the spectral signatures of mixed pixels in MODIS data by using cubic Bezier curves.

Here, a Terra MODIS image (date: 2001324, time: 03:10) of the Pearl River Delta region of China was taken as an illustrative example of decomposing mixed pixels. After preprocessing the MODIS image (e.g., geometric correction and cloud masking41), the PSUIs were obtained using Eq. (9) for the mixed pixels (Fig. 4). Figure 4(a) presents the pseudocolor image derived from channels 7, 2, and 1 of the MODIS data. Figure 4(b) shows the distribution of the normalized difference water index (NDWI),42 which is used to evaluate water distribution information in remote sensing applications,43,44 whereas the normalized difference vegetation index (NDVI) is usually used to evaluate green coverage and vegetation growth [Fig. 4(c)]. Figure 4(d) shows the distribution of the normalized difference soil index (NDSI),45 which is used to enhance soil information. The PSUIs, namely P0, P1, P2, and P3, are shown in Figs. 4(e)–4(h), respectively. The index P0, which mainly reflects the distribution information for the B0,3(t) function, can be used to identify the distribution of water, as NDWI does; the correlation coefficient between P0 and NDWI is 0.98. The index P2, which reflects the distribution information for the B2,3(t) function, may be used to estimate vegetation growth and to evaluate green coverage, as NDVI does; the correlation coefficient between P2 and NDVI is 0.94. The index P3, which reflects the distribution information for the B3,3(t) function, can be applied to estimate the distribution of bare soil or outcropped areas, as NDSI does; the correlation coefficient between P3 and NDSI is 0.98. The index P1, as the coefficient of the B1,3(t) function, correlates well with sediment-laden water and may have a potential application in estimating the sediment content of water. Figure 4 reveals that the B0,3(t), B1,3(t), B2,3(t), and B3,3(t) functions can reflect information about water body, sediment-laden water, vegetation, and bare soil, respectively, through their coefficients (P0, P1, P2, and P3). Thus, the third-order Bernstein basis functions can be employed to characterize the spectral curves of mixed pixels in MODIS data with physical meaning, which is an advantage over principal components.
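Correlations of this kind can be checked with a few lines of NumPy. In the sketch below, the band choices for NDWI, NDVI, and NDSI (MODIS bands 4, 1, 2, and 6 for green, red, NIR, and SWIR) are our assumption based on the standard definitions cited above; the paper does not list the exact bands it used.

```python
import numpy as np

def psui_index_correlations(psui_maps: dict, bands: dict) -> dict:
    """Pearson correlation between PSUI maps and classic spectral indexes,
    as reported for Fig. 4. `psui_maps` has keys 'P0', 'P2', 'P3';
    `bands` holds MODIS reflectance arrays keyed by channel number."""
    green, red, nir, swir = bands[4], bands[1], bands[2], bands[6]
    ndwi = (green - nir) / (green + nir)   # McFeeters (1996) water index
    ndvi = (nir - red) / (nir + red)       # standard vegetation index
    ndsi = (swir - nir) / (swir + nir)     # one common soil-index form (assumed)
    pairs = {'P0': ndwi, 'P2': ndvi, 'P3': ndsi}
    return {k: float(np.corrcoef(psui_maps[k].ravel(), v.ravel())[0, 1])
            for k, v in pairs.items()}
```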

Fig. 4 (a) Pseudocolor image derived from MODIS data of the Pearl River Delta region of China (date: 2001324, time: 03:10; RGB: bands 7, 2, and 1); (b) NDWI from the MODIS data; (c) NDVI from the MODIS data; and (d) NDSI from the MODIS data. PSUIs derived from the MODIS data: (e) P0, (f) P1, (g) P2, and (h) P3.

2.3. Abundance Calculation Based on the Calibration Model

The PSUIs P0, P1, P2, and P3, which are derived from a MODIS image by adopting Eq. (9), indicate the spectral signals from water body, sediment-laden water, vegetation, and bare soil, respectively. Because the PSUIs represent only the relative proportions of ground features in each mixed pixel of the MODIS image, they need to be calibrated against reference abundance values of the ground features obtained from high spatial resolution remote sensing images (e.g., a Landsat ETM+ or QuickBird image) using a least squares method, which creates a calibration model for calculating the abundances of the components of every mixed pixel in MODIS images. Here, a Landsat ETM+/OLI image is taken as an illustrative example. Because it is difficult to distinguish sediment-laden water from a water body when classifying an ETM+/OLI image, sediment-laden water and water body are classified as the same type (water body). Moreover, water body, vegetation, and bare soil are three basic categories of ground features on the earth's surface,46 which means that P0, P2, and P3 contain most of the spectral information of each pixel. Thus, P0, P2, and P3 are used for the calibration model. The steps for calibrating the PSUIs are as follows:

  • (1) Acquire the ETM+/OLI image (path/row: 122/044, date: 2001324) corresponding to the date and region of the MODIS image (date: 2001324, time: 03:10; Pearl River Delta region of China) from the USGS Global Visualization Viewer (GloVis), and classify it into water body, vegetation, and bare soil using the maximum likelihood classification (MLC) method. Many researchers have used the MLC method for Landsat image classification and obtained satisfactory performance,47–53 and the classification accuracy produced by the MLC method has been found to be comparable to that of other classification methods, such as the support vector machine.48,49,52

  • (2) Collect a series of quasiground feature samples (i.e., samples dominated by water body, vegetation, or bare soil) from the MODIS image. Here, a uniform sampling cell of 3×3 pixels (3×3 km) was used for collecting these samples in order to reduce the projection error, and the corresponding average values of P0, P2, and P3 were then calculated for each sampling cell.

  • (3) According to the latitudes and longitudes of the four corners of each sampling cell in the MODIS image, project the boundaries of the samples onto the ETM+/OLI image and the ETM+/OLI classification image (see Fig. 5). Then calculate the percentages of water body, vegetation, and bare soil pixels among the total pixels in each projected scope in the ETM+/OLI classification image; these are taken as the reference abundances used to calibrate the PSUIs. The calibration model for the PSUIs can be expressed as

    Eq. (10)

    $$\begin{aligned}
    Y_w &= a_{10} + a_{11} P_0 + a_{12} P_2 + a_{13} P_3,\\
    Y_v &= a_{20} + a_{21} P_0 + a_{22} P_2 + a_{23} P_3,\\
    Y_s &= a_{30} + a_{31} P_0 + a_{32} P_2 + a_{33} P_3,
    \end{aligned}$$
    where $Y_w$, $Y_v$, and $Y_s$ denote the abundances of water body, vegetation, and bare soil, respectively, which are obtained from the ETM+/OLI classification image; $P_0$, $P_2$, and $P_3$ represent the PSUIs; and $a_{ij}$ ($i = 1, 2, 3$; $j = 0, 1, 2, 3$) are the fitting coefficients.

  • (4) In the illustrative example, we took 189 samples from the MODIS and ETM+ classification images (Fig. 6). We substituted the abundance values of water body, vegetation, and bare soil obtained from the ETM+/OLI classification image and the PSUIs (P0, P2, and P3) obtained from the MODIS image for the samples into Eq. (10), and then obtained the fitting coefficients $a_{ij}$ using a least squares method (a fitting sketch is given after this list). The calibration model for each pixel in the MODIS image can be expressed as

    Eq. (11)

    $$\begin{aligned}
    Y_w &= 0.5377 + 1.4790\,P_0 - 0.4161\,P_2 - 1.2738\,P_3,\\
    Y_v &= 1.6038 - 2.6723\,P_0 + 1.0573\,P_2 - 3.2340\,P_3,\\
    Y_s &= -1.1416 + 1.1934\,P_0 - 0.6411\,P_2 + 4.5079\,P_3,
    \end{aligned}$$
    where $Y_w$, $Y_v$, and $Y_s$ denote the abundances of water body, vegetation, and bare soil, respectively, and $P_0$, $P_2$, and $P_3$ are the PSUIs.

    The test results for the calibration model are shown in Table 1. All of the multiple correlation coefficients are larger than 0.97, indicating that there are significant linear correlations between the PSUIs (P0, P2, and P3) that were derived from the MODIS data and the reference abundances of water body, vegetation, and bare soil that were obtained from the ETM+ classification image. The test results for the calibration model show that all the observed values for the F-test are evidently larger than the critical F-test value at the 99% confidence level [F0.01 (3,185)]. Thus, there is a marked regression relationship between the PSUIs and the abundances of water body, vegetation, and bare soil, which ensures performance accuracy. The significance test for each PSUI shows that all the significance probabilities are larger than 99.00% and that each index has a significant effect on the abundances. Thus, the calibration model is acceptable.

  • (5) Calculate the abundance of each component within each pixel in the MODIS image by substituting the PSUIs for every mixed pixel into the calibration model. If a calculated abundance fraction is less than 0, it is set to 0. The sum of the abundance fractions of different ground features within each pixel must be 1. If not, Yw, Yv, and Ys should be normalized using the following expressions:

    Eq. (12)

    $$Y = Y_w + Y_v + Y_s, \qquad Y_w = Y_w/Y, \quad Y_v = Y_v/Y, \quad Y_s = Y_s/Y.$$

  • (6) Evaluate the accuracy of the component abundances obtained by decomposing the mixed pixels in MODIS images. In this accuracy evaluation, the error is defined as the difference between the calculated component abundance and the reference component abundance. To ensure the objectivity of the accuracy evaluation, the reference abundances used here and the abundances employed to calibrate the PSUIs should be taken from ETM+/OLI classification images at different times or in different scenes.
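A minimal sketch of the calibration workflow in steps (4) and (5), assuming NumPy and using our own function names, fits Eq. (10) by ordinary least squares and then applies Eq. (12) to clip and renormalize the predicted abundances:

```python
import numpy as np

def fit_calibration(psuis: np.ndarray, abundances: np.ndarray) -> np.ndarray:
    """Fit Eq. (10) by ordinary least squares.
    psuis: (n, 3) sample-mean (P0, P2, P3) values from the MODIS image;
    abundances: (n, 3) reference fractions (Yw, Yv, Ys) from the ETM+/OLI
    classification. Returns a (3, 4) coefficient matrix a_ij, intercept first."""
    X = np.hstack([np.ones((psuis.shape[0], 1)), psuis])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(X, abundances, rcond=None)  # shape (4, 3)
    return coef.T                                          # rows: Yw, Yv, Ys

def apply_calibration(coef: np.ndarray, psuis: np.ndarray) -> np.ndarray:
    """Predict per-pixel abundances, then clip and renormalize, Eq. (12)."""
    X = np.hstack([np.ones((psuis.shape[0], 1)), psuis])
    Y = X @ coef.T
    Y = np.clip(Y, 0.0, None)                  # negative fractions set to 0
    return Y / Y.sum(axis=1, keepdims=True)    # force the fractions to sum to 1
```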

Fig. 5 Sampling cells of 3×3 pixels of ground feature mixtures in a MODIS image, ETM+ image, and ETM+ classification image produced with the MLC method (date: 2001324). Black quadrangles denote the boundaries of the sampling areas. (a) Water body, (b) water body, (c) bare soil, and (d) vegetation.

Fig. 6 (a) Distribution of sampling cells in the MODIS image. Red squares represent the locations of the 189 sampling cells. The positions of these sampling cells are projected onto (b) the ETM+ image and (c) the ETM+ classification image (black squares).

Table 1 Test results for the calibration model.

| Calculated model | MCC | F-test | F0.01 (3,185) | Significance probability of P0 (%) | Significance probability of P2 (%) | Significance probability of P3 (%) |
|---|---|---|---|---|---|---|
| Yw | 0.979 | 1412.9 | 3.9 | 99.88 | 99.88 | 99.88 |
| Yv | 0.971 | 1004.8 | 3.9 | 99.82 | 99.74 | 99.74 |
| Ys | 0.977 | 1303.4 | 3.9 | 99.87 | 99.81 | 99.56 |

Note: MCC, multiple correlation coefficient: $\mathrm{MCC} = \sqrt{\mathrm{ESS}/\mathrm{TSS}}$, where the total sum of squares is $\mathrm{TSS} = \sum_{i=1}^{n}(Y_i - \bar{Y})^2$, the explained sum of squares is $\mathrm{ESS} = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2$, $Y_i$ is the reference abundance of the $i$th sample, $\hat{Y}_i$ is the estimated abundance of the $i$th sample, $\bar{Y}$ is the mean of the reference abundances of the samples, and $n$ is the number of training samples ($n = 189$). F-test: the observed value of the F statistic at the 99% confidence level, $F = \frac{\mathrm{ESS}/p}{\mathrm{RSS}/(n-p-1)}$, where the residual sum of squares is $\mathrm{RSS} = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2$, $p$ is the number of explanatory variables ($p = 3$), and $n-p-1$ is the number of degrees of freedom. F0.01 (3,185): the critical value of the F distribution at the 99% confidence level with numerator degrees of freedom 3 and denominator degrees of freedom 185, obtained from the table of critical values of the F distribution.54
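The quantities in the note to Table 1 follow directly from their definitions; the sketch below (our own helper, assuming NumPy) returns the MCC and the observed F value for one calibrated model:

```python
import numpy as np

def mcc_and_f(y_ref: np.ndarray, y_hat: np.ndarray, p: int = 3) -> tuple:
    """Multiple correlation coefficient and F statistic as defined in the
    note to Table 1. y_ref: reference abundances of the n training samples;
    y_hat: abundances estimated by the model; p: explanatory variables."""
    n = y_ref.size
    tss = np.sum((y_ref - y_ref.mean()) ** 2)  # total sum of squares
    ess = np.sum((y_hat - y_ref.mean()) ** 2)  # explained sum of squares
    rss = np.sum((y_ref - y_hat) ** 2)         # residual sum of squares
    mcc = np.sqrt(ess / tss)
    f_value = (ess / p) / (rss / (n - p - 1))
    return float(mcc), float(f_value)
```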

The flowchart for the proposed approach to decomposing mixed pixels in MODIS images is shown in Fig. 7.

Fig. 7 Schematic description of the approach to decomposing mixed pixels in MODIS image data.

From the above, we can see that although Eq. (4) is still a linear mixture model, as is the LSMM, the method based on the third-order Bernstein basis functions differs from LSMM-based approaches in that it does not need to resort to extracting endmember spectra. For convenience, the use of PSUIs for decomposing mixed pixels in MODIS images is hereinafter called the PSUI method.

3. Experiments

3.1. Experiment Design and Datasets

A calibration model used to calculate the abundances of every mixed pixel’s components in MODIS images was built based on a set of MODIS and ETM+ images of the Pearl River Delta region of China in Sec. 2. In this section, we present a performance evaluation of the calibration model for decomposing mixed pixels in MODIS data. There are two groups of experiments (Table 2).

Table 2 Basic information about the six experiments.

| Experiment | Experiment area | Sensor | Acquisition date | Path/row | Number of samples |
|---|---|---|---|---|---|
| E1 | Pearl River Delta region of China | MODIS | 2001324 (0310) | | 300 |
| | | ETM+ | 2001324 | 122/044 | |
| E2 | Pearl River Delta region of China | MODIS | 2016039 (0300) | | 210 |
| | | OLI | 2016038 | 122/044 | |
| E3 | Kubuqi desert region of China | MODIS | 2014210 (0340) | | 230 |
| | | OLI | 2014209 | 129/032 | |
| E4 | North China Plain | MODIS | 2018108 (0300) | | 210 |
| | | OLI | 2018107 | 122/037 | |
| E5 | Texas, USA | MODIS | 2015028 (1700) | | 250 |
| | | OLI | 2015027 | 025/039 | |
| E6 | Pearl River Delta region of China | MODIS | 2001356 (0310) | | 295 |
| | | ETM+ | 2001356 | 122/044 | |

Note: E1 to E5 are the experiments for applying the calibration model; E6 is the experiment for the method comparison. Acquisition dates are in the form YYYYDDD (YYYY: year; DDD: day of year); the number in parentheses after each MODIS date is the acquisition time (HHMM).

One group (E1 to E5) applies the calibration model to MODIS data at different times or in different areas to test the robustness of the PSUI method. The first two experiments (E1 and E2), which decompose mixed pixels in MODIS images of the Pearl River Delta region at different times, test whether good unmixing is achieved for MODIS data acquired at times other than that used for building the calibration model. In the Pearl River Delta region, the water bodies are sea water (main type), rivers, lakes, or dike-ponds; the vegetation is forests (main type), croplands, or grassland; and the bare soil is urban and built-up (main type) or barren/sparse vegetation. The other three experiments (E3 to E5) are conducted in different areas with different types of water bodies, vegetation, or bare soil, which tests whether a new calibration model is needed in different areas. E3 is carried out in the Kubuqi desert region of China, where the water bodies are mainly rivers and lakes, the vegetation is mainly croplands, and the bare soil is mainly barren or sparse vegetation. E4 is carried out in the North China Plain, where the water bodies are mainly lakes and rivers, the vegetation is mainly croplands, and the bare soil is mainly urban and built-up. E5 is carried out in Texas, where the water bodies are mainly sea water and lakes, the vegetation is mainly savannas and grassland, and the bare soil is mainly urban and built-up.

The other (E6) is conducted to compare the PSUI method with conventional methods (PPI, N-FINDR, SMACC, and VCA).

Six datasets are tested in this section (see Table 2). Each dataset consists of a MODIS image (MOD021KM: level 1b calibrated, 1000×1000 m spatial resolution), derived from the LAADS DAAC, and a Landsat ETM+/OLI image (30×30 m spatial resolution), derived from the USGS GloVis, covering the same area and acquired on the same day or on two consecutive days (Table 2).

3.2. Application of the Calibration Model

In this section, we present the application of the calibration model [Eq. (11)] to MODIS images (see Table 2) in different areas or at different times to test the robustness and performance of the PSUI method (E1 to E5). The abundance maps of water body, vegetation, and bare soil for the MODIS images were then obtained (see Fig. 8).

Fig. 8 MODIS images (RGB: bands 7, 2, and 1) (first column) and their abundance maps of water body (Yw) (second column), vegetation (Yv) (third column), and bare soil (Ys) (fourth column) in five experiments (E1 to E5). (a)–(d) E1, (e)–(h) E2, (i)–(l) E3, (m)–(p) E4, and (q)–(t) E5.

Five sets of sampling grids of 3×3  pixels, which were randomly collected from the MODIS images, were taken as test samples to evaluate the accuracy of the calculated abundances in these experiments. The accuracy evaluation results of these five experiments (Table 3) demonstrate good accuracy for decomposing mixed pixels in MODIS images in different areas at different times by using the calibration model. Therefore, the calibration model can be used for MODIS data in different areas or at different times, which means that there is no need to build a calibration model for every MODIS image.

Table 3 Accuracy evaluation of the abundances calculated by using the calibration model.

| Experiment | Ground feature | ME (%) | MAE (%) | P-10% (%) | P-20% (%) | RMSE |
|---|---|---|---|---|---|---|
| E1 | Water body | 2.1 | 3.4 | 90.0 | 97.3 | 0.06 |
| | Vegetation | 0.9 | 8.8 | 62.7 | 95.7 | 0.11 |
| | Bare soil | 1.2 | 9.1 | 63.3 | 93.0 | 0.11 |
| E2 | Water body | 1.3 | 4.4 | 88.6 | 95.7 | 0.08 |
| | Vegetation | 4.0 | 7.4 | 71.4 | 91.0 | 0.11 |
| | Bare soil | 2.6 | 6.5 | 75.7 | 93.3 | 0.09 |
| E3 | Water body | 0.8 | 2.4 | 94.8 | 97.4 | 0.03 |
| | Vegetation | 2.2 | 3.4 | 85.7 | 98.3 | 0.04 |
| | Bare soil | 3.0 | 4.0 | 85.2 | 96.5 | 0.04 |
| E4 | Water body | 3.6 | 7.6 | 91.0 | 95.7 | 0.10 |
| | Vegetation | 1.7 | 6.0 | 80.5 | 95.2 | 0.09 |
| | Bare soil | 5.3 | 8.0 | 68.1 | 97.1 | 0.10 |
| E5 | Water body | 0.8 | 2.8 | 96.0 | 98.4 | 0.05 |
| | Vegetation | 1.5 | 7.1 | 70.4 | 94.8 | 0.10 |
| | Bare soil | 2.3 | 7.1 | 69.6 | 96.4 | 0.10 |

Note: ME, mean error; MAE, mean absolute error. Error = calculated abundance − reference abundance, where the reference abundance was derived from the ETM+/OLI classification image corresponding to the date and region of the MODIS image from which the test samples were collected. P-10% or P-20%: percentage of samples with an error of less than 10% or 20%, respectively, out of the total samples. RMSE: root-mean-square error.

3.3. Comparison with Conventional Methods

To examine the effectiveness of the PSUI method, we compared it with the PPI, N-FINDR, SMACC, and VCA methods using the same MODIS image, against the abundance values of the ground features derived from the ETM+/OLI classification image from the same day as the MODIS image. The PPI, N-FINDR, SMACC, and VCA methods are widely applied for endmember extraction due to their light computational burden and clear conceptual meaning.15 Detailed descriptions of these four methods can be found in the literature.15,20–25

In this experiment (E6), the MODIS image (date: 2001356, time: 03:10) and ETM+ image (path/row: 122/044, date: 2001356) used for the method comparison are taken from the same area but not at the same time as those used for calibrating the PSUIs (date: 2001324).

The 295 sampling grids of 3×3 pixels that were randomly collected from the MODIS image (date: 2001356) were taken as test samples. As shown in Table 4, the mean error (ME), mean absolute error (MAE), root-mean-square error (RMSE), and root-mean-square abundance angle distance (rmsAAD) obtained by using the PSUI method are clearly smaller than those obtained by using the PPI, N-FINDR, SMACC, and VCA methods. Furthermore, the errors derived from the PSUI method are distributed around 0% and are concentrated [see Fig. 9(a)], whereas those derived from the PPI, N-FINDR, SMACC, and VCA methods exhibit a more dispersed distribution [see Figs. 9(b)–9(e)]. The accuracy evaluation results demonstrate that the PSUI method outperforms the PPI, N-FINDR, SMACC, and VCA methods.

Table 4 Accuracy comparison of the PSUI, PPI, N-FINDR, SMACC, and VCA methods.

| Method | Ground feature | ME (%) | MAE (%) | P-10% (%) | P-20% (%) | RMSE | rmsAAD | Running time (s) |
|---|---|---|---|---|---|---|---|---|
| PSUI | Water body | 1.6 | 5.9 | 81.4 | 98.3 | 0.08 | 0.22 | 0.21 |
| | Vegetation | 2.9 | 9.1 | 64.7 | 90.2 | 0.12 | | |
| | Bare soil | 4.5 | 9.4 | 64.4 | 88.1 | 0.13 | | |
| PPI | Water body | 9.6 | 10.4 | 73.2 | 78.3 | 0.19 | 0.57 | 8.19 |
| | Vegetation | 19.7 | 22.0 | 39.7 | 50.5 | 0.29 | | |
| | Bare soil | 10.1 | 25.3 | 29.5 | 44.4 | 0.31 | | |
| N-FINDR | Water body | 15.3 | 20.0 | 12.5 | 51.5 | 0.22 | 0.38 | 5.53 |
| | Vegetation | 8.1 | 12.3 | 54.9 | 73.2 | 0.17 | | |
| | Bare soil | 7.3 | 14.8 | 40.7 | 64.7 | 0.18 | | |
| SMACC | Water body | 0.6 | 9.2 | 64.4 | 81.0 | 0.13 | 0.30 | — |
| | Vegetation | 5.5 | 13.7 | 48.1 | 69.5 | 0.18 | | |
| | Bare soil | 4.8 | 12.7 | 45.1 | 77.3 | 0.16 | | |
| VCA | Water body | 21.0 | 24.2 | 7.1 | 34.6 | 0.26 | 0.45 | 6.42 |
| | Vegetation | 13.9 | 14.7 | 45.4 | 69.2 | 0.20 | | |
| | Bare soil | 7.1 | 13.7 | 43.4 | 75.9 | 0.18 | | |

Note: ME, mean error; MAE, mean absolute error. Error = calculated abundance − reference abundance, where the reference abundance was derived from the ETM+/OLI classification image corresponding to the date and region of the MODIS image from which the test samples were collected. P-10% or P-20%: percentage of samples with an error of less than 10% or 20%, respectively, out of the total samples. RMSE: root-mean-square error. rmsAAD: root-mean-square abundance angle distance, an overall measure of the performance of each method: $\mathrm{rmsAAD} = \left[\frac{1}{N}\sum_{i=1}^{N} \mathrm{AAD}_{a_i}^2\right]^{1/2}$, where $\mathrm{AAD}_{a_i} = \cos^{-1}\!\left(\frac{a_i^{\mathrm{T}} \hat{a}_i}{\|a_i\|\,\|\hat{a}_i\|}\right)$ measures the similarity between the reference abundances ($a_i$) and the calculated abundances ($\hat{a}_i$) of the sampling grids, and $N$ is the number of sampling grids ($N = 295$). The running time of the SMACC method could not be obtained (see the text).
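For completeness, the rmsAAD defined in the note to Table 4 can be computed as follows (a sketch with our own function name, assuming NumPy):

```python
import numpy as np

def rms_aad(ref: np.ndarray, est: np.ndarray) -> float:
    """Root-mean-square abundance angle distance over N sampling grids,
    following the formula in the note to Table 4. `ref` and `est` are
    (N, k) arrays of reference and calculated abundance vectors."""
    cos_sim = np.sum(ref * est, axis=1) / (
        np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1))
    aad = np.arccos(np.clip(cos_sim, -1.0, 1.0))  # clip guards rounding error
    return float(np.sqrt(np.mean(aad ** 2)))
```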

Fig. 9 Error distributions of water body, vegetation, and bare soil abundances obtained from (a) the PSUI method, (b) the PPI method, (c) the N-FINDR method, (d) the SMACC method, and (e) the VCA method.

In the comparison experiment, the PSUI method and the four conventional unmixing methods were run on an Intel Core i7-8550U CPU at 1.80 GHz with 8.0 GB of RAM. The PPI, N-FINDR, and VCA methods were run in MATLAB R2017b, with running times of 8.19, 5.53, and 6.42 s, respectively. For the PSUI method, building and applying the calibration model took 0.49 and 0.21 s, respectively. The SMACC method was run in the Environment for Visualizing Images software (ENVI 5.1), so its running time could not be obtained. The running times in Table 4 show that the PSUI method took less time than the PPI, N-FINDR, and VCA methods.

4. Discussion

The existing methods of decomposing mixed pixels, based on either LSMM or NLSMM, are mainly based on pixel spectral information that is characterized by a single spectral curve composed of discrete data points and require extracting endmember spectra.1,15,16,18,20 The procedures adopted by the methods, such as the PPI21 and the SMACC,23 have been quite successful when pure pixels are present in the original image data. However, it is very difficult to find pure pixels containing only one ground object in MODIS images with low spatial resolution. Many authors have argued that there are no pure pixels in remote sensing images with low spatial resolution.17,27 Miao and Qi31 and Plaza et al.17 suggested that a trend in the hyperspectral imaging community was to design endmember identification algorithms that do not assume the presence of pure pixels to ensure the endmember accuracy and unmixing accuracy.

The PSUI method proposed herein provides a solution that is different from previous work on the effective decomposition of mixed pixels. This method does not need to resort to extracting endmember spectra from MODIS data. It was tested on five sets of MODIS and ETM+/OLI images, and satisfactory unmixing results were obtained (see Fig. 8 and Table 3). The calibration model can be applied to MODIS data in different areas or at different times with high accuracy. The PSUI method was also compared with other methods (the PPI, N-FINDR, SMACC, and VCA) using the same MODIS data, and the experimental results (Table 4) showed that the accuracy of the PSUI method was clearly higher than that of the PPI, N-FINDR, SMACC, and VCA methods.

In the PSUI method, the PSUIs quantify the relative proportions of spectrally distinct signals from several ground features in each mixed pixel of MODIS data; thus, the indexes need to be calibrated with the abundance values of the ground features from a high spatial resolution remote sensing image such as a Landsat ETM+ image. One might argue that since the PSUIs need to be calibrated with the ETM+/OLI classification images, it would be more convenient to use the results from the ETM+ images directly. However, the low temporal resolution of the 16-day revisit cycle of Landsat ETM+ has long limited its use in many fields, such as studying global biophysical processes, understanding changes in the terrestrial carbon cycle, or mapping the quality and abundance of wildlife habitats.55,56 MODIS visits the globe once or twice per day with a coarse resolution of 250 to 1000 m. In addition, the calibration model is applicable to MODIS data in different areas or at different times, which means that there is no need to build a calibration model for every MODIS image. One of the advantages of the PSUI method is that it combines MODIS data of high temporal resolution with Landsat ETM+ data of high spatial resolution, which may be the reason the new method is superior to the PPI, N-FINDR, SMACC, and VCA methods in terms of decomposition accuracy for mixed pixels in MODIS images.

A MODIS image has 15 reflective spectral channels that are valid on land, distributed over a wavelength range of 405 to 2155 nm. These channels capture the key spectral characteristics of ground features, such as the locations and intensities of absorption and reflection bands, which are clearly visible in a spectral curve. Three very different ground features (i.e., water body, vegetation, and bare soil), whose spectral curves are easily distinguishable by their peak locations, are involved in the unmixing process; thus, a good unmixing process can be performed with the PSUI method. However, the PPI, N-FINDR, SMACC, and VCA methods were originally proposed for hyperspectral data21–24 and thus would not be expected to perform as well for multispectral data with limited spectral resolution as they do for hyperspectral data. Furthermore, the endmembers in these conventional methods are specific components, i.e., specific types of mineral or vegetation.21–24 There may be several specific types of vegetation and bare soil in a MODIS image, but in the method comparison experiment, mixed pixels in MODIS data were decomposed into the three general categories of water body, vegetation, and bare soil. Thus, the performance of the conventional methods may be affected.

For the PSUI method, the training samples used to establish the calibration model were derived from a MODIS image and an ETM+ classification image from the same day and the same area, which were under almost the same atmospheric conditions. Furthermore, the unmixing accuracies of MODIS images without atmospheric correction were good, whether or not the MODIS images were the same as that used for the calibration model (see Table 3). Thus, atmospheric correction was not necessary for the PSUI method, which can save time and reduce the workload of time series analysis with MODIS imagery. To examine the effectiveness of the PSUI method, it was compared with the PPI, N-FINDR, SMACC, and VCA methods using the same MODIS image without atmospheric correction. The conventional methods did not perform as well in this comparison experiment because they all require atmospheric correction.21–24

The PSUI method, which is based on third-order Bernstein basis functions and does not resort to extracting endmember spectra, has been shown to be effective in decomposing mixed pixels in MODIS data. However, it should be noted that this study was the first attempt to decompose mixed pixels by characterizing the spectral curves of the mixed pixels in MODIS data with a set of Bernstein basis functions. There are still some limitations for the PSUI method. First, the PSUI method is now only suitable for decomposition into three general components (water body, vegetation, and bare soil) in images acquired by a coarse resolution multispectral sensor (e.g., MODIS). It would not be able to decompose mixed pixels into specific vegetation or soil types. Future studies should be carried out to apply the PSUI method to much more complicated ground feature situations. There are two situations: (1) if some of the ground features have very similar spectral signatures, spatiotemporal information as well as spectral information from MODIS data should be utilized comprehensively; or (2) if the high reflectance of each ground feature appears at different wavelengths, Bernstein basis functions of a higher order should be utilized. Second, the calibration model, without atmospheric correction, might work only at low aerosol optical depth (AOD), as the shape of the reflectance spectra at the top of the atmosphere would be highly dependent on the AOD. The impact of absorption and scattering of atmospheric aerosol on reflectance data varies with wavelength, which would change the shape of the spectral reflectance curves and should be corrected by an atmospheric correction algorithm (e.g., the fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) algorithm57). A new calibration model should be built and applied based on MODIS data with atmospheric correction if AOD is high.

5. Conclusions

In this paper, the PSUI method, which provides a solution that is different from previous work on the decomposition of mixed pixels, was proposed. This method does not need to resort to extracting endmember spectra from MODIS data, which provides a new way of decomposing mixed pixels while ensuring unmixing accuracy. In the PSUI method, the spectral integral area enclosed by the spectral reflectance curves of ground features and the x-axis (in Cartesian coordinates) and a set of third-order Bernstein basis functions are applied to characterize the spectral curves of mixed pixels in a MODIS image, and the derived PSUIs (i.e., the coefficients of the basis functions) are used to represent the spectral characteristics of the mixed pixels. The PSUIs are then calibrated with the abundance values of the ground features from a high spatial resolution remote sensing image such as a Landsat ETM+ image, which creates a calibration model for calculating the abundances of the components of every mixed pixel in MODIS images. The calibration model is applicable to MODIS images in different areas or at different times, as shown by the experimental results using five sets of MODIS and Landsat ETM+/OLI images. The PSUI method was compared with four conventional methods, i.e., the PPI, N-FINDR, SMACC, and VCA, and the comparison results show that the PSUI method outperforms the other four methods for decomposing mixed pixels in MODIS data. Although the PSUI method performs well for decomposing mixed pixels in MODIS images with low AOD into the three general categories of water body, vegetation, and bare soil, further study is needed to apply the PSUI method to MODIS images with much more complicated ground feature situations or high AOD.

Acknowledgments

This work was supported by the Chinese National Science and Technology Major Project (Grant No. 2017ZX05008-001) and the National Natural Science Foundation of China (Grant No. 41872214) and was partially funded by the Special Fund from the Zhejiang Provincial Government, China (zjcx. 2011 No. 98). The authors would like to express their gratitude to NASA and the USGS for providing the remote sensing imagery (MODIS and ETM+/OLI images). We are grateful to both the editor and the anonymous reviewers for their constructive comments, which have improved this paper. Disclosures: The authors declare no conflict of interest.

References

1. 

A. Plaza et al., “A quantitative and comparative analysis of endmember extraction algorithms from hyperspectral data,” IEEE Trans. Geosci. Remote Sens., 42 (3), 650 –663 (2004). https://doi.org/10.1109/TGRS.2003.820314 IGRSD2 0196-2892 Google Scholar

2. 

H. Pu et al., “A fully constrained linear spectral unmixing algorithm based on distance geometry,” IEEE Trans. Geosci. Remote Sens., 52 (2), 1157 –1176 (2014). https://doi.org/10.1109/TGRS.2013.2248013 IGRSD2 0196-2892 Google Scholar

3. 

B. H. Braswell et al., “A multivariable approach for mapping sub-pixel land cover distributions using MISR and MODIS: application in the Brazilian Amazon region,” Remote Sens. Environ., 87 (2–3), 243 –256 (2003). https://doi.org/10.1016/j.rse.2003.06.002 Google Scholar

4. 

J. P. Guerschman et al., “Estimating fractional cover of photosynthetic vegetation, non-photosynthetic vegetation and bare soil in the Australian tropical savanna region upscaling the EO-1 Hyperion and MODIS sensors,” Remote Sens. Environ., 113 (5), 928 –945 (2009). https://doi.org/10.1016/j.rse.2009.01.006 Google Scholar

5. 

L. Qu et al., “Estimating vegetation fraction using hyperspectral pixel unmixing method: a case study of a karst area in China,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 7 (11), 4559 –4565 (2014). https://doi.org/10.1109/JSTARS.2014.2361253 Google Scholar

6. 

E. F. Lawley, M. M. Lewis and B. Ostendorf, “Evaluating MODIS soil fractional cover for arid regions, using albedo from high-spatial resolution satellite imagery,” Int. J. Remote Sens., 35 (6), 2028 –2046 (2014). https://doi.org/10.1080/01431161.2014.885150 IJSEDK 0143-1161 Google Scholar

7. 

X. Zhang et al., “Estimating ecological indicators of karst rocky desertification by linear spectral unmixing method,” Int. J. Appl. Earth Obs. Geoinf., 31 86 –94 (2014). https://doi.org/10.1016/j.jag.2014.03.009 Google Scholar

8. 

C. Dey et al., “Mixed pixel analysis for flood mapping using extended support vector machine,” in Digital Image Comput.: Tech. and Appl., 291 –295 (2009). https://doi.org/10.1109/DICTA.2009.55 Google Scholar

9. 

T. Bangira et al., “A spectral unmixing method with ensemble estimation of endmembers: application to flood mapping in the Caprivi floodplain,” Remote Sens., 9 (10), 1013 –1036 (2017). https://doi.org/10.3390/rs9101013 Google Scholar

10. 

T. C. Eckmann, D. A. Roberts and C. J. Still, “Using multiple endmember spectral mixture analysis to retrieve subpixel fire properties from MODIS,” Remote Sens. Environ., 112 (10), 3773 –3783 (2008). https://doi.org/10.1016/j.rse.2008.05.008 Google Scholar

11. 

N. Dobigeon et al., “Nonlinear unmixing of hyperspectral images: models and algorithms,” IEEE Signal Process. Mag., 31 (1), 82 –94 (2014). https://doi.org/10.1109/MSP.2013.2279274 ISPRE6 1053-5888 Google Scholar

12. 

S. Jia and Y. Qian, “Spectral and spatial complexity-based hyperspectral unmixing,” IEEE Trans. Geosci. Remote Sens., 45 (12), 3867 –3879 (2007). https://doi.org/10.1109/TGRS.2007.898443 IGRSD2 0196-2892 Google Scholar

13. 

R. Heylen, M. Parente and P. Gader, “A review of nonlinear hyperspectral unmixing methods,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 7 (6), 1844 –1868 (2014). https://doi.org/10.1109/JSTARS.2014.2320576 Google Scholar

14. 

N. Keshava and J. F. Mustard, “Spectral unmixing,” IEEE Signal Process. Mag., 19 (1), 44 –57 (2002). https://doi.org/10.1109/79.974727 ISPRE6 1053-5888 Google Scholar

15. 

J. M. Bioucas-Dias et al., “Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 5 (2), 354 –379 (2012). https://doi.org/10.1109/JSTARS.2012.2194696 Google Scholar

16. 

B. Somers et al., “Endmember variability in spectral mixture analysis: a review,” Remote Sens. Environ., 115 (7), 1603 –1616 (2011). https://doi.org/10.1016/j.rse.2011.03.003 Google Scholar

17. 

J. Plaza et al., “On endmember identification in hyperspectral images without pure pixels: a comparison of algorithms,” J. Math. Imaging Vis., 42 (2–3), 163 –175 (2012). https://doi.org/10.1007/s10851-011-0276-0 Google Scholar

18. 

C. Quintano et al., “Spectral unmixing,” Int. J. Remote Sens., 33 (17), 5307 –5340 (2012). https://doi.org/10.1080/01431161.2012.661095 IJSEDK 0143-1161 Google Scholar

19. 

D. C. Heinz and C. I. Chang, “Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery,” IEEE Trans. Geosci. Remote Sens., 39 (3), 529 –545 (2001). https://doi.org/10.1109/36.911111 IGRSD2 0196-2892 Google Scholar

20. 

J. Franke et al., “Hierarchical multiple endmember spectral mixture analysis (MESMA) of hyperspectral imagery for urban environments,” Remote Sens. Environ., 113 (8), 1712 –1723 (2009). https://doi.org/10.1016/j.rse.2009.03.018 Google Scholar

21. 

J. W. Boardman, F. A. Kruse and R. O. Green, “Mapping target signatures via partial unmixing of AVIRIS data,” in Fifth Annu. JPL Airborne Earth Sci. Workshop, 23 –26 (1995). Google Scholar

22. 

M. E. Winter, “N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data,” 266 –275 (1999). https://doi.org/10.1117/12.366289 Google Scholar

23. 

J. H. Gruninger, A. J. Ratkowski and M. L. Hoke, “The sequential maximum angle convex cone (SMACC) endmember model,” Proc. SPIE, 5425 1 –14 (2004). https://doi.org/10.1117/12.543794 PSISDG 0277-786X Google Scholar

24. 

J. M. P. Nascimento and J. M. B. Dias, “Vertex component analysis: a fast algorithm to unmix hyperspectral data,” IEEE Trans. Geosci. Remote Sens., 43 (4), 898 –910 (2005). https://doi.org/10.1109/TGRS.2005.844293 IGRSD2 0196-2892 Google Scholar

25. 

Research Systems Inc. (RSI), “Spectral tools,” ENVI User’s Guide, 743 –852 2004). Google Scholar

26. J. D. Bayliss, J. A. Gualtieri and R. F. Cromp, "Analyzing hyperspectral data with independent component analysis," Proc. SPIE, 3240, 133–143 (1998). https://doi.org/10.1117/12.300050

27. V. F. Haertel and Y. E. Shimabukuro, "Spectral linear mixing model in low spatial resolution image data," IEEE Trans. Geosci. Remote Sens., 43(11), 2555–2562 (2005). https://doi.org/10.1109/TGRS.2005.848692

28. J. Wang and C. I. Chang, "Applications of independent component analysis in endmember extraction and abundance quantification for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., 44(9), 2601–2616 (2006). https://doi.org/10.1109/TGRS.2006.874135

29. P. Sajda, S. Du and L. C. Parra, "Recovery of constituent spectra using non-negative matrix factorization," Proc. SPIE, 5207, 321–331 (2003). https://doi.org/10.1117/12.504676

30. V. P. Pauca, J. Piper and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra Appl., 416(1), 29–47 (2006). https://doi.org/10.1016/j.laa.2005.06.025

31. L. Miao and H. Qi, "Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization," IEEE Trans. Geosci. Remote Sens., 45(3), 765–777 (2007). https://doi.org/10.1109/TGRS.2006.888466

32. A. Bateson and B. Curtiss, "A method for manual endmember selection and spectral unmixing," Remote Sens. Environ., 55(3), 229–243 (1996). https://doi.org/10.1016/S0034-4257(95)00177-8

33. D. Gómez-Palacios, M. A. Torres and E. Reinoso, "Flood mapping through principal component analysis of multitemporal satellite imagery considering the alteration of water spectral properties due to turbidity conditions," Geomat. Nat. Hazards Risk, 8(2), 607–623 (2017). https://doi.org/10.1080/19475705.2016.1250115

34. C. A. Bateson, G. P. Asner and C. A. Wessman, "Endmember bundles: a new approach to incorporating endmember variability into spectral mixture analysis," IEEE Trans. Geosci. Remote Sens., 38(2), 1083–1094 (2000). https://doi.org/10.1109/36.841987

35. C. I. Chang et al., "A new growing method for simplex-based endmember extraction algorithm," IEEE Trans. Geosci. Remote Sens., 44(10), 2804–2819 (2006). https://doi.org/10.1109/TGRS.2006.881803

36. X. Geng et al., "A Gaussian elimination based fast endmember extraction algorithm for hyperspectral imagery," ISPRS J. Photogramm. Remote Sens., 79, 211–218 (2013). https://doi.org/10.1016/j.isprsjprs.2013.02.020

37. B. Luo and J. Chanussot, "Unsupervised classification of hyperspectral images by using linear unmixing algorithm," in 16th IEEE Int. Conf. Image Process., 2877–2880 (2009). https://doi.org/10.1109/ICIP.2009.5413491

38. D. G. T. Denison, B. K. Mallick and A. F. M. Smith, "Automatic Bayesian curve fitting," J. R. Stat. Soc. B, 60(2), 333–350 (1998). https://doi.org/10.1111/1467-9868.00128

39. C. H. Garcia-Capulin et al., "A hierarchical genetic algorithm approach for curve fitting with B-splines," Genet. Program. Evol. Mach., 16(2), 151–166 (2015). https://doi.org/10.1007/s10710-014-9231-3

40. P. Sablonnière, "Spline and Bézier polygons associated with a polynomial spline curve," Comput. Aided Design, 10, 257–261 (1978). https://doi.org/10.1016/0010-4485(78)90061-1

41. F. Guo et al., "Cloud detection method based on spectrum area ratios in MODIS data," Can. J. Remote Sens., 41(6), 561–576 (2015). https://doi.org/10.1080/07038992.2015.1112729

42. S. K. McFeeters, "The use of the normalized difference water index (NDWI) in the delineation of open water features," Int. J. Remote Sens., 17(7), 1425–1432 (1996). https://doi.org/10.1080/01431169608948714

43. K. Rokni et al., "Water feature extraction and change detection using multitemporal Landsat imagery," Remote Sens., 6(5), 4173–4189 (2014). https://doi.org/10.3390/rs6054173

44. A. K. Bhandari, A. Kumar and G. K. Singh, "Improved feature extraction scheme for satellite images using NDVI and NDWI technique based on DWT and SVD," Arab. J. Geosci., 8(9), 6949–6966 (2015). https://doi.org/10.1007/s12517-014-1714-2

45. Y. Deng et al., "RNDSI: a ratio normalized difference soil index for remote sensing of urban/suburban environments," Int. J. Appl. Earth Obs. Geoinf., 39, 40–48 (2015). https://doi.org/10.1016/j.jag.2015.02.010

46. S. Aggarwal, "Principles of remote sensing," in Training Workshop Satell. Remote Sens. and GIS Appl. in Agric. Meteorol. (2003).

47. M. C. Henry, "Comparison of single- and multi-date Landsat data for mapping wildfire scars in Ocala National Forest, Florida," Photogramm. Eng. Rem. Sens., 74(7), 881–891 (2008). https://doi.org/10.14358/PERS.74.7.881

48. J. R. Otukei and T. Blaschke, "Land cover change assessment using decision trees, support vector machines and maximum likelihood classification algorithms," Int. J. Appl. Earth Obs. Geoinf., 12, S27–S31 (2010). https://doi.org/10.1016/j.jag.2009.11.002

49. D. Lu et al., "A comparative analysis of approaches for successional vegetation classification in the Brazilian Amazon," GISci. Remote Sens., 51(6), 695–709 (2014). https://doi.org/10.1080/15481603.2014.983338

50. P. S. Sisodia, V. Tiwari and A. Kumar, "A comparative analysis of remote sensing image classification techniques," in Int. Conf. Adv. in Comput. Commun. and Inf. (ICACCI), 1418–1421 (2014).

51. P. S. Sisodia, V. Tiwari and A. Kumar, "Analysis of supervised maximum likelihood classification for remote sensing image," in Int. Conf. Recent Adv. and Innov. Eng. (ICRAIE-2014), 1–4 (2014).

52. S. M. M. Rasel et al., "Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment," Proc. SPIE, 9864, 986409 (2016). https://doi.org/10.1117/12.2222960

53. H. Kim, C. J. Cieszewski and R. C. Lowe, "Estimation of premature forests in Georgia (USA) using U.S. Forest Service FIA data and Landsat imagery," J. For. Res., 28(6), 1249–1260 (2017). https://doi.org/10.1007/s11676-017-0389-4

54. C. Dougherty, Statistical Tables, 2nd ed., Oxford University Press, Oxford (2002).

55. F. Gao et al., "On the blending of the Landsat and MODIS surface reflectance: predicting daily Landsat surface reflectance," IEEE Trans. Geosci. Remote Sens., 44(8), 2207–2218 (2006). https://doi.org/10.1109/TGRS.2006.872081

56. T. Hilker et al., "A new data fusion model for high spatial- and temporal-resolution mapping of forest disturbance based on Landsat and MODIS," Remote Sens. Environ., 113(8), 1613–1627 (2009). https://doi.org/10.1016/j.rse.2009.03.007

57. T. Cooley et al., "FLAASH, a MODTRAN4-based atmospheric correction algorithm, its application and validation," in IEEE Int. Geosci. and Remote Sens. Symp., 1414–1418 (2002). https://doi.org/10.1109/IGARSS.2002.1026134

Biography

Yi Qin received her BS degree in geographical information systems from Zhejiang University, Hangzhou, China, in 2013, and she is currently pursuing her PhD in geology at the same university. Her research has been concerned with multispectral/hyperspectral remote sensing and applications of remote sensing (e.g., groundwater potential mapping).

Feng Guo received his PhD in geology from Zhejiang University, Hangzhou, China, in 2015. He is currently an engineer with the Geomatics Center of Zhejiang, Hangzhou, China. His research has been concerned with hyperspectral remote sensing.

Lejun Zou received his BS, MSc, and PhD degrees in geology from Zhejiang University, Hangzhou, China, in 1986, 1989, and 2010, respectively. He is currently a professor with the School of Earth Sciences, Zhejiang University, Hangzhou, China. His research has been concerned with remote sensing and GIS, finite element numerical simulation, and 3-D geological modeling and visualization.

Xiaohua Shen received his BS and PhD degrees in geology from Zhejiang University, Hangzhou, China, in 1982 and 2010, respectively. He is currently a professor with the School of Earth Sciences, Zhejiang University, Hangzhou, China. His research has been concerned with applications of remote sensing in environment and geology, structural geometric analysis, and fractals in geoscience.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yi Qin, Feng Guo, Yupeng Ren, Xin Wang, Juan Gu, Jingyu Ma, Lejun Zou, and Xiaohua Shen, "Decomposition of mixed pixels in MODIS data using Bernstein basis functions," Journal of Applied Remote Sensing 13(4), 046509 (18 November 2019). https://doi.org/10.1117/1.JRS.13.046509
Received: 15 April 2019; Accepted: 25 October 2019; Published: 18 November 2019
KEYWORDS: MODIS, Calibration, Vegetation, Data modeling, Reflectivity, Image classification, Spatial resolution