Research Papers

Algorithms for Moderate Resolution Imaging Spectroradiometer cloud-free image compositing

[+] Author Affiliations
Hai-bing Xiang

Chinese Academy of Sciences, Institute of Remote Sensing and Digital Earth, State Key Laboratory of Remote Sensing Science, P.O. Box 9718, Beijing 100101, China

Hebei Key Laboratory of Mathematic Calculation and Application, Shijiazhuang 050024, China

University of Chinese Academy of Sciences, College of Resources and Environment, P.O. Box 9718, Beijing 100049, China

Jin-song Liu

Hebei Key Laboratory of Mathematic Calculation and Application, Shijiazhuang 050024, China

Hebei Normal University, College of Mathematics and Information, Shijiazhuang 050024, China

Mobile Internet of Things Institute of Hebei Normal University, Shijiazhuang 050024, China

Chun-xiang Cao

Chinese Academy of Sciences, Institute of Remote Sensing and Digital Earth, State Key Laboratory of Remote Sensing Science, P.O. Box 9718, Beijing 100101, China

University of Chinese Academy of Sciences, College of Resources and Environment, P.O. Box 9718, Beijing 100049, China

Min Xu

Chinese Academy of Sciences, Institute of Remote Sensing and Digital Earth, State Key Laboratory of Remote Sensing Science, P.O. Box 9718, Beijing 100101, China

University of Chinese Academy of Sciences, College of Resources and Environment, P.O. Box 9718, Beijing 100049, China

J. Appl. Remote Sens. 7(1), 073486 (Nov 05, 2013). doi:10.1117/1.JRS.7.073486
History: Received March 14, 2013; Revised August 7, 2013; Accepted September 26, 2013

Open Access

Abstract.  The presence of clouds is the biggest obstacle in the investigation of land cover, and many techniques have been developed to detect clouds. However, few indicators have been proposed for the detection of cloud-free conditions. To address this, we propose two indicators for use in compositing 8-day cloud-free images: the B1/7 ratio is the ratio of the band 1 reflectance to the band 7 reflectance of Terra surface reflectance images (MOD09GA), and saturation refers to the color saturation of these images. Here, we describe the principles underlying these two indicators and analyze their characteristics for vegetation, water, urban, and nonvegetation pixels under cloud-free, cloud shadow, and cloudy conditions using MOD09GA from October 16 to 23, 2007, in North China (sample A) and using data published by the U.S. Geological Survey (USGS). We found that the B1/7 ratio and saturation are suitable for extracting cloud-free pixels over land and water, respectively; therefore, we combined these two indicators to develop a single-unified model. In particular, our results demonstrate that the pixels exhibiting the lowest B1/7 ratios should be adopted as cloud-free pixels over land when the value of B1/7 for land surfaces is between 0 and 1, and the surface reflectance of Moderate Resolution Imaging Spectroradiometer band 1 is less than 0.3. Otherwise, the pixels exhibiting the greatest saturation values should be adopted. We used our model to composite cloud-free images for two additional regions in China: the Tarim basin (sample B) from October 2 to 9, 2012, and the coastal areas of southeastern China (sample C) from April 15 to 23, 2013. We compared the cloud-free images of these regions with the 8-day surface reflectance product (MOD09A1) with respect to clouds, cloud shadow, and cirrus clouds, and we found that our proposed cloud-free image compositing approach can accurately eliminate both clouds and cirrus clouds. 
Specifically, the percentage of residual cloud pixels in sample C was found to be less than that in MOD09A1. Moreover, in the cloud-free images obtained using our newly developed method, cloud-free pixels are typically associated with greater sensor zenith angles and smaller scatter angles than those in MOD09A1. However, our method retains some limitations. In particular, 9.68, 33.22, and 33.00% of cloud-shadow pixels remain in the cloud-free images for samples A, B, and C, respectively.


Clouds play an important role in determining weather, climate, and other short-term environmental changes, and more than 50% of the Earth’s surface is covered by clouds on any given day.1,2 Moreover, clouds hamper observation of the land surface by the Moderate Resolution Imaging Spectroradiometer (MODIS); therefore, accurate cloud detection is considered a crucial component in eliminating the impact of clouds on remote-sensing products.3,4

Three types of information can be used to detect clouds: spectral, texture, and multitemporal information. Spectral information was first utilized to distinguish clouds from the Earth surface using classification methods,5–8 in which thresholds in the value of one band or the difference between two bands were considered.3,4,9 Subsequently, texture information was used to distinguish clouds from the surface based on thresholds in heteroscedasticity. Such identification is possible because the reflectance of cloud is greater than that of the land surface, which results in greater variance along cloud edges, particularly over the oceans.10–12 With increases in the availability of remote-sensing data, researchers have started to use multitemporal information to distinguish clouds from the Earth surface based on the differences between appropriate benchmarks and series of daily images; such methods are based on the assumption that the reflectance of the Earth surface is more stable than that of cloud-affected pixels, which exhibit frequent variations.13–16 Many satellite-based instruments, ranging from the Advanced Very High Resolution Radiometer (AVHRR) to MODIS, employ these methods. For instance, the International Satellite Cloud Climatology Project (ISCCP)17,18 and the AVHRR Processing Scheme Over Cloud Land and Ocean (APOLLO) cloud detection algorithm19,20 employ only spectral information, and the Clouds from AVHRR (CLAVR) algorithm employs a series of spectral and spatial variability tests to detect clouds.1,21–23 Some researchers have also considered the use of temporal information for cloud detection.4,9,24–26 However, the computational power required for such methods increases with the complexity of the cloud detection algorithms adopted. Ultimately, obtaining composite cloud-free images based on the results of cloud detection has become a time-consuming process.
This is evidenced particularly by the case of MOD09A1 published by the United States Geological Survey (USGS), in which only the pixels with the highest quality are considered; in this instance, cloud-cover pixels, cloud-shadow pixels, high-angle pixels, and high-aerosol pixels have been removed.27

Obtaining data relating to land surface characteristics is one of the primary challenges in land cover research. To address this, some algorithms have been proposed in which cloud-free pixels are selected based on quantitative physical indexes, such as the maximum normalized difference vegetation index (NDVI)28 and the lowest MODIS band 3 surface reflectance obtained during a specific time period.29 The maximum NDVI has been used previously as an indicator in gridded vegetation indices (MOD13) for vegetation monitoring. However, the use of this index typically leads to the selection of the pixel with the greatest sensor angle. Conversely, the minimum reflectance value for MODIS band 3, which represents the blue band, tends to consider shadow pixels as cloud-free pixels. Ma et al. used min[tan⁻¹(B1/(B2 − B1))] (where B1 and B2 are the band 1 and 2 reflectances of MODIS, respectively) to select cloud-free pixels.4 However, this model was shown to be ineffective in distinguishing between clouds, nonvegetated land, and shadows on vegetated land. The Canada Centre for Remote Sensing (CCRS) developed a new compositing technology that employs six criteria to obtain pixels of six different classes, in addition to mixed pixels. The CCRS researchers also compared max(NDVI), min(B1), and max[max(B2, B6)/B3] with their own multicriteria scheme and reported some details regarding the effects of the bidirectional reflectance distribution function (BRDF);9 however, many criteria and classification steps are required for the CCRS compositing technology, and this method is associated with considerable uncertainty.

The present study aims to obtain highly accurate composite cloud-free images by employing the fewest criteria possible. According to Kaufman et al.30 and Levy et al.,31 the reflectance ratio B1/7 between the reflectances at 0.66 and 2.1 μm (corresponding to bands 1 and 7 of MODIS, respectively) is a good indicator of cloud cover for land areas. Moreover, as clouds are typically white or gray in color, their saturation is lower than that of water pixels in a hue–saturation–intensity (HSI) model, making saturation a good indicator of cloud-free regions for water areas. Therefore, in the present study, we propose the B1/7 ratio and saturation as appropriate indicators for the selection of cloud-free pixels on land and water, respectively, and present a set of algorithms for compositing cloud-free images for China.

Underlying Principles of the B1/7 Ratio

Previous studies have shown that 2.1-μm-wavelength images are not affected considerably by atmospheric scattering;30 thus, this wavelength can pass through all types of aerosol except dust. Moreover, for vegetated areas, the apparent reflectance at 2.1 μm is twice that at 0.66 μm when a given region is cloud free. Accordingly, the apparent reflectance at 2.1 μm is often used to estimate the surface reflectance at 0.66 μm.30 These wavelengths (i.e., 0.66 and 2.1 μm) are the central wavelengths of MODIS bands 1 and 7, respectively, and are typically expressed as B1 and B7. The reflectance ratio of these bands, B1/7, for a given land surface has been shown to be positively correlated with the scattering angle and with the vegetation index derived from the MODIS near- and short-wave infrared bands:31

B1/7 = B1/7^NDVI_SWIR + 0.002 Θ − 0.27,  (1)

where Θ is the scattering angle, defined as in Eq. (2); B1/7^NDVI_SWIR is a piecewise function of the shortwave-infrared vegetation index [Eq. (3)]; and NDVI_SWIR is the vegetation index derived from the MODIS near- and short-wave infrared bands [see Eq. (4)].
Θ = cos⁻¹[cos θ0 cos θ + sin θ0 sin θ cos(φ − φ0)].  (2)

In Eq. (2), θ0, θ, φ0, and φ are the solar zenith angle, the sensor zenith angle, the azimuth of the sun, and the azimuth of the sensor, respectively.

B1/7^NDVI_SWIR = 0.48,                           for NDVI_SWIR < 0.25;
B1/7^NDVI_SWIR = 0.58,                           for NDVI_SWIR > 0.75;
B1/7^NDVI_SWIR = 0.48 + 0.2 (NDVI_SWIR − 0.25),  for 0.25 ≤ NDVI_SWIR ≤ 0.75.  (3)

NDVI_SWIR = (ρ5 − ρ7)/(ρ5 + ρ7).  (4)

In Eq. (4), ρ5 and ρ7 are the apparent reflectances of MODIS bands 5 and 7, respectively.

It is clear from Eq. (3) that B1/7^NDVI_SWIR ranges from 0.48 to 0.58 and is correlated positively with NDVI_SWIR. Moreover, according to a previously described geometric optics model,32 NDVI_SWIR is proportional to the sensor zenith angle (i.e., NDVI_SWIR increases with increasing sensor zenith angle for a given vegetation pixel), and B1/7 increases with NDVI_SWIR.
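Equations (1) through (4) can be collected into a short numerical sketch. The function names are illustrative (not from the paper), angles are assumed to be given in degrees, and ρ5 and ρ7 denote the band 5 and 7 apparent reflectances:

```python
import math

def scattering_angle(theta0, theta, phi0, phi):
    """Scattering angle of Eq. (2); all angles in degrees."""
    t0, t, dphi = math.radians(theta0), math.radians(theta), math.radians(phi - phi0)
    cos_big = math.cos(t0) * math.cos(t) + math.sin(t0) * math.sin(t) * math.cos(dphi)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_big))))

def ndvi_swir(rho5, rho7):
    """Shortwave-infrared vegetation index of Eq. (4)."""
    return (rho5 - rho7) / (rho5 + rho7)

def b17_base(n):
    """Piecewise term B1/7^NDVI_SWIR of Eq. (3)."""
    if n < 0.25:
        return 0.48
    if n > 0.75:
        return 0.58
    return 0.48 + 0.2 * (n - 0.25)

def predicted_b17(rho5, rho7, theta0, theta, phi0, phi):
    """Predicted land-surface B1/7 ratio of Eq. (1)."""
    base = b17_base(ndvi_swir(rho5, rho7))
    return base + 0.002 * scattering_angle(theta0, theta, phi0, phi) - 0.27
```

For example, with ρ5 = 0.3 and ρ7 = 0.1 (NDVI_SWIR = 0.5) and a scattering angle of 30 deg, the predicted ratio is 0.53 + 0.06 − 0.27 = 0.32.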

As the Terra satellite is sun synchronous, θ0 and φ0 can be considered constants, such that θ and φ are the primary factors determining Θ. Observations collected by the Terra satellite can be separated into two groups based on (φ − φ0): the backward hemisphere, with (φ − φ0) between 0 and 90 deg or between 270 and 360 deg, and the forward hemisphere, with (φ − φ0) between 90 and 270 deg. Values of both Θ and B1/7 are typically smaller for the forward hemisphere observations than for the backward hemisphere; therefore, the pixels of the forward hemisphere observations are typically selected first. To sum up, selecting the pixel with the smallest B1/7 for a given period to represent a cloud-free pixel is equivalent to choosing the pixel in which the sum of NDVI_SWIR and Θ is the smallest.

Underlying Principles of Saturation

The HSI model scales color in hue, saturation, and intensity spaces. Hue characterizes the main color received by the observer, saturation represents the degree to which the solid color is diluted by white, and intensity is a measure of the brightness of the color. Thus, saturation decreases as the amount of white in the mix increases33 and can be defined as follows:

S = 1 − [3/(R + G + B)] min(R, G, B),  (5)
where S represents saturation, and R, G, and B represent red, green, and blue rays, respectively; all variables lie between 0 and 1. The min(R,G,B) function finds the variable with the lowest value.
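Eq. (5) translates directly into a small helper; the zero-sum guard is an added convention, not part of the paper's formula:

```python
def saturation(r, g, b):
    """HSI saturation of Eq. (5); inputs are reflectances in [0, 1]."""
    total = r + g + b
    if total == 0:
        return 0.0  # added convention: a black pixel carries no color to dilute
    return 1.0 - 3.0 * min(r, g, b) / total
```

For MOD09GA, R, G, and B would be taken from bands 1, 4, and 3; a pure gray pixel such as (0.5, 0.5, 0.5) yields S = 0, while (0.3, 0.2, 0.1) yields S = 0.5.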

Pixels can be classified as cloud-free, cloud shadow, or cloud pixels depending on the locations of the sensor, sun, and clouds (Fig. 1). The saturation of cloud-free, cloud shadow, and cloud pixels can be expressed as shown in Eqs. (6), (7), and (8), respectively.

Sa = 1 − [3/(rR + rG + rB)] min(rR, rG, rB),  (6)

where Sa is the saturation of a cloud-free pixel, and rR, rG, and rB are the reflectances of MODIS bands 1, 4, and 3, respectively.

Sb = 1 − [3/(tR·rR + tG·rG + tB·rB)] min(tR·rR, tG·rG, tB·rB),  (7)

where Sb represents the saturation of a cloud-shadow pixel, t is the transmissivity of the cloud, and tR, tG, and tB are the transmissivity values of the cloud for MODIS bands 1, 4, and 3, respectively.

Sc = 1 − 3 min(tR·rR + ρR, tG·rG + ρG, tB·rB + ρB) / [(tR·rR + tG·rG + tB·rB) + ρR + ρG + ρB],  (8)

ρ = 1 − t,  (9)

Fig. 1: A model map of cloud-free, cloud shadow, and cloud pixels.

In Eq. (8), Sc is the saturation of a cloud pixel, ρ is the reflectance of the cloud, and ρR, ρG, and ρB are the reflectances of the cloud for MODIS bands 1, 4, and 3, respectively.

The diameters of droplets and dust in clouds and fog in the atmosphere are typically in the range 5 to 100 μm, which is much greater than the wavelength range of visible light. Therefore, random scattering occurs when visible light passes through these particles.34 The scattering coefficients are the same for the red, green, and blue bands; therefore, Eqs. (7) and (8) can be simplified to produce Eqs. (10) and (11), respectively.

Sb = 1 − [3/(t (rR + rG + rB))] t·min(rR, rG, rB),  (10)

Sc = 1 − 3 [t·min(rR, rG, rB) + ρ] / [t (rR + rG + rB) + 3ρ].  (11)

When the cloud is so thick that t approaches 0, both Sb and Sc approach 0; this is in accordance with our knowledge that clouds and fog are typically white or gray. Moreover, when thinner clouds or fewer aerosols are present in the atmosphere, more visible light reaches the Earth’s surface; in such cases, Sb is equal to Sa, and Sc is smaller than Sa according to Eqs. (10) and (11).
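This limiting behavior can be checked numerically with the simplified forms, under the assumption of a single transmissivity t and a single cloud reflectance ρ applied equally to all three bands (so the denominator of Eq. (11) carries a 3ρ term); the helper names are illustrative:

```python
def sat(r, g, b):
    """Cloud-free saturation Sa, as in Eq. (6)."""
    return 1.0 - 3.0 * min(r, g, b) / (r + g + b)

def sat_shadow(r, g, b, t):
    """Shadow saturation Sb of Eq. (10); t cancels, so Sb == Sa for any t > 0."""
    return 1.0 - 3.0 * t * min(r, g, b) / (t * (r + g + b))

def sat_cloud(r, g, b, t, rho):
    """Cloud saturation Sc of Eq. (11), equal cloud reflectance rho per band."""
    return 1.0 - 3.0 * (t * min(r, g, b) + rho) / (t * (r + g + b) + 3.0 * rho)
```

For a thick cloud (t → 0), sat_cloud tends to 0; for a thinner cloud, sat_shadow equals sat while sat_cloud remains below it.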

Algorithm for Integrated Cloud-Free Image Synthesis

We developed a combined algorithm for cloud-free image compositing by combining B1/7 with saturation. Our model adopts the land surface reflectance images for a specific period as the candidate dataset (MOD09GA) and calculates B1/7 from the surface reflectance data of bands 1 and 7 and the pixel saturation S from the surface reflectance data of bands 1, 4, and 3. If the land surface mask (LSM) is 1 (i.e., the pixel is a land pixel), B1/7 lies in (0, 1], and B1 < 0.3, the pixel exhibiting the lowest value of B1/7 is selected as the cloud-free pixel; otherwise, the pixel exhibiting the greatest saturation is selected. The LSM is a land/water flag and can be extracted from the 1-km State QA Descriptions (16-bit) layer in MOD09GA. Our newly developed model can be used to extract an index image, HC_time, in which the value of each pixel corresponds to the date of the image least contaminated by cloud, selected from a set of multitemporal remote-sensing images obtained within a certain period. Cloud-free surface reflectance images (LSREF_clear) can then be built based on this index image. The compositing workflow is presented in Fig. 2.
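The per-pixel selection rule can be sketched as follows. The dictionary keys and function names are illustrative (they are not part of the MOD09GA format), and the returned index plays the role of the HC_time date:

```python
def select_cloud_free(observations, is_land):
    """Pick the cloud-free observation index for one pixel.

    observations: one dict per day with surface reflectances 'b1', 'b3',
    'b4', 'b7' (illustrative field names); is_land: LSM flag (True = land).
    """
    def b17(o):
        return o['b1'] / o['b7'] if o['b7'] != 0 else float('inf')

    def sat(o):
        r, g, b = o['b1'], o['b4'], o['b3']
        return 1.0 - 3.0 * min(r, g, b) / (r + g + b)

    # Land pixels with 0 < B1/7 <= 1 and B1 < 0.3: take the minimum-B1/7 date.
    land_candidates = [i for i, o in enumerate(observations)
                       if is_land and 0 < b17(o) <= 1 and o['b1'] < 0.3]
    if land_candidates:
        return min(land_candidates, key=lambda i: b17(observations[i]))
    # Otherwise (water, or no qualifying land observation): maximum saturation.
    return max(range(len(observations)), key=lambda i: sat(observations[i]))
```

Applying this function to every pixel of an 8-day MOD09GA stack would produce the HC_time index image, from which the LSREF_clear composite is assembled.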

Fig. 2: A flow chart illustrating the application of the integrated compositing algorithm.

Data Processing

We employed three samples in the present study, hereafter referred to as samples A, B, and C (Fig. 3). We used sample A to assess the feasibility of distinguishing cloud-free pixels for different land cover types, considering a particular region in northern China (34°53′ to 43°26′ N, 113°49′ to 122°41′ E) that includes Hebei Province, Beijing, Tianjin, Shandong Province, and Bohai Bay. Sample A includes nine land cover classes, divided according to the Land Cover Type Yearly L3 Global 500-m product (MCD12Q1) (Fig. 4).

Fig. 3: A map of the study area. The BaseMap is from ArcGIS Online.

Fig. 4: The land-cover types for sample A.

We used samples B and C to validate the algorithm for deserts, snowy regions, and tropical areas often covered by clouds. The region considered for sample B is located to the south of the Tarim basin and the north of the Tianshan Mountains; 74% and 10% of this sample can be considered deserts and snowy areas, respectively. Conversely, the study region for sample C is located in southern China, adjacent to the South China Sea, in an area often covered by cloud; thus, it was difficult to obtain cloud-free images for this region.

We employed three separate MODIS products, all of which can be downloaded freely using the Warehouse Inventory Search Tool (WIST): the Surface Reflectance 8-Day L3 Global 500 m (MOD09A1), the Surface Reflectance Daily L2G Global 1 km and 500 m (MOD09GA), and the MODIS Land Cover Type Yearly L3 Global 1 km SIN Grid product (MCD12Q1). MOD09A1 provides bands 1 to 7 at a resolution of 500 m in an 8-day gridded level-3 product in sinusoidal projection. Each MOD09A1 pixel contains the best possible level-2 gridded (L2G) observation for a given 8-day period, selected based on high observation coverage, low view angle, the absence of clouds or cloud shadow, and aerosol loading. MOD09GA provides bands 1 to 7 in a daily gridded L2G product in sinusoidal projection, including 500-m reflectance values and 1-km observation and geolocation statistics. MCD12Q1 incorporates five different land cover classification schemes, derived using a supervised decision-tree classification method. We used land cover type 4 of the MODIS-derived net primary production (NPP) scheme and type 5. Further details of the data used are presented in Table 1.

Table 1: MODIS data list.

All preprocessing of these data was performed using the MODIS Reprojection Tool (MRT). Sample A was used to conduct geographic transformations and to mosaic the four parts of the data. Finally, spatial subsets of the images were obtained using a 500 × 500 m mask for the study area. Samples B and C were transformed from the hierarchical data format (HDF) to the tagged image file format (TIFF), for which the projection was sinusoidal and the resolution was 463.31 × 463.31 m.

Analysis of B1/7 Ratio

To estimate the ability of B1/7 to identify cloud-free pixels according to the MODIS land cover type 4 for NPP, we reclassified the land cover into four groups: water, vegetation (including all vegetation cover classes), nonvegetation, and urban. We selected the samples according to the 1-km State QA Descriptions in MOD09GA (https://lpdaac.usgs.gov/products/modis_products_table/mod09ga), which includes several features including cloud state and a cloud shadow mask. The scatter maps of B1 and B7 for these samples for different land cover types under cloud-free, cloud, and cloud-shadow conditions are illustrated in Fig. 5.

Fig. 5: A scatter map for B1 and B7: (a) cloud-free pixels used for cloud comparison, (b) cloud pixels, (c) cloud-free pixels used for shadow comparison, and (d) cloud-shadow pixels. C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples for (a) and (b) and for (c) and (d) were obtained from different daily images of the same pixels.

Some fundamental conclusions can be deduced from the comparison of Figs. 5(a) and 5(b). B1/7 was found to be greater than 1 for cloud pixels and very high (i.e., greater than that under cloud-free conditions) for water areas under cloudy conditions. For urban and vegetation areas under cloud-free conditions, B1/7 was found to be less than and greater than 1, respectively. Conversely, under cloudy conditions, we found B1/7 to be around 1 and greater than the value under cloud-free conditions. Comparison of Figs. 5(c) and 5(d) demonstrates that B1/7 of shadow pixels for land areas is around 1 and is higher than B1/7 under cloud-free conditions. However, for water areas, we found B1/7 to be less than 0 (primarily because B1 is less than 0), except under cloud-free conditions. In summary, our results demonstrate that B1/7 is typically higher for cloud pixels than for cloud-free pixels (Table 2).

Table 2: Slopes of the regression equation (intercept = 0) for different land cover types under different cloud cover conditions.

To investigate our results further, we plotted B1/7 values for cloud-free, cloud shadow, and cloud conditions for the different land cover types (Fig. 6) using the data presented in Table 2. We found B1/7 to be in the range 0 to 1 for land areas under cloud-free conditions; moreover, our results suggest that B1/7 is always lower under cloud-free conditions than under cloudy conditions. Comparison of shadow and cloud-free pixels demonstrates the existence of several pixels, in which B1/7 is lower for shadow pixels than for cloud-free pixels. However, no such pixels were observed over water areas. Therefore, in cases where a given pixel could be considered to be cloud-free, cloud shadow, or cloud depending on the observation time, we selected the period with the minimum value of B1/7 to represent the period with clear conditions.

Fig. 6: A comparison of the B1/7 values for cloud-free, cloud shadow, and cloud conditions for the following land cover types: (a) urban, (b) water, (c) vegetation (Veg), (d) nonvegetation (NoV). C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples for (a) and (b) and for (c) and (d) were obtained from different daily images of the same pixels.

Analysis of Saturation

To investigate whether the pixel exhibiting the maximum saturation can be considered a cloud-free pixel, we calculated saturation from the MODIS land surface reflectance using Eq. (5) (Fig. 7). In particular, we calculated R, G, and B from bands 1, 4, and 3, respectively. In general, we found the saturation of cloud pixels to be lower than that of cloud-free pixels for all types of land cover, although the saturation of cloud-shadow pixels for water and nonvegetation areas was also found to be lower than that of cloud-free pixels. Moreover, our results suggest that the saturation of shadow pixels increases with increasing percentage of vegetation cover [Figs. 7(a), 7(c), and 7(d)]. For land cover classes including vegetation, urban, and nonvegetation, the saturation of cloud-free pixels is typically greater than that of cloud and shadow pixels. Based on the above analysis, we suggest that the maximum saturation can act as a good cloud-free pixel index for water and nonvegetation areas.

Fig. 7: A comparison of the saturation values for cloud-free, shadow, and cloud conditions for the following land cover types: (a) urban, (b) water, (c) vegetation (Veg), and (d) nonvegetation (NoV). C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples are the same as those shown in Fig. 6.

The algorithm proposed here for compositing cloud-free images is similar to that of MOD09A1: both select cloud-free pixels from MODIS daily images and combine these pixels. We investigated samples A, B, and C using both the MOD09A1 method and our compositing technique, both qualitatively and quantitatively, through visual evaluation and analysis of indicators.

Visual Evaluation

Based on visual evaluation, our proposed algorithm performs better in the selection of cloud-free pixels over water (including marine, coastal, lake, reservoir, and river environments) than the MOD09A1 product (Fig. 8). For example, MOD09A1 erroneously defined a cloud pixel as water under clear-sky conditions [red circles in Figs. 8(a1) and 8(a2)]. Thus, our algorithm performs better than MOD09A1 in such cloud regions, although our method is less spatially consistent than MOD09A1 [e.g., the rectangle for sample B in Figs. 8(b1) and 8(b2)]. Moreover, for sample C [Figs. 8(c1) and 8(c2)], our algorithm removed the cloud in the lower left of the image, whereas MOD09A1 did not. Finally, the ocean color in the cloud-free images obtained using our method is more spatially homogeneous than that obtained from the MOD09A1 product.

Fig. 8: MOD09A1 and cloud-free images composited by the integrated method for samples A, B, and C: (a1) MOD09A1 for sample A; (a2) cloud-free image composited by the integrated method for sample A; (b1) MOD09A1 for sample B; (b2) cloud-free image composited by the integrated method for sample B; (c1) MOD09A1 for sample C; and (c2) cloud-free image composited by the integrated method for sample C.

Indicator Evaluation

Residual cloud, cloud shadow, and mixed pixels can be found in the composite image, and quantitative evaluation of the relative proportions of these residual pixels is essential. The 1-km State QA Descriptions (16-bit) dataset of the MOD09GA product includes 11 indicators, including five quantitative indicators that are useful for the estimation of the quality of image composition: cloud state, internal cloud, cloud shadow, cirrus detected, and pixels adjacent to cloud.

In the original dataset of daily images, each pixel falls into one of three classes: every day is cloudy; every day is noncloudy; or the pixel is cloudy on at least one and at most seven days of the 8-day period. For this third class, the accuracy of the cloud-free composited image was evaluated according to Eq. (12).

Pi = Mi / M_clear_cloud_i,  (12)

where i denotes a state (cloud-free, cloud, cloud shadow, cirrus, and so on), Mi represents the number of pixels of a given state remaining in the cloud-free image, and M_clear_cloud_i represents the number of pixels exhibiting that state on at least one and at most seven days throughout the 8-day period. Based on Eq. (12), we conducted a comparative evaluation of the quality of the images produced using our compositing technique (Table 3).
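Eq. (12) can be evaluated with a minimal sketch over flattened pixel arrays; the argument names are illustrative, and the result is returned as a percentage:

```python
def p_i(composite_flagged, days_flagged, n_days=8):
    """Residual percentage Pi of Eq. (12) for one state i.

    composite_flagged: per-pixel bool, True if state i remains in the
    composited image; days_flagged: per-pixel count of days (out of n_days)
    on which state i occurred. Only pixels flagged on at least 1 and at most
    n_days - 1 days enter the denominator.
    """
    in_class = [1 <= d <= n_days - 1 for d in days_flagged]
    denom = sum(in_class)
    num = sum(1 for f, ok in zip(composite_flagged, in_class) if f and ok)
    return 100.0 * num / denom if denom else 0.0
```

For example, if two pixels fall into the "sometimes flagged" class and the composite retains the state in one of them, Pi is 50%.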

Table 3: A comparative evaluation of the quality of our composited cloud-free images (units: %).

It is clear that our method performs well in removing cloud and cirrus pixels but less well for cloud-shadow pixels. In the composite image, the internal-cloud percentage in the coastal areas of southeastern China (sample C) is lower than that in the MOD09A1 product, indicating that fewer residual cloud pixels remain in the composite image than in the MOD09A1 product. Moreover, slightly less high cirrus can be found in the composite image than in the MOD09A1 product, although there is slightly more medium-height and small cirrus in the composite image. However, residual small cirrus is less than 5% in samples A and B and 7.15% in sample C. More residual cloud-shadow pixels are present in the composite image than in the MOD09A1 product, with residual cloud shadow greater than 30% in the Northwest Territories (sample B) and the coastal areas of southeastern China (sample C).

Table 4 presents average values of the sensor zenith angle, scattering angle, and relative azimuth angle for samples A, B, and C for both the composite image and the MOD09A1 product. It is clear that, regardless of whether the B1/7 ratio or the maximum saturation method is adopted, the average sensor zenith angle is greater for the composite image than for the MOD09A1 product. This occurs because our method prefers nonnadir pixels, whereas the MOD09A1 product prefers nadir pixels. Furthermore, the average scattering angle for the composite image is less than that for the MOD09A1 product, and the average scattering angle obtained with the maximum saturation method is less than that obtained with the B1/7 ratio method. This may occur because our method mistakes cloud-shadow pixels for cloud-free pixels, particularly when the maximum saturation method is adopted. The average relative azimuth angle for the composite image lies in the range 90 to 270 deg, whereas that of the MOD09A1 product is less than 45 deg or greater than 315 deg. This likely occurs because our method preferentially selects forward hemisphere pixels as cloud-free pixels, whereas the MOD09A1 product preferentially selects backward hemisphere pixels. Thus, it is clear that our method differs markedly from the MOD09A1 algorithm.

Table 4: A comparative analysis of scattering angle, sensor zenith angle, and relative azimuth angle (units: deg).

The selection of the maximum-saturation pixel as a cloud-free pixel is likely the key factor causing the differences apparent from Table 4. The atmospheric extinction coefficients differ for the red, green, and blue bands; the longer the transmission path of light through the atmosphere, the greater the energy differences among the red, green, and blue components, and hence the greater the saturation. Therefore, pixels exhibiting large zenith angles are likely to be selected as cloud-free pixels, and the maximum saturation method is more likely to preferentially select such pixels than the B1/7 method. When backward (i.e., away from the sun) observations are used, the transmission distance for reflected sunlight is short and the pixel saturation is low; conversely, when forward (i.e., toward the sun) observations are used, the transmission distance for scattered sunlight is long and the pixel saturation is high. Therefore, our method preferentially selects forward observation pixels as cloud-free pixels. The incident light of cloud-free pixels is primarily reflected sunlight, whereas that of shadow pixels is primarily light scattered by surrounding objects. Therefore, the saturation of a cloud-shadow pixel is greater than that of a cloud-free pixel, and shadow pixels can easily be mistaken for cloud-free pixels.

To verify whether our results have been affected by land-use type, we selected two indicators, internal cloud and cloud shadow, and used Eq. (12) to calculate Pi for residual cloud and shadow in the composited cloud-free image for the water, vegetation, urban, snow, and nonvegetation land cover categories (Table 5). For this, we used land cover data from the MCD12Q1 type 5 dataset. Water bodies, nonvegetated areas, urban and built-up areas, and snow and ice were extracted directly from the dataset; the vegetation category was obtained by combining the evergreen needleleaf trees, evergreen broadleaf trees, deciduous needleleaf trees, and deciduous broadleaf trees classes.

Table 5: Pi analysis for residual cloud and shadow in the composited cloud-free images for different land cover types.

It is clear that, in the cloud-free images for North China (sample A) and the Northwest Territories (sample B), the percentage of residual cloud for the vegetation, built-up, bare ground, and other land use types is small but still greater than the corresponding percentages for the MOD09A1 product. In the cloud-free image for the Northwest Territories (sample B), the percentage of residual cloud pixels in snow areas was found to be 43.29% using our method versus 27.3% for the MOD09A1 product. Thus, it is clear that the MOD09A1 product performs significantly better than our method in this respect. In the cloud-free image of the coastal areas of southeastern China (sample C), considerably more residual cloud pixels remain over water than in the MOD09A1 product. Moreover, in the Northwest Territories (sample B) and the coastal areas of southeastern China (sample C), the number of residual cloud-shadow pixels is relatively high, with percentages greater than 20%. In the Northwest Territories (sample B), the composited image was produced primarily using the B1/7 method; here, the Pi values for vegetation were found to be significantly lower than those for water, indicating that the B1/7 method performs better than the saturation method in removing cloud shadow.

Our newly developed integrated cloud-free image compositing algorithm requires only bands 1, 3, 4, and 7 of MODIS images, all of which are readily available, and its complexity is lower than that of the MOD09A1 algorithm. The B1/7 method is suitable for extracting cloud-free pixels over land, whereas the saturation method is suitable for extracting cloud-free pixels over marine environments, coastal areas, and other water bodies. Our proposed method can eliminate cloud effects, although it produces slightly higher percentage area ratios of residual cloud pixels and pixels adjacent to cloud than the MOD09A1 product. Moreover, our proposed method performs well in removing cirrus, particularly high cirrus. The accuracy of our method depends on land cover type: the algorithm performs best in removing cloud and shadow over vegetation, followed by urban areas, bare land, and water, but performs relatively poorly over snow and ice. Conversely, the MOD09A1 product exhibits no such variation in performance with land cover type. Moreover, our algorithm preferentially selects cloud-free image pixels with larger zenith angles and smaller scattering angles and prefers forward observations over backward observations, in stark contrast to the MOD09A1 product. However, our method has some limitations in compositing cloud-free images. For example, it can only process multiday images obtained in daytime, and it produces more residual cloud pixels than the MOD09A1 product, although it performs similarly to MOD09A1 in removing cloud and shadow over ice and snow areas. These limitations require further improvement.
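The per-pixel selection rule of the integrated algorithm, as summarized in the abstract, can be sketched as follows. This is a hedged reconstruction, not the authors' code: the function name, the epsilon guard, and the HSV-style saturation from bands 1/4/3 (red/green/blue) are assumptions; the thresholds (0 < B1/7 < 1 and band-1 reflectance < 0.3) are taken from the paper.

```python
import numpy as np

def composite_pixel(b1, b3, b4, b7):
    """Pick the cloud-free observation from an 8-day stack for one pixel.

    b1, b3, b4, b7 : 1-D arrays of daily MOD09GA surface reflectance
    (band 1 = red, band 3 = blue, band 4 = green, band 7 = SWIR).
    Returns the index of the day selected as cloud free.
    """
    eps = 1e-6
    ratio = b1 / (b7 + eps)  # the B1/7 indicator

    # HSV-style saturation from the red, green, and blue bands
    rgb = np.stack([b1, b4, b3])
    saturation = (rgb.max(axis=0) - rgb.min(axis=0)) / (rgb.max(axis=0) + eps)

    # Paper's land condition: 0 < B1/7 < 1 and band-1 reflectance < 0.3
    land_like = (ratio > 0) & (ratio < 1) & (b1 < 0.3)
    if land_like.any():
        # over land, the lowest B1/7 marks the cloud-free candidate
        candidates = np.where(land_like, ratio, np.inf)
        return int(np.argmin(candidates))
    # otherwise (water, bright surfaces), take the greatest saturation
    return int(np.argmax(saturation))
```

Applying this selection independently at every pixel of the 8-day stack yields the composited cloud-free image.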

This work was supported by the National Natural Science Foundation of China (Nos. 41240004, 40871073, and 40971118), the National High Technology Research and Development Program (2009AA12Z225), Key Basic Research Projects of Hebei Province (No. 08966712D), the Natural Science Foundation of Hebei Province (No. D2009000299), and the Doctor Fund projects of Hebei Normal University (No. 2011B03). MODIS data were downloaded from the Warehouse Inventory Search Tool (WIST), and the basemap of Fig. 1 was obtained free of charge from ArcGIS Online. The authors wish to thank the USGS and Esri for the free data, and Professor Chen Hui and the staff of the Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences for fruitful discussions. The authors also thank the anonymous reviewers for their assistance in refining the manuscript.



Hai-bing Xiang is pursuing his PhD at the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China. He received his bachelor's degree in 2008 and his master's degree in 2011 from the Department of Cartography and Geographic Information Systems at Hebei Normal University, Shijiazhuang, China. He has participated in a number of national and provincial Natural Science Foundation research projects, covering multiscale methods for calculating population density, detection and removal of stripes in MODIS images, algorithms for MODIS clear-sky image compositing, and ground temperature retrieval algorithms. Currently, he is engaged in research on the retrieval of forest structural parameters from remote sensing.


Jin-song Liu received his PhD in ecology from the College of Life Science, Hebei Normal University, Shijiazhuang, China, in 2011. Since 2011, he has been a professor of GIS in the College of Mathematics and Information Science, Hebei Normal University. From 1995 to 2006, he taught GIS in the Department of Geography, Hebei Normal University. Since 2007, he has worked in the Hebei Key Laboratory of Mathematic Calculation and Application, focusing on remote sensing and population geography, especially applications of MODIS images and the population geographic information system of Hebei province. Currently, he is managing director and secretary of the Hebei Population Society and a reviewer for the Journal of Remote Sensing. His major research interests include quantitative remote sensing inversion, population geography, and GIS.


Chun-xiang Cao received her MSc degree from Kyoto Prefectural University, Japan, in 1996, and her DSc degree from Hiroshima University, Japan, in 2000. She is a director at the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China. Her research fields include environmental health assessment and remote sensing of forest health, among others. She has managed many major research projects, including National Natural Science Foundation of China, National "973," and National "863" projects. In the past five years, she has published 31 SCI articles and an academic monograph, Diagnosis of Environmental Health by Remote Sensing.


Min Xu is an assistant research fellow at the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China. He received his doctoral degree from the Graduate School of the Chinese Academy of Sciences and his master's degree from Wuhan University, China. He was named a "Triple-A" outstanding student of the Graduate School of the Chinese Academy of Sciences in 2009. His current research involves the application of RS and GIS technologies in public health; he has published 36 research papers, including 16 SCI-indexed papers.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.

Citation

Hai-bing Xiang, Jin-song Liu, Chun-xiang Cao, and Min Xu, "Algorithms for Moderate Resolution Imaging Spectroradiometer cloud-free image compositing," J. Appl. Remote Sens. 7(1), 073486 (Nov 05, 2013). http://dx.doi.org/10.1117/1.JRS.7.073486


Figures

Fig. 1 A model map of cloud-free, cloud shadow, and cloud pixels.

Fig. 2 A flow chart illustrating the application of the integrated compositing algorithm.

Fig. 3 A map of the study area. The basemap is from ArcGIS Online.

Fig. 4 The land-cover types for sample A.

Fig. 5 A scatter map for B1 and B7: (a) cloud-free pixels used for cloud comparison, (b) cloud pixels, (c) cloud-free pixels used for shadow comparison, and (d) cloud-shadow pixels. C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples for (a) and (b) and for (c) and (d) were obtained from different daily images of the same pixels.

Fig. 6 A comparison of the B1/7 values for cloud-free, cloud shadow, and cloud conditions for the following land cover types: (a) urban, (b) water, (c) vegetation (Veg), and (d) nonvegetation (NoV). C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples for (a) and (b) and for (c) and (d) were obtained from different daily images of the same pixels.

Fig. 7 A comparison of the saturation values for cloud-free, shadow, and cloud conditions for the following land cover types: (a) urban, (b) water, (c) vegetation (Veg), and (d) nonvegetation (NoV). C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples are the same as those shown in Fig. 6.

Fig. 8 MOD09A1 and cloud-free images composited by the integrated method for samples A, B, and C: (a1) MOD09A1 for sample A; (a2) cloud-free image composited by the integrated method for sample A; (b1) MOD09A1 for sample B; (b2) cloud-free image composited by the integrated method for sample B; (c1) MOD09A1 for sample C; and (c2) cloud-free image composited by the integrated method for sample C.

Tables

Table 1 MODIS data list.

Table 2 Slopes of the regression equation (intercept = 0) for different land cover types under different cloud cover conditions.

Table 3 A comparative evaluation of the quality of our composited cloud-free images (units: %).

Table 4 A comparative analysis of scattering angle, sensor zenith angle, and relative azimuth angle (units: deg).

Table 5 Pi analysis for residual cloud and shadow in the composited cloud-free images for different land cover types.

References

Stowe L. L., Vemury S. K., Rao A. V., "AVHRR clear-sky radiation data sets at NOAA/NESDIS," Adv. Space Res. 14(1), 113–116 (1994).
Yang C. et al., "Study on cloud detection and cloud phase retrieval in Changchun area," Remote Sens. Technol. Appl. 25(2), 245–250 (2010).
He Q.-J., "A daytime cloud detection algorithm for FY-3A/VIRR data," Int. J. Remote Sens. 32(21), 6811–6822 (2011).
Ma Z.-H. et al., "Cloud removing from MODIS based on spectrum analysis," Remote Sens. Inf. (4), 3–8 (2009).
Poli G. et al., "Dynamic threshold cloud detection algorithm improvement for AVHRR and SEVIRI images," in 2010 IEEE Int. Geoscience and Remote Sensing Symposium, pp. 4146–4149, IEEE, New York (2010).
Anzalone A., Isgro F., Tegolo D., "Combining fuzzy c-mean and normalized convolution for cloud detection in IR images," in Fuzzy Logic and Applications, pp. 140–147, Springer-Verlag, Berlin, Heidelberg (2009).
Aires F. et al., "A land and ocean microwave cloud classification algorithm derived from AMSU-A and -B, trained using MSG-SEVIRI infrared and visible observations," Mon. Weather Rev. 139(8), 2347–2366 (2011).
Qiu D. et al., "The study on removal of satellite image shadow based on SAM and MASK," Remote Sens. Inf. (5), 12 (2010).
Luo Y., Trishchenko A. P., Khlopenkov K. V., "Developing clear-sky, cloud and cloud shadow mask for producing clear-sky composites at 250-meter spatial resolution for the seven MODIS land bands over Canada and North America," Remote Sens. Environ. 112(12), 4167–4185 (2008).
Marais I. V. Z., Du Preez J. A., Steyn W. H., "An optimal image transform for threshold-based cloud detection using heteroscedastic discriminant analysis," Int. J. Remote Sens. 32(6), 1713–1729 (2011).
Mackie S. et al., "Generalized Bayesian cloud detection for satellite imagery. Part 1: technique and validation for night-time imagery over land and sea," Int. J. Remote Sens. 31(10), 2573–2594 (2010).
Mackie S. et al., "Generalized Bayesian cloud detection for satellite imagery. Part 2: technique and validation for daytime imagery," Int. J. Remote Sens. 31(10), 2595–2621 (2010).
Lyapustin A., Wang Y., Frey R., "An automatic cloud mask algorithm based on time series of MODIS measurements," J. Geophys. Res. Atmos. 113, D16207 (2008).
Tang H., "MODIS09 image time series anomaly detection methods based on statistical characteristics of spectrum time-varying," in Remote Sensing Quantitative Retrieve Algorithm Symposium, Beijing Normal University, Beijing (2010).
Hagolle O. et al., "A multi-temporal method for cloud detection, applied to FORMOSAT-2, VENμS, LANDSAT and SENTINEL-2 images," Remote Sens. Environ. 114(8), 1747–1755 (2010).
Fraser A. D., Massom R. A., Michael K. J., "A method for compositing polar MODIS satellite images to remove cloud cover for landfast sea-ice detection," IEEE Trans. Geosci. Remote Sens. 47(9), 3272–3282 (2009).
Rossow W. B., Garder L. C., Lacis A. A., "Global, seasonal cloud variations from satellite radiance measurements. Part I: sensitivity of analysis," J. Clim. 2(5), 419–458 (1989).
Rossow W. B., Lacis A. A., "Global, seasonal cloud variations from satellite radiance measurements. Part II: cloud properties and radiative effects," J. Clim. 3(11), 1204–1253 (1990).
Kriebel K. T., Saunders R. W., Gesell G., "Optical properties of clouds derived from fully cloudy AVHRR pixels," Beitr. Phys. Atmos. 62(3), 165–171 (1989).
Gesell G., "An algorithm for snow and ice detection using AVHRR data: an extension to the APOLLO software package," Int. J. Remote Sens. 10(4–5), 897–905 (1989).
Stowe L. L. et al., "Global distribution of cloud cover derived from NOAA/AVHRR operational satellite data," Adv. Space Res. 11(3), 51–54 (1991).
Ackerman S. A. et al., "Cloud detection with MODIS. Part II: validation," J. Atmos. Oceanic Technol. 25(7), 1073–1086 (2008).
Frey R. A. et al., "Cloud detection with MODIS. Part I: improvements in the MODIS cloud mask for collection 5," J. Atmos. Oceanic Technol. 25(7), 1057–1072 (2008).
Gafurov A., Bárdossy A., "Cloud removal methodology from MODIS snow cover product," Hydrol. Earth Syst. Sci. 13(7), 1361–1373 (2009).
Dozier J. et al., "Time–space continuity of daily maps of fractional snow cover and albedo from MODIS," Adv. Water Resour. 31(11), 1515–1526 (2008).
Xia Q. et al., "Estimation of daily cloud-free, snow-covered areas from MODIS based on variational interpolation," Water Resour. Res. 48(9), W09523 (2012).
Vermote E. F., Kotchenova S. Y., Ray J. P., MODIS Surface Reflectance User's Guide, 2011, http://modis-sr.ltdri.org/products/MOD09_UserGuide_v1_3.pdf (22 April 2011).
Kaufman Y. J., Tanre D., "Atmospherically resistant vegetation index (ARVI) for EOS-MODIS," IEEE Trans. Geosci. Remote Sens. 30(2), 261–270 (1992).
Vermote E. F., Vermeulen A., "Atmospheric Correction Algorithm: Spectral Reflectances (MOD 09)," 1999, p. 26, http://modis.gsfc.nasa.gov/data/atbd/atbd_mod08.pdf.
Kaufman Y. J. et al., "The MODIS 2.1 μm channel: correlation with visible reflectance for use in remote sensing of aerosol," IEEE Trans. Geosci. Remote Sens. 35(5), 1286–1300 (1997).
Levy R. C. et al., "Second-generation operational algorithm: retrieval of aerosol properties over land from inversion of Moderate Resolution Imaging Spectroradiometer spectral reflectance," J. Geophys. Res. 112, D13211 (2007).
Xu X., Remote Sensing Physics, Peking University Press, Beijing (2006).
Gonzalez R. C., Woods R. E., Digital Image Processing, 2nd ed., Publishing House of Electronics Industry, Beijing (2007).
Zhao Y., The Principle and Application of Remote Sensing Application Analysis, Science Press, Beijing (2009).
