Nonlocal weighted sparse unmixing based on global search and parallel optimization
Yongxin Li, Wenxing Bao, Kewen Qu, Xiangfei Shen
11 January 2021 (Open Access)
Abstract

Sparse unmixing (SU) can represent an observed image using pure spectral signatures and corresponding fractional abundance from a large spectral library and is an important technique in hyperspectral unmixing. However, the existing SU algorithms mainly exploit spatial information from a fixed neighborhood system, which is not sufficient. To solve this problem, we propose a nonlocal weighted SU algorithm based on global search (G-NLWSU). By exploring the nonlocal similarity of the hyperspectral image, the weights of pixels are calculated to form a matrix to weight the abundance matrix. Specifically, G-NLWSU first searches for a similar group of each pixel in the global scope then uses singular value decomposition to denoise and finally obtains the weight matrix by considering correlations between similar pixels. To reduce the execution burden of G-NLWSU, we propose a parallel computing version of G-NLWSU, named PG-NLWSU, which integrates compute unified device architecture-based parallel computing into G-NLWSU to accelerate the search for groups of nonlocally similar pixels. Our proposed algorithms shed new light on SU by considering a new exploitation process of spatial information and parallel computing scenario. Experimental results conducted on simulated and real datasets show that PG-NLWSU is superior to comparison algorithms.

1. Introduction

The rapid development of hyperspectral remote sensing technology has resulted in its wide use, such as in geological prospecting, target detection, and environmental surveillance.1 However, owing to the low spatial resolution of the sensor, the spatial complexity will lead to the existence of mixed pixels in a hyperspectral image (HSI), i.e., different substances form a pixel.2 Linear spectral unmixing, also known as a linear mixing model (LMM), is a standard method to solve the problem of mixed pixels, which assumes that the observed spectral signal can be expressed linearly through a set of pure spectral signatures and their corresponding abundance weights.3

LMMs can be categorized as either unsupervised- or semisupervised-based unmixing algorithms. Unsupervised unmixing methods require endmember extraction and abundance estimation steps,4,5 and can be classified as geometric-,6–18 statistic-,19–24 or spatial-information-based.25–29 Geometric-based algorithms usually assume that the desired pure pixels exist in an image, but they tend to fail when the image mixing degree is high. Statistic-based algorithms can elegantly handle a high mixing degree, but they consider only spectral information and ignore correlations between pixels. Spatial-information-based algorithms fuse spatial and spectral information for endmember extraction and have attracted considerable attention.

Compared with the unsupervised methods, semisupervised unmixing algorithms, also known as sparse unmixing (SU) algorithms, solve the spectral unmixing problem by introducing a large public spectral library. SU assumes that the observed spectral signal can be represented by a linear combination of a few spectral signatures in the spectral library. Therefore, the SU problem is equivalent to searching for an optimal subset of the spectral library to simulate the mixed pixels in the scene.30 However, the number of spectra in the library is much larger than the number of endmembers, implying that the abundance matrix usually has only a few nonzero entries, i.e., it is sparse. Bioucas-Dias and Figueiredo31 proposed a variable splitting and augmented Lagrangian-based sparse unmixing algorithm (SUnSAL), which uses the alternating direction method of multipliers (ADMM) to optimize this constrained sparse regression problem. To enhance the sparsity of the solution, Iordache et al.32 proposed a SUnSAL extension based on collaborative sparsity, i.e., the ℓ2,1 norm, to simultaneously boost row sparseness of the abundance matrix. Sun et al.33 proposed the ℓp (0 < p < 1) norm as a new sparsity regularizer. Cheng et al.34 introduced weighted ℓ1 norm minimization to penalize nonzero terms in sparse solutions.

An HSI not only contains a large amount of spectral information but also rich spatial information. Iordache et al.35 proposed an SU algorithm with variable splitting augmented Lagrangian and total variation (SUnSAL-TV), where the total variation (TV) regularizer explores spatial neighborhood information. When only the TV regularizer is used, the algorithm is called nonnegative constrained least squares TV (NCLS-TV). Zhang et al.36 proposed a local collaborative SU method that combines neighborhood weights with collaborative sparse regression to achieve better unmixing results than SUnSAL-TV. Based on the weighted ℓ1 strategy, Wang et al.37,38 proposed a double weighted SU algorithm to enhance the sparsity of abundances from the spectral and spatial domains, along with a variant with TV regularization. Wang et al.39 introduced graph regularization to improve unmixing performance. Zhang et al.40 proposed a spectral–spatial weighted SU model that encodes the neighborhood information of the image in a weight matrix. Zhong et al.41 proposed a nonlocal TV regularizer for sparse unmixing (NLSU). Feng et al.42 improved NLSU and extracted fixed spatial information from the original image.

However, unmixing algorithms that combine spatial information still face two problems: (1) global spatial information is not considered; local and nonlocal methods find similar pixels only within a neighborhood or a large local block, respectively; (2) the spatial regularization term makes the model more complicated. In this paper, we propose a nonlocal weighted SU model, PG-NLWSU, based on global search and parallel acceleration. Inspired by the nonlocal means algorithm, PG-NLWSU first searches for similar blocks of each pixel over the whole image and uses singular value decomposition (SVD) to denoise them. The explored nonlocal information is used to weight the abundance matrix under the ℓ1 norm to generate a solution with stronger sparsity. Searching for similar blocks is generally time consuming, so PG-NLWSU uses parallel computing to enhance its performance. We make three primary contributions:

  • 1. We propose a global search strategy to exploit nonlocal similar blocks from whole images.

  • 2. The exploited nonlocal information is used as a weight matrix to penalize the abundance matrix under the ℓ1 norm.

  • 3. Parallel computing is applied to spatial information extraction to improve the algorithm’s efficiency.

Several experiments conducted on synthetic and real hyperspectral datasets indicate that PG-NLWSU significantly improves upon current SU algorithms.

The rest of this article is organized as follows. Section 2 describes SU theory. Section 3 discusses the PG-NLWSU algorithm. Section 4 details experiments on simulated and real datasets. Section 5 discusses the advantages and disadvantages of the algorithm and relates directions of future research.

2. Sparse Unmixing

2.1. LMM

The LMM assumes that the mixed pixels in an HSI can be expressed as a linear combination of the endmember spectrum and their corresponding abundances, given as

Eq. (1)

$$y = M\alpha + n,$$
where $y \in \mathbb{R}^{L\times 1}$, $M \in \mathbb{R}^{L\times q}$, $\alpha \in \mathbb{R}^{q\times 1}$, and $n \in \mathbb{R}^{L\times 1}$ denote the observed mixed-pixel spectrum, the endmember spectral vectors, the abundance vector, and the error term, respectively. $\alpha$ is generally subject to two constraints: the abundance non-negativity constraint (ANC), i.e., $\alpha \ge 0$, and the abundance sum-to-one constraint (ASC), i.e., $\mathbf{1}^{T}\alpha = 1$.

2.2. Sparse Unmixing

SU introduces a large spectral library to the LMM, given by

Eq. (2)

$$Y = AX + N,$$
where $Y \in \mathbb{R}^{L\times P}$ is an HSI with $L$ spectral bands and $P$ pixels, $A \in \mathbb{R}^{L\times M}$ is a spectral library with $M$ endmembers, $X \in \mathbb{R}^{M\times P}$ is the fractional abundance matrix corresponding to all endmembers in the spectral library, and $N \in \mathbb{R}^{L\times P}$ is the error that affects each pixel. Since an HSI is usually composed of only a small number of endmember spectra, the fractional abundance matrix $X$ is sparse; the SU problem can therefore be posed as minimizing a data-fidelity term while regularizing $X$ with a sparsity regularizer

Eq. (3)

$$\min_X \frac{1}{2}\|AX - Y\|_F^2 + \lambda\|X\|_0 \quad \text{s.t.}\ X \ge 0,$$
where $\|\cdot\|_F$ is the Frobenius norm, which measures the error between the measured value and the real data, $\|\cdot\|_0$ is the $\ell_0$-norm sparsity constraint, and $\lambda$ is a regularization parameter. It is worth noting that, owing to the existence of spectral variability in hyperspectral data, only the ANC is normally enforced and the ASC is not considered. In addition, because Eq. (3) is nonconvex and hard to solve, the $\ell_1$-norm sparsity constraint

Eq. (4)

$$\min_X \frac{1}{2}\|AX - Y\|_F^2 + \lambda\|X\|_1 \quad \text{s.t.}\ X \ge 0$$
is taken into account.

To improve upon the sparsity induced by the $\ell_1$ norm, a weighted $\ell_1$ norm has been proposed:43

Eq. (5)

$$\min_X \frac{1}{2}\|AX - Y\|_F^2 + \lambda\|W \odot X\|_1 \quad \text{s.t.}\ X \ge 0,$$
where $\odot$ represents the pointwise (element-wise) multiplication operation and $W \in \mathbb{R}^{M\times P}$ is obtained as

Eq. (6)

$$W_{ij}^{k+1} = \frac{1}{\left|X_{ij}^{k}\right| + \varepsilon},$$
where $\varepsilon$ is an extremely small constant and $X_{ij}^{k}$ is the value of the $(i,j)$'th element of the abundance matrix $X$ in the $k$'th iteration.
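To make the reweighting of Eq. (6) concrete, here is a minimal NumPy sketch; the function name and the value of ε are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def reweight(X, eps=1e-6):
    """Eq. (6): entries with small abundance receive large weights,
    so they are penalized more heavily in the next iteration."""
    return 1.0 / (np.abs(X) + eps)

# Illustrative use with an M x P abundance estimate from iteration k:
X = np.random.rand(240, 100 * 100)
W = reweight(X)                     # same shape as X
penalty = np.sum(W * np.abs(X))     # value of ||W ⊙ X||_1
```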

3. Nonlocal Weighted Sparse Unmixing Method Based on Global Search

3.1. Nonlocal Similarity Block Based on Global Search

Nonlocal information, as important spatial information, is widely used in HSI classification,44,45 denoising,46 and unmixing.47 However, nonlocal-based methods tend to use a local search-box strategy to extract nonlocal similar pixels,44–47 which requires a large number of local search boxes. Although this search strategy improves execution efficiency, it fails to find the many similar pixels that may lie far outside a fixed search area. We instead use a global search strategy to extract similar pixels, which exploits spatial information more fully. In particular, abundance images and HSIs have similar spatial characteristics.48 As shown in Fig. 1, the following operations are performed on the abundance image:

  • 1. For each pixel x of X, taking the 8-neighborhood block $S_x$ centered on x, the j neighborhood blocks $S_1, \ldots, S_j$ with the highest similarity to $S_x$ are searched for over the whole image, with similarity measured as

    Eq. (7)

    $$\operatorname{dist}(x,y) = (S_x - S_y) \ast g,$$
    where $S_x$ and $S_y$ represent the neighborhood blocks centered on x and y, respectively, $g$ is a Gaussian kernel function, and $\ast$ denotes the convolution operation.

  • 2. Extract the center pixels of the j similar blocks and reorganize them into a similar-pixel group $P_x = [s_1, \ldots, s_j]^T$, and then perform a low-rank noise reduction on $P_x$ to remove noise

    Eq. (8)

    $$P_x = \operatorname{svd}(P_x),$$
    where $\operatorname{svd}(\cdot)$ is the SVD function, and the following operations are performed:

    Eq. (9)

    $$[U, \Sigma, V^{T}] = \operatorname{svd}(P_x),$$
    where $U$ denotes the left singular vector matrix, $\Sigma$ represents the diagonal matrix whose diagonal elements are the singular values in descending order, and $V$ denotes the right singular vector matrix. The largest k singular values in $\Sigma$ and their corresponding left and right singular vectors are selected to reconstruct $P_x$.

  • 3. Perform a weighted summation on $P_x$ to obtain the nonlocal similarity value $\operatorname{NLS}(x)$ corresponding to x

    Eq. (10)

    $$W(i) = e^{-|x - s_i|^{2}/h},$$
    where $s_i$ is the center pixel of the i'th similar block of the pixel x and h is a smoothing parameter:

    Eq. (11)

    $$\operatorname{NLS}(x) = \sum_{i=1}^{j} W(i) \times P_x(i,:),$$
    where $P_x(i,:)$ is the i'th similar pixel of x and $W(i)$ is the weight corresponding to the i'th similar pixel.

  • 4. Perform the above three steps for all pixels to get the final nonlocal similarity matrix $\operatorname{NLS}(X)$; a single-band sketch of the whole procedure is given below.
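The following is a minimal single-band sketch of steps 1 to 4 above. It is meant to show the data flow, not to reproduce the authors' implementation: the plain squared block distance stands in for the Gaussian-weighted distance of Eq. (7), and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def nls_map(img, j=10, k=3, h=0.1, r=1):
    """Global-search nonlocal similarity for one abundance band.
    img: 2-D abundance image; j: similar blocks kept; k: SVD rank;
    h: smoothing parameter; r=1 gives the 3x3 (8-neighborhood) block."""
    H, W = img.shape
    pad = np.pad(img, r, mode="edge")
    # Every (2r+1)x(2r+1) block, flattened to a row vector.
    blocks = np.stack([pad[i:i + 2 * r + 1, c:c + 2 * r + 1].ravel()
                       for i in range(H) for c in range(W)])
    centers = img.ravel()
    nls = np.empty(H * W)
    for p in range(H * W):
        # Step 1: global search over all blocks in the image.
        d = np.sum((blocks - blocks[p]) ** 2, axis=1)
        idx = np.argsort(d)[:j]
        # Step 2: rank-k SVD denoising of the similar-block group.
        Bx = blocks[idx]
        U, S, Vt = np.linalg.svd(Bx, full_matrices=False)
        S[k:] = 0.0
        Px = ((U * S) @ Vt)[:, Bx.shape[1] // 2]   # denoised centers
        # Step 3: weighted summation, Eqs. (10) and (11).
        w = np.exp(-np.abs(centers[p] - Px) ** 2 / h)
        nls[p] = np.sum(w * Px)
    return nls.reshape(H, W)    # step 4: NLS for every pixel
```

The exhaustive per-pixel distance scan in this loop is precisely the bottleneck that Sec. 3.3 offloads to the GPU.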

Fig. 1. Flowchart of G-NLWSU.

3.2. Nonlocal Weighted Sparse Unmixing Method

The proposed nonlocal weighted algorithm, G-NLWSU, is described mathematically as

Eq. (12)

$$\min_X \frac{1}{2}\|AX - Y\|_F^2 + \lambda\|W_{\mathrm{NLS}} \odot X\|_1 \quad \text{s.t.}\ X \ge 0,$$
with weighting matrix

Eq. (13)

$$\left(W_{\mathrm{NLS}}\right)_{ij}^{k} = \frac{1}{\left|\operatorname{NLS}(X^{k})_{ij}\right| + \varepsilon},$$
where $\operatorname{NLS}(\cdot)$ is the nonlocal similarity matrix extracted from the abundance matrix in each iteration, calculated by Eq. (11).

Equation (12) can be expressed as

Eq. (14)

$$\min_X \frac{1}{2}\|AX - Y\|_F^2 + \lambda\|W_{\mathrm{NLS}} \odot X\|_1 + \iota_{\mathbb{R}_+}(X),$$
where $\iota_{\mathbb{R}_+}(X) = \sum_{i=1}^{n} \iota_{\mathbb{R}_+}(X_i)$ is an indicator function and $X_i$ is the i'th column of $X$; $\iota_{\mathbb{R}_+}(X_i) = 0$ if $X_i$ is nonnegative, and $\iota_{\mathbb{R}_+}(X_i) = +\infty$ otherwise.

The equivalent constrained formula for Eq. (14) is

Eq. (15)

$$\min_{U,V_1,V_2,V_3} \frac{1}{2}\|V_1 - Y\|_F^2 + \lambda\|W_{\mathrm{NLS}} \odot V_2\|_1 + \iota_{\mathbb{R}_+}(V_3) \quad \text{s.t.}\ V_1 = AU,\ V_2 = U,\ V_3 = U.$$

Algorithm 1 shows the algorithmic pseudocode.

Algorithm 1

Pseudocode of G-NLWSU algorithm.

1: Initialization:
2:   set k, t = 0; λ, μ, ε > 0; U^0, V1^0, V2^0, V3^0, D1^0, D2^0, D3^0
3: Repeat:
4:   W_NLS^k ← 1 / (|NLS(X^k)_ij| + ε)   [Eq. (13)]
5:   Repeat:
6:     U^{t+1} ← (A^T A + 2I)^{-1} [A^T (V1^t + D1^t) + (V2^t + D2^t) + (V3^t + D3^t)]   [Eq. (25)]
7:     V1^{t+1} ← [1/(1+μ)] [Y + μ(A U^t − D1^t)]   [Eq. (27)]
8:     V2^{t+1} ← soft(U^{t+1} − D2^t, (λ/μ) W_NLS^k)   [Eq. (29)]
9:     V3^{t+1} ← max(U^{t+1} − D3^t, 0)   [Eq. (32)]
10:  Update Lagrange multipliers:
11:    D1^{t+1} ← D1^t − A U^{t+1} + V1^{t+1}
12:    D2^{t+1} ← D2^t − U^{t+1} + V2^{t+1}
13:    D3^{t+1} ← D3^t − U^{t+1} + V3^{t+1}
14:  Update iteration counters:
15:    k ← k + 1
16:    t ← t + 1
17: until some stopping criterion is satisfied.
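For readers who prefer working code, below is a compact NumPy sketch of Algorithm 1's inner ADMM loop. It assumes the weight matrix W has already been computed (by the nonlocal procedure above, or simply by Eq. (6)); the function names, μ, and the iteration count are illustrative.

```python
import numpy as np

def soft(a, b):
    """Soft-threshold operator of Eq. (30)."""
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

def weighted_sunsal(A, Y, W, lam=1e-3, mu=0.1, n_iter=200):
    """Inner ADMM loop of Algorithm 1 for
    min_X 0.5*||AX - Y||_F^2 + lam*||W ⊙ X||_1  s.t.  X >= 0."""
    L, M = A.shape
    P = Y.shape[1]
    inv = np.linalg.inv(A.T @ A + 2.0 * np.eye(M))   # factor once
    U = np.zeros((M, P))
    V1, V2, V3 = A @ U, U.copy(), U.copy()
    D1, D2, D3 = np.zeros_like(V1), np.zeros_like(V2), np.zeros_like(V3)
    for _ in range(n_iter):
        U = inv @ (A.T @ (V1 + D1) + (V2 + D2) + (V3 + D3))  # Eq. (25)
        V1 = (Y + mu * (A @ U - D1)) / (1.0 + mu)            # Eq. (27)
        V2 = soft(U - D2, (lam / mu) * W)                    # Eq. (29)
        V3 = np.maximum(U - D3, 0.0)                         # Eq. (32)
        D1 = D1 - (A @ U - V1)      # Lagrange multiplier updates
        D2 = D2 - (U - V2)
        D3 = D3 - (U - V3)
    return V3   # nonnegative abundance estimate
```

Factoring (A^T A + 2I)^{-1} once outside the loop mirrors the fact that the system matrix of Eq. (25) does not change across iterations.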

3.3. Parallel Computing-Based G-NLWSU

HSI data have extremely high dimensions and complex topographic features, which means they require much computational time to perform an unmixing task. With recent hardware developments, graphics processing units (GPUs) have been used with large-scale data in parallel computing strategies, such as to improve computational efficiency in hyperspectral classification.49,50

We introduce a parallel computing-based G-NLWSU method, PG-NLWSU. The computing strategy considers the Compute Unified Device Architecture (CUDA) platform, a general-purpose computing architecture developed by NVIDIA that uses the parallel computing power of a GPU. CUDA controls the GPU to perform calculations through kernel functions. Since its layered structure (grid, block, and thread) is similar to that of three-dimensional images, it can be intuitively applied to the field of images. We use a hybrid programming strategy based on MATLAB and CUDA to accelerate the algorithm.

The data interaction between the host (multicore central processing units, CPUs) and device (GPUs) usually consumes much computational time and storage space. To alleviate these problems, we minimize the number of data transfers between the host and device, load some important calculation tasks to the GPU, return the results to the CPU, and use space preallocation or memory reduction to reduce memory consumption. After finishing calculations in the GPU, the occupied memory is released.

The complexity of computing $W_{\mathrm{NLS}}$ in each iteration is O(MPS), where S is the number of sub-blocks of the image. To improve efficiency, we optimize the algorithm as follows:

  • 1. The data of the matrix U are transferred to the GPU before the outer loop, and the nonlocal similar block operation is performed by the NLS_kernel function.

  • 2. A grid composed of blocks and threads is configured in the kernel function, as shown in Fig. 2, where threadnum = [16, 16] and blocknum = [(width + 16 − 1)/16, (height + 16 − 1)/16] (a sketch of this configuration follows Fig. 2).

Fig. 2. Grid, blocks, and threads.
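As a concrete illustration of this launch configuration, the sketch below uses Numba's CUDA bindings rather than the authors' MATLAB/CUDA hybrid; the kernel body is a placeholder, and all names are assumptions.

```python
import math
import numpy as np
from numba import cuda

@cuda.jit
def nls_kernel(img, out):
    # One thread per pixel; the real NLS_kernel would scan the whole
    # image for blocks similar to the one centered at this pixel.
    i, j = cuda.grid(2)
    if i < img.shape[0] and j < img.shape[1]:
        out[i, j] = img[i, j]   # placeholder computation

height, width = 100, 100
img = np.random.rand(height, width).astype(np.float32)

threadnum = (16, 16)                        # 16 x 16 threads per block
blocknum = (math.ceil(height / 16),         # ceiling division guarantees
            math.ceil(width / 16))          # full coverage of the image

d_img = cuda.to_device(img)                 # host -> device (Step 1)
d_out = cuda.device_array_like(d_img)
nls_kernel[blocknum, threadnum](d_img, d_out)
out = d_out.copy_to_host()                  # device -> host (Step 3)
```

The ceiling division over-provisions blocks at the image border; the boundary guard inside the kernel discards the excess threads.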

The pseudocode of PG-NLWSU can be found in Algorithm 2.

Algorithm 2

Pseudocode of PG-NLWSU algorithm.

1: Initialization:
2:   set k, t = 0; λ, μ, ε > 0; U^0, V1^0, V2^0, V3^0, D1^0, D2^0, D3^0
3: Repeat:
4:   Step 1. Transfer data U from host to device.
5:   Step 2. Execute NLS_kernel to calculate P_x.
6:   Step 3. Transfer P_x from device to host.
7:   Step 4. Compute P_x = svd(P_x) on the CPU.
8:   Step 5. Compute W_NLS^k = 1 / (|NLS(X^k)_ij| + ε) on the CPU.
9:   Repeat:
10:    Step 6. Compute U^{t+1}, V1^{t+1}, V2^{t+1}, V3^{t+1}, D1^{t+1}, D2^{t+1}, D3^{t+1} on the CPU.
11:    Step 7. Update k, t.
12: until some stopping criterion is satisfied.

3.4. Computational Complexity Analysis

The experiment in this article was performed using MATLAB R2018a and CUDA v10.2, a GTX1060 GPU, and 16 GB of memory.

For G-NLWSU, the most expensive computational step is $W_{\mathrm{NLS}}$, with complexity O(MPS), where P is the total number of pixels of the image and S is the number of sub-blocks. For U, let $p_1 = A^TA + 2I$, $p_2 = p_1^{-1}$, $p_3 = A^T(V_1^t + D_1^t)$, and $p_4 = p_2 p_3$, with respective complexities $O(LM^2)$, $O(M^3)$, $O(MPL)$, and $O(PM^2)$. It is worth noting that P and S are usually greater than L and M, so the complexity of the U update is $O(MP\cdot\max\{L,M\})$. The complexities of the $V_1^t$ and $D_1^t$ updates are $O(MPL)$, and the complexity of the other steps is $O(MP)$. In PG-NLWSU, the computational complexity of $W_{\mathrm{NLS}}$ is $O[MS\cdot(P/C)]$, where C is the parallel computing capacity of the GPU.

4. Experiments and Analysis

In this section, we evaluate PG-NLWSU on simulated and real datasets.

4.1. Evaluation Metrics

We consider four evaluation metrics to clearly compare the performance of the algorithms.

  • (1) Signal reconstruction error (SRE):

    Eq. (16)

    $$\mathrm{SRE\ (dB)} = 10 \cdot \log_{10}\!\left[\frac{E\left(\|X\|_2^2\right)}{E\left(\|X - \hat{X}\|_2^2\right)}\right],$$
    where $X$ is the estimated abundance matrix, $\hat{X}$ is the true abundance matrix, and $E(\cdot)$ denotes expectation. The larger the SRE, the smaller the error between the estimated and true abundance matrices.

  • (2) Probability of success (Ps), which is an estimate of the probability that the relative error capability is less than a certain threshold:

    Eq. (17)

    $$P_s \equiv P\!\left(\frac{\|X - \hat{X}\|_2^2}{\|X\|_2^2} \le \text{threshold}\right),$$
    where $X$ is the estimated matrix and $\hat{X}$ is the true matrix. Following previous work on SUnSAL-TV, we adopt the threshold corresponding to an SRE of 5 dB, i.e., a relative error power of 1/3.16. When $P_s = 1$, the total relative error power of the abundance is less than 1/3.16. We use Ps as a supplementary stability index for the algorithm (SRE alone cannot characterize the stability of the estimate).

  • (3) Sparsity:

    Eq. (18)

    $$\text{Sparsity} = \frac{s}{mn},$$
    where $s$ is the number of elements in $X$ greater than 0.005 and $mn$ is the total number of elements in the matrix $X$. The smaller the Sparsity value, the sparser the matrix.

  • (4) Root mean square error (RMSE):

    Eq. (19)

    $$\mathrm{RMSE} = \sqrt{\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(X_{ij} - \hat{X}_{ij}\right)^{2}},$$
    where $X_{ij}$ and $\hat{X}_{ij}$ are the element values at position $(i,j)$ in the estimated and true matrices, respectively, and $m$ and $n$ are the matrix dimensions. The smaller the RMSE, the smaller the error between the measured and true values.

Based on the above descriptions, SRE and RMSE are used to evaluate the error of the unmixing results, Ps is used to evaluate the stability of the algorithm, and Sparsity focuses on comparing the sparsity of the unmixing results; a compact sketch computing all four metrics is given below.
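The following is a minimal reference sketch of Eqs. (16)–(19); the function name, the per-pixel reading of Ps, and the convention that the true abundances appear in the numerator of the SRE are assumptions for illustration.

```python
import numpy as np

def evaluate(X_est, X_true, tau=0.005):
    """Eqs. (16)-(19) for M x P abundance matrices."""
    err = X_est - X_true
    # Eq. (16): signal reconstruction error in dB.
    sre = 10.0 * np.log10(np.sum(X_true ** 2) / np.sum(err ** 2))
    # Eq. (17): fraction of pixels whose relative error power
    # is below 1/3.16 (i.e., per-pixel SRE above 5 dB).
    rel = np.sum(err ** 2, axis=0) / np.sum(X_true ** 2, axis=0)
    ps = np.mean(rel <= 1.0 / 3.16)
    # Eq. (18): fraction of entries above the 0.005 cutoff.
    sparsity = np.mean(np.abs(X_est) > tau)
    # Eq. (19): root mean square error.
    rmse = np.sqrt(np.mean(err ** 2))
    return sre, ps, sparsity, rmse
```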

4.2. Experiment on Simulated Datasets

PG-NLWSU was tested on two simulated datasets to validate its unmixing performance. To facilitate the comparison, SUnSAL, CLSUnSAL, NCLS-TV, and SUnSAL-TV were run on the simulated dataset to evaluate the algorithm results.

4.2.1. Simulated datasets

The spectral library $A \in \mathbb{R}^{224\times 240}$ used in the simulated data experiment is a combination of 240 mineral types from the United States Geological Survey (USGS) library. The reflectance values of the spectral signatures are given in 224 spectral bands evenly distributed over the range 0.4 to 2.5 μm.

  • 1. Simulated data cube 1 (DC1) is composed of nine endmember features randomly selected from spectral library A.51 It contains 100×100  pixels and 224 bands, with added Gaussian noise (SNR=30, 40, 50). The ground truth abundances of each endmember are shown in Fig. 3.

  • 2. As shown in Fig. 4, simulated data cube 2 (DC2) is composed of five endmembers, and the abundance matrix is generated using the hyperspectral imagery synthesis (EIAS) toolbox. The abundance distribution of the endmembers in DC2 is relatively uniform and contains much spatial information. Independent and identically distributed Gaussian noise (SNR = 30, 40, 50 dB) is added (a minimal noise-addition sketch follows the list).
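The following sketch shows how Gaussian noise at a prescribed SNR can be added to a simulated mixture; the library and abundance arrays here are random placeholders, not the actual DC1/DC2 data.

```python
import numpy as np

def add_noise(Y, snr_db):
    """Add i.i.d. Gaussian noise so that 10*log10(P_signal/P_noise) = snr_db."""
    p_signal = np.mean(Y ** 2)
    sigma = np.sqrt(p_signal / 10.0 ** (snr_db / 10.0))
    return Y + np.random.normal(0.0, sigma, Y.shape)

A = np.random.rand(224, 240)        # stand-in for the USGS library
X = np.random.rand(240, 100 * 100)  # stand-in abundances
X /= X.sum(axis=0)                  # sum-to-one abundances for the simulation
Y = add_noise(A @ X, snr_db=30)     # observed data cube at SNR = 30 dB
```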

Fig. 3. Real abundances of endmembers in DC1, where (a)–(i) are the abundances of endmembers 1 to 9, respectively.

Fig. 4. Real abundances of endmembers in DC2, where (a)–(e) are the abundances of endmembers 1 to 5, respectively.

4.2.2. Influence of regularization parameters

We tested the effect of regularization parameters of different algorithms on the SRE in the range of 0 to 0.1 on DC1. The results are shown in Fig. 5. Among the algorithms, PG-NLWSU had the highest accuracy. Comparing the curves of NCLS-TV and SUnSAL-TV, it can be seen that the spatial regularization parameter λTV has a greater influence than λ on the algorithm accuracy. By comparing SUnSAL, CLSUnSAL, NCLS-TV, and PG-NLWSU, it can be seen that the unmixing algorithm using spatial information more easily obtains the optimal solution.

Fig. 5. Influence of regularization parameters on algorithms: (a) SUnSAL, (b) NCLS-TV, (c) CLSUnSAL, (d) SUnSAL-TV (λ = 0.01), and (e) PG-NLWSU.

4.2.3. Optimization effect analysis

We compare the speed of nonlocal similarity block extraction (the outer loop) between G-NLWSU and PG-NLWSU on the following data:

  • (1) DC1: data size 100×100×224.

  • (2) DC2: data size 100×100×224.

  • (3) Cuprite dataset: data size 250×191×188.

The results are shown in Table 1.

Table 1
Comparison of computing efficiency between G-NLWSU and PG-NLWSU.

Dataset   Algorithm    Time           Speedup
DC1       G-NLWSU      384.9466 s     12.3871
          PG-NLWSU     31.4859 s
DC2       G-NLWSU      387.5336 s     12.2086
          PG-NLWSU     31.7428 s
Cuprite   G-NLWSU      282.1 min      26.8667
          PG-NLWSU     10.5 min

Speed is defined as the ratio of the running times of the two algorithms. According to the information in the table, as image size and amount of information increase, the advantages of parallel acceleration become more obvious.

4.2.4. Results on simulated datasets

We show the experimental results of PG-NLWSU and comparison algorithms on two simulated datasets. Figure 6 shows the unmixing results of different algorithms on endmembers 1, 3, 7, 8, and 9 of DC1 when SNR=30. Figure 7 shows the unmixing results of all the endmembers on DC2 for different algorithms when SNR=30. Table 2 shows the optimal parameters of all algorithms when SNR is fixed at 30, 40, and 50 dB on simulated data sets DC1 and DC2. Tables 3 and 4 show the index evaluation results of all algorithms on DC1 and DC2 when the parameters are optimal. In Tables 3 and 4, bold represents the best experimental result.

Fig. 6. Abundances of endmembers 1, 3, 7, 8, and 9 estimated by all algorithms on DC1.

Fig. 7. Abundances of endmembers 1, 2, 3, 4, and 5 estimated by all algorithms on DC2.

Table 2
Parameter values used by different algorithms in experiments on simulated datasets.

Data cube  Algorithm   30 dB                    40 dB                    50 dB
DC1        SUnSAL      λ = 5e-2                 λ = 5e-3                 λ = 5e-4
           NCLS-TV     λ = 5e-3                 λ = 7e-4                 λ = 7e-4
           CLSUnSAL    λ = 9e-2                 λ = 5e-2                 λ = 5e-3
           SUnSAL-TV   λ = 5e-3, λTV = 5e-3     λ = 5e-3, λTV = 7e-4     λ = 5e-4, λTV = 7e-4
           Proposed    λ = 1e-3                 λ = 9e-5                 λ = 9e-5
DC2        SUnSAL      λ = 5e-3                 λ = 5e-3                 λ = 5e-3
           NCLS-TV     λ = 5e-3                 λ = 5e-3                 λ = 5e-3
           CLSUnSAL    λ = 9e-2                 λ = 9e-2                 λ = 9e-2
           SUnSAL-TV   λ = 5e-3, λTV = 5e-3     λ = 5e-3, λTV = 5e-3     λ = 5e-3, λTV = 5e-4
           Proposed    λ = 9e-4                 λ = 9e-4                 λ = 9e-4

Table 3
Evaluation results of different algorithms in experiments on simulated dataset 1.

SNR     Algorithm   SRE       Ps       Sparsity   RMSE
30 dB   SUnSAL      3.0749    0.5986   0.0455     0.3613
        NCLS-TV     4.1514    0.7352   0.0792     0.2820
        CLSUnSAL    4.1061    0.7345   0.0688     0.2850
        SUnSAL-TV   4.9235    0.8284   0.0572     0.2361
        Proposed    8.3768    0.9852   0.0207     0.1066
40 dB   SUnSAL      6.6155    0.9390   0.0481     0.1599
        NCLS-TV     6.7969    0.9793   0.0701     0.1534
        CLSUnSAL    7.9331    1.0000   0.0461     0.1181
        SUnSAL-TV   8.0080    0.9950   0.0424     0.1160
        Proposed    12.9224   1.0000   0.0183     0.0374
50 dB   SUnSAL      9.0011    0.9997   0.0388     0.0923
        NCLS-TV     9.9019    1.0000   0.0433     0.0750
        CLSUnSAL    11.9637   1.0000   0.0245     0.0467
        SUnSAL-TV   10.1044   0.9993   0.0259     0.0716
        Proposed    15.3358   1.0000   0.0146     0.0215

Table 4
Evaluation results of different algorithms in experiments on simulated dataset 2.

SNR     Algorithm   SRE       Ps       Sparsity   RMSE
30 dB   SUnSAL      3.3858    0.7996   0.0688     0.3244
        NCLS-TV     5.6221    0.9638   0.0755     0.1938
        CLSUnSAL    5.0941    0.9302   0.0602     0.2189
        SUnSAL-TV   6.4853    0.9764   0.0511     0.1589
        Proposed    9.5163    1.0000   0.0215     0.0791
40 dB   SUnSAL      7.5352    1.0000   0.0543     0.1248
        NCLS-TV     7.2637    0.9859   0.0514     0.1328
        CLSUnSAL    8.5245    1.0000   0.0464     0.0994
        SUnSAL-TV   8.3807    0.9896   0.0317     0.1027
        Proposed    11.8972   1.0000   0.0204     0.0457
50 dB   SUnSAL      11.1740   1.0000   0.0315     0.0540
        NCLS-TV     11.3668   1.0000   0.0403     0.0516
        CLSUnSAL    12.4526   1.0000   0.0228     0.0402
        SUnSAL-TV   12.5586   1.0000   0.0249     0.0392
        Proposed    17.4999   1.0000   0.0208     0.0126

As can be seen from Figs. 6 and 7, the endmember abundance results estimated by SUnSAL are relatively poor. The result of CLSUnSAL is similar to that of SUnSAL but contains more information. Comparing NCLS-TV and SUnSAL-TV, one can find that TV regularization leads to excessive smoothing of abundance, which is obvious in the results of DC1 endmembers 7 and 8 and DC2 endmember 3. The unmixing effect of PG-NLWSU is significantly better than that of other algorithms, indicating that the nonlocal weighting matrix improves the performance of the algorithm.

From the results in Table 3, the PG-NLWSU algorithm provides better SRE, Ps, Sparsity, and RMSE results than the comparison algorithms at all noise levels, especially in the harshest noise scenario (30 dB), which means that PG-NLWSU is highly robust to noise.

According to the results in Table 4, the SRE, Sparsity, and RMSE values at all noise intensities show that PG-NLWSU generates sparser and more accurate unmixing results than the other algorithms, primarily because DC2 has a relatively uniform spatial distribution; this indicates that the nonlocal weighted model effectively promotes the sparseness of the solution. For Ps, PG-NLWSU again achieves the best result, especially at 30 dB, which implies that the proposed algorithm is very stable.

The computational efficiency of all algorithms on DC1 and DC2 is shown in Table 5, where bold represents the best experimental results. SUnSAL takes the least time. NCLS-TV and SUnSAL-TV run slower because they must extract neighborhood information from the images, whereas CLSUnSAL and PG-NLWSU take longer still; however, as the SNR increases, PG-NLWSU shows higher computational efficiency.

Table 5
Time efficiency of various algorithms on simulated datasets (in seconds).

Data cube  Algorithm   30 dB      40 dB      50 dB
DC1        SUnSAL      4.8210     3.8051     3.6672
           NCLS-TV     70.8100    111.4890   138.1508
           CLSUnSAL    146.7633   160.1402   158.1793
           SUnSAL-TV   71.9640    112.7408   151.6289
           Proposed    380.1912   283.4539   167.6324
DC2        SUnSAL      4.6042     3.8688     4.1016
           NCLS-TV     64.4921    60.9866    60.3077
           CLSUnSAL    161.0892   152.7736   154.1891
           SUnSAL-TV   74.3964    75.2871    66.1086
           Proposed    379.0015   206.9122   154.2739

4.3. Experiment on Real Dataset

We experimentally evaluate the algorithm on the AVIRIS Cuprite dataset, which was collected over a mining district in Nevada and is widely used to verify spectral unmixing algorithms. Figure 8 shows the mineral distribution map drawn by the USGS in 1995. The band range is 0.4 to 2.5 μm. A 250×191-pixel subregion with 188 bands is considered, where noisy bands 1 and 2 and water-absorption bands 104 to 113 and 148 to 167 are removed from the original 224 bands. The spectral library $A_1 \in \mathbb{R}^{188\times 498}$ was likewise processed to retain the same 188 bands of the USGS spectral library.

Fig. 8. Mineral distribution map of the Cuprite mining district in Nevada, produced by the USGS using Tricorder 3.3 software.

Figure 9 shows the Cuprite data cube and spectral library A1. Since no real abundance map can be used, the experimental results of this algorithm are compared with those of SUnSAL, CLSUnSAL, NCLS-TV, and SUnSAL-TV, with results as shown in Fig. 10.

Fig. 9. (a) Hyperspectral cube of the Cuprite area and (b) spectral library A1.

Fig. 10. Abundance results of different algorithms on the Cuprite dataset.

Figure 10 compares the abundances of three minerals (alunite, buddingtonite, and chalcedony) in the Cuprite area estimated by SUnSAL, NCLS-TV, CLSUnSAL, SUnSAL-TV, and PG-NLWSU. We set the parameters of SUnSAL, NCLS-TV, CLSUnSAL, and PG-NLWSU to λ = 1e-3, λTV = 3e-9, λ = 9e-2, and λ = 9e-5, respectively, and set the parameters of SUnSAL-TV to λ = 1e-3 and λTV = 3e-9. The abundance map of buddingtonite clearly shows how the different algorithms deal with noise, whereas chalcedony reveals each algorithm's ability to unmix fine detail. It can be seen that the abundance maps of PG-NLWSU are better than those of the other algorithms.

5. Conclusions

SU is a classical method to solve the problem of spectral unmixing, and using a weighted ℓ1 norm to enhance the sparsity of the abundance matrix is a current research hotspot. We propose a nonlocal weighted sparse unmixing algorithm (PG-NLWSU) based on global search and parallel acceleration. The nonlocal mean is introduced into the sparse unmixing model through a weighting factor, providing a new way to utilize the nonlocal information of the image. To fully exploit this nonlocality, we use a search box the same size as the image, and parallel computing improves the computational efficiency. Experimental results show that the proposed algorithm performs better on images with abundant nonlocal information. In future work, we will consider the texture information of the image and use it to eliminate the smoothing effect caused by the use of neighborhood or nonlocal information.

6. Appendix A: Model Solution

Equation (15) can be written as

Eq. (20)

$$\min_{U,V}\ g(V) \quad \text{s.t.}\ GU + BV = 0,$$
where

Eq. (21)

$$V = (V_1, V_2, V_3), \qquad g(V) = \frac{1}{2}\|V_1 - Y\|_F^2 + \lambda\|W_{\mathrm{NLS}} \odot V_2\|_1 + \iota_{\mathbb{R}_+}(V_3),$$
$$G = \begin{bmatrix} A \\ I \\ I \end{bmatrix}, \qquad B = \begin{bmatrix} -I & 0 & 0 \\ 0 & -I & 0 \\ 0 & 0 & -I \end{bmatrix}.$$

Equation (20) is solved by the ADMM algorithm, and its augmented Lagrangian function is as follows:

Eq. (22)

$$\mathcal{L}(U, V, D) \equiv g(V) + \frac{\mu}{2}\|GU + BV - D\|_F^2,$$
where $\mu > 0$ and $D/\mu$ denotes the scaled Lagrange multipliers.

Equation (22) is expanded as

Eq. (23)

$$\begin{aligned}\mathcal{L}(U, V_1, V_2, V_3, D_1, D_2, D_3) = {}& \frac{1}{2}\|V_1 - Y\|_F^2 + \lambda\|W_{\mathrm{NLS}} \odot V_2\|_1 + \iota_{\mathbb{R}_+}(V_3) \\ &+ \frac{\mu}{2}\|AU - V_1 - D_1\|_F^2 + \frac{\mu}{2}\|U - V_2 - D_2\|_F^2 + \frac{\mu}{2}\|U - V_3 - D_3\|_F^2.\end{aligned}$$

In Eq. (23), the variables are separated during the solution process: when solving for one variable (e.g., U), the other variables (e.g., V, D) are held fixed. We solve for the three variables in turn; the optimization function for U is

Eq. (24)

$$U^{t+1} \leftarrow \arg\min_{U}\ \frac{\mu}{2}\|AU - V_1 - D_1\|_F^2 + \frac{\mu}{2}\|U - V_2 - D_2\|_F^2 + \frac{\mu}{2}\|U - V_3 - D_3\|_F^2.$$

Setting the matrix derivative to zero yields

Eq. (25)

$$U^{t+1} \leftarrow \left(A^{T}A + 2I\right)^{-1}\left[A^{T}\left(V_1^{t} + D_1^{t}\right) + \left(V_2^{t} + D_2^{t}\right) + \left(V_3^{t} + D_3^{t}\right)\right].$$

The optimization function of V1 is

Eq. (26)

$$V_1^{t+1} \leftarrow \arg\min_{V_1}\ \frac{1}{2}\|V_1 - Y\|_F^2 + \frac{\mu}{2}\|AU - V_1 - D_1\|_F^2,$$
its solution is

Eq. (27)

$$V_1^{t+1} \leftarrow \frac{1}{1+\mu}\left[Y + \mu\left(AU^{t} - D_1^{t}\right)\right].$$

The optimization function of V2 is

Eq. (28)

$$V_2^{t+1} \leftarrow \arg\min_{V_2}\ \lambda\|W_{\mathrm{NLS}} \odot V_2\|_1 + \frac{\mu}{2}\|U - V_2 - D_2\|_F^2,$$
with solution

Eq. (29)

$$V_2^{t+1} \leftarrow \operatorname{soft}\!\left(U^{t} - D_2^{t},\ \frac{\lambda}{\mu}W_{\mathrm{NLS}}\right),$$
where soft(a,b) is a soft threshold function, expressed as

Eq. (30)

$$\operatorname{soft}(a, b) \equiv \operatorname{sign}(a)\,\max\{|a| - b,\ 0\},$$
where $\operatorname{sign}(a)$ is the sign function
$$\operatorname{sign}(a) = \begin{cases} 1, & a > 0, \\ -1, & a < 0. \end{cases}$$
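As a quick numerical check of Eqs. (29) and (30), the soft-threshold operator can be exercised in isolation (the same helper appears in the Algorithm 1 sketch above):

```python
import numpy as np

def soft(a, b):
    """Eq. (30): sign(a) * max(|a| - b, 0), applied elementwise."""
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

a = np.array([-0.8, -0.05, 0.0, 0.05, 0.8])
print(soft(a, 0.1))  # entries within ±0.1 collapse to 0; others shrink by 0.1
```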

The optimization function of V3 is

Eq. (31)

$$V_3^{t+1} \leftarrow \arg\min_{V_3}\ \iota_{\mathbb{R}_+}(V_3) + \frac{\mu}{2}\|U - V_3 - D_3\|_F^2;$$
since the actual role of $\iota_{\mathbb{R}_+}$ is to remove the negative values of the solution, the solution for $V_3$ is

Eq. (32)

$$V_3^{t+1} \leftarrow \max\left(U^{t} - D_3^{t},\ 0\right).$$

Acknowledgments

This work was supported by the Natural Science Foundation of Ningxia Province of China (Project No. 2020AAC02028) and the Innovation Projects for Graduate Students of North Minzu University (Project No. YCX20064). We thank the Image and Intelligence Information Processing Innovation Team of the National Ethnic Affairs Commission of China and the Science Foundation of North Minzu University (Project No. 2020KYQD50) for their support. Disclosures: The authors declare no conflict of interest.

Data, Materials, and Code Availability

The EIAS toolbox used in Sec. 4.2.1 of this article is available at the following web site: http://www.ehu.es/ccwintco/index.php/Hyperspectral_Imagery_Synthesis_tools_for_MATLAB.

References

1. J. M. Bioucas-Dias et al., “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geosci. Remote Sens. Mag., 1(2), 6–36 (2013). https://doi.org/10.1109/MGRS.2013.2244672
2. N. Keshava and J. F. Mustard, “Spectral unmixing,” IEEE Signal Process. Mag., 19(1), 44–57 (2002). https://doi.org/10.1109/79.974727
3. A. Plaza et al., “Foreword to the special issue on spectral unmixing of remotely sensed data,” IEEE Trans. Geosci. Remote Sens., 49(11), 4103–4110 (2011). https://doi.org/10.1109/TGRS.2011.2167193
4. Q. Du et al., “End-member extraction for hyperspectral image analysis,” Appl. Opt., 47(28), F77–F84 (2008). https://doi.org/10.1364/AO.47.000F77
5. J. M. Bioucas-Dias et al., “Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 5(2), 354–379 (2012). https://doi.org/10.1109/JSTARS.2012.2194696
6. J. W. Boardman, “Automating spectral unmixing of AVIRIS data using convex geometry concepts,” in Proc. Fourth Annu. Airborne Geosci. Workshop, 11–14 (1993).
7. J. W. Boardman, F. A. Kruse, and R. O. Green, “Mapping target signatures via partial unmixing of AVIRIS data,” in Proc. 5th Annu. JPL Airborne Earth Sci. Workshop, 23–26 (1995).
8. M. E. Winter, “N-FINDR: an algorithm for fast autonomous spectral endmember determination in hyperspectral data,” Proc. SPIE, 3753, 266–275 (1999). https://doi.org/10.1117/12.366289
9. C. C. Wu, S. Chu, and C. I. Chang, “Sequential N-FINDR algorithms,” Proc. SPIE, 7086, 70860C (2008). https://doi.org/10.1117/12.795262
10. B. Zhang, W. Luo, and X. Jia, “New improvements in parallel implementation of N-FINDR algorithm,” IEEE Trans. Geosci. Remote Sens., 50(10), 3648–3659 (2012). https://doi.org/10.1109/TGRS.2012.2185056
11. J. M. P. Nascimento and J. M. B. Dias, “Does independent component analysis play a role in unmixing hyperspectral data?” IEEE Trans. Geosci. Remote Sens., 43(1), 175–187 (2005). https://doi.org/10.1109/TGRS.2004.839806
12. C. I. Chang et al., “A new growing method for simplex-based endmember extraction algorithm,” IEEE Trans. Geosci. Remote Sens., 44(10), 2804–2819 (2006). https://doi.org/10.1109/TGRS.2006.881803
13. J. H. Gruninger, A. J. Ratkowski, and M. L. Hoke, “The sequential maximum angle convex cone (SMACC) endmember model,” Proc. SPIE, 5425, 1–14 (2004). https://doi.org/10.1117/12.543794
14. T. H. Chan et al., “A simplex volume maximization framework for hyperspectral endmember extraction,” IEEE Trans. Geosci. Remote Sens., 49(11), 4177–4193 (2011). https://doi.org/10.1109/TGRS.2011.2141672
15. J. Li and J. M. Bioucas-Dias, “Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data,” in IGARSS 2008–2008 IEEE Int. Geosci. and Remote Sens. Symp., III-250–III-253 (2008). https://doi.org/10.1109/IGARSS.2008.4779330
16. T.-H. Chan et al., “A convex analysis based minimum-volume enclosing simplex algorithm for hyperspectral unmixing,” IEEE Trans. Signal Process., 57(11), 4418–4432 (2009). https://doi.org/10.1109/TSP.2009.2025802
17. J. Bioucas-Dias, “A variable splitting augmented Lagrangian approach to linear spectral unmixing,” in First Workshop on Hyperspectral Image and Signal Process.: Evol. in Remote Sens., 1–4 (2009).
18. L. Miao and H. Qi, “Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization,” IEEE Trans. Geosci. Remote Sens., 45, 765–777 (2007). https://doi.org/10.1109/TGRS.2006.888466
19. J. D. Bayliss, J. A. Gualtieri, and R. F. Cromp, “Analyzing hyperspectral data with independent component analysis,” Proc. SPIE, 3240, 133–143 (1998). https://doi.org/10.1117/12.300050
20. C. H. Chen and X. Zhang, “Independent component analysis for remote sensing study,” Proc. SPIE, 3871, 150–158 (1999). https://doi.org/10.1117/12.300050
21. T. M. Tu, “Unsupervised signature extraction and separation in hyperspectral images: a noise-adjusted fast independent component analysis,” Opt. Eng., 39(4), 897–906 (2000). https://doi.org/10.1117/1.602461
22. J. M. P. Nascimento and J. M. Bioucas-Dias, “Hyperspectral unmixing based on mixtures of Dirichlet components,” IEEE Trans. Geosci. Remote Sens., 50(3), 863–878 (2012). https://doi.org/10.1109/TGRS.2011.2163941
23. S. Moussaoui et al., “Bayesian analysis of spectral mixture data using Markov chain Monte Carlo methods,” Chemometr. Intell. Lab. Syst., 81(2), 137–148 (2006). https://doi.org/10.1016/j.chemolab.2005.11.004
24. M. Berman et al., “ICE: a statistical approach to identifying endmembers in hyperspectral images,” IEEE Trans. Geosci. Remote Sens., 42(10), 2085–2095 (2004). https://doi.org/10.1109/TGRS.2004.835299
25. A. Plaza et al., “Spatial/spectral endmember extraction by multidimensional morphological operations,” IEEE Trans. Geosci. Remote Sens., 40(9), 2025–2041 (2002). https://doi.org/10.1109/TGRS.2002.802494
26. D. M. Rogge et al., “Integration of spatial–spectral information for the improved extraction of endmembers,” Remote Sens. Environ., 110(3), 287–303 (2007). https://doi.org/10.1016/j.rse.2007.02.019
27. G. Martin and A. Plaza, “Region-based spatial preprocessing for endmember extraction and spectral unmixing,” IEEE Geosci. Remote Sens. Lett., 8(4), 745–749 (2011). https://doi.org/10.1109/LGRS.2011.2107877
28. G. Martin, “Spatial-spectral preprocessing prior to endmember identification and unmixing of remotely sensed hyperspectral data,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 5(2), 380–395 (2012). https://doi.org/10.1109/JSTARS.2012.2192472
29. S. Moussaoui et al., “On the decomposition of Mars hyperspectral data by ICA and Bayesian positive source separation,” Neurocomputing, 71(10–12), 2194–2208 (2007). https://doi.org/10.1016/j.neucom.2007.07.034
30. M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Sparse unmixing of hyperspectral data,” IEEE Trans. Geosci. Remote Sens., 49(6), 2014–2039 (2011). https://doi.org/10.1109/TGRS.2010.2098413
31. J. M. Bioucas-Dias and M. A. T. Figueiredo, “Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing,” in IEEE Workshop Hyperspectral Image and Signal Process.: Evol. in Remote Sens. (2010). https://doi.org/10.1109/WHISPERS.2010.5594963
32. M.-D. Iordache, J. Bioucas-Dias, and A. Plaza, “Collaborative sparse regression for hyperspectral unmixing,” IEEE Trans. Geosci. Remote Sens., 52(1), 341–354 (2013). https://doi.org/10.1109/TGRS.2013.2240001
33. L. Sun et al., “A novel ℓ1/2 sparse regression method for hyperspectral unmixing,” Int. J. Remote Sens., 34(20), 6983–7001 (2013). https://doi.org/10.1080/01431161.2013.804225
34. Y. Z. Cheng et al., “Reweighted sparse regression for hyperspectral unmixing,” in IEEE Geosci. and Remote Sens. Symp. (2016).
35. M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Total variation spatial regularization for sparse hyperspectral unmixing,” IEEE Trans. Geosci. Remote Sens., 50(11), 4484–4502 (2012). https://doi.org/10.1109/TGRS.2012.2191590
36. S. Zhang et al., “Hyperspectral unmixing based on local collaborative sparse regression,” IEEE Geosci. Remote Sens. Lett., 13(5), 631–635 (2016). https://doi.org/10.1109/LGRS.2016.2527782
37. R. Wang et al., “Double reweighted sparse regression for hyperspectral unmixing,” in IGARSS 2016–2016 IEEE Int. Geosci. Remote Sens. Symp. (2016).
38. R. Wang et al., “Hyperspectral unmixing using double reweighted sparse regression and total variation,” IEEE Geosci. Remote Sens. Lett., 14(7), 1146–1150 (2017). https://doi.org/10.1109/LGRS.2017.2700542
39. S. Wang et al., “Double reweighted sparse regression and graph regularization for hyperspectral unmixing,” Remote Sens., 10(7), 1046 (2018). https://doi.org/10.3390/rs10071046
40. S. Zhang et al., “Spectral–spatial weighted sparse regression for hyperspectral image unmixing,” IEEE Trans. Geosci. Remote Sens., 56(6), 3265–3276 (2018). https://doi.org/10.1109/TGRS.2018.2797200
41. Y. Zhong, R. Feng, and L. Zhang, “Non-local sparse unmixing for hyperspectral remote sensing imagery,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 7(6), 1889–1909 (2014). https://doi.org/10.1109/JSTARS.2013.2280063
42. R. Feng, Y. Zhong, and L. Zhang, “An improved nonlocal sparse unmixing algorithm for hyperspectral imagery,” IEEE Geosci. Remote Sens. Lett., 12(4), 915–919 (2015). https://doi.org/10.1109/LGRS.2014.2367028
43. E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted ℓ1 minimization,” J. Fourier Anal. Appl., 14(5), 877–905 (2007). https://doi.org/10.1007/s00041-008-9045-x
44. Z. J. Zhang, Y. Hui, and C. S. Wang, “A non-local means based classification method of hyperspectral image,” in Int. Conf. Optoelectron. and Microelectron. (ICOM) (2015).
45. J. Wang et al., “Adaptive nonlocal spatial–spectral kernel for hyperspectral imagery classification,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 9(9), 1–16 (2016). https://doi.org/10.1109/JSTARS.2016.2610587
46. A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit. (CVPR’05), 60–65 (2005). https://doi.org/10.1109/CVPR.2005.38
47. L. Sun et al., “Weighted nonlocal low-rank tensor decomposition method for sparse unmixing of hyperspectral images,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 13, 1174–1188 (2020). https://doi.org/10.1109/JSTARS.2020.2980576
48. Q. Qu, N. M. Nasrabadi, and T. D. Tran, “Abundance estimation for bilinear mixture models via joint sparse and low-rank representation,” IEEE Trans. Geosci. Remote Sens., 52(7), 4404–4423 (2014). https://doi.org/10.1109/TGRS.2013.2281981
49. Z. Wu et al., “Real-time implementation of the sparse multinomial logistic regression for hyperspectral image classification on GPUs,” IEEE Geosci. Remote Sens. Lett., 12(7), 1456–1460 (2015). https://doi.org/10.1109/LGRS.2015.2408433
50. Z. Wu et al., “GPU parallel implementation of spatially adaptive hyperspectral image classification,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 11(4), 1–13 (2017). https://doi.org/10.1109/JSTARS.2017.2755639
51. X. Wang et al., “Spatial group sparsity regularized nonnegative matrix factorization for hyperspectral unmixing,” IEEE Trans. Geosci. Remote Sens., 55(11), 6287–6304 (2017). https://doi.org/10.1109/TGRS.2017.2724944

Biography

Yongxin Li received his BEng degree in computer science and technology from Xinke College of Henan Institute of Science and Technology, Xinxiang, China, in 2018. He is currently pursuing his MS degree at the School of Computer Science and Engineering, North Minzu University, Yinchuan, China. His research interests include hyperspectral image processing.

Wenxing Bao received his BEng degree in industrial automation from Xidian University, Xi'an, China, in 1993, his MSc degree in electrical engineering from Xi'an Jiaotong University, Xi'an, China, in 2001, and his PhD in electronic science and technology from Xi'an Jiaotong University, Xi'an, China, in 2006. He is currently a professor. His current research interests include digital image processing, remote sensing image classification, and fusion.

Kewen Qu received his MS degree in computer application technology from North Minzu University, Yinchuan, China, in 2011. He is currently pursuing a PhD at the School of Computers and Information, Hefei University of Technology, Hefei, China. His research interests include remote sensing image processing, statistical learning, and deep learning.

Xiangfei Shen received his BS and MS degrees in computer science and technology from North Minzu University, Yinchuan, China, in 2017 and 2020, respectively. His research interests include hyperspectral image processing, pattern recognition, and machine learning.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yongxin Li, Wenxing Bao, Kewen Qu, and Xiangfei Shen "Nonlocal weighted sparse unmixing based on global search and parallel optimization," Journal of Applied Remote Sensing 15(1), 016501 (11 January 2021). https://doi.org/10.1117/1.JRS.15.016501
Received: 8 September 2020; Accepted: 24 December 2020; Published: 11 January 2021
Keywords: computer simulations; parallel computing; signal-to-noise ratio; optimization (mathematics); error analysis; hyperspectral imaging; minerals