2019, Vol. 23, Issue (4): 630-647



DOI: 10.11834/jrs.20197547

Received: 2017-12-26
Abundance estimation algorithm incorporating the internal variation information of the data
1. Department of Electronic Engineering, Tsinghua University, Beijing 100084, China; 2. Institute of Disaster Prevention, Langfang 065201, China
Abstract:

Abundance estimation (AE) is a key preprocessing technique for identifying ground objects from hyperspectral images. Owing to the interpretability and mathematical tractability of the linear model, constrained linear regression (CLR) based on this model has received wide attention in abundance estimation. At present, this method considers only the energy similarity between the estimated and the observed data, and ignores the similarity of the internal variation information within the data, such as the similarity between first-order gradients and between second-order gradients. To improve abundance estimation accuracy, this paper proposes a sparse and low-rank abundance estimation algorithm that incorporates the internal variation information of the data. First, the traditional mathematical model of abundance estimation is extended with first-order and second-order gradient constraint terms. Second, norm inequalities and optimization theory are used to prove the validity of the model under the constraint conditions and its extensibility to related fields. Next, auxiliary variables are introduced to convert the extended model into an augmented Lagrangian function. Finally, the alternating direction method of multipliers (ADMM) is used to solve the model and estimate the abundances of the hyperspectral image. Experiments on simulated and real hyperspectral images demonstrate that the proposed method improves abundance estimation for both types of data; in particular, when the endmember abundances contain rich variation details, it outperforms currently popular abundance estimation algorithms in both accuracy and noise robustness.

Sparse and low-rank abundance estimation with internal variability
Abstract:

Abundance estimation (AE) plays an important role in the processing and analysis of hyperspectral images. Constrained linear regression (CLR) is usually adopted to estimate the abundance matrix because of its simplicity and mathematical tractability. However, this approach focuses only on the fit between the estimated and observed data, without considering the internal variability such as the similarity among first-order gradients and among second-order gradients. To improve the accuracy of AE, a novel method that adds internal variability to sparse and low-rank AE is proposed.
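As a rough illustration (an assumed form, not the paper's exact formulation), the gradient-augmented objective can be sketched as follows, where Y is the observed data, A the endmember library, X the abundance matrix, D1 and D2 assumed first- and second-order difference operators, and alpha, beta, lambda, tau assumed regularization weights:

```latex
% Plausible form of the gradient-augmented sparse and low-rank AE objective
% (an assumption for illustration, not copied from the paper).
\min_{\mathbf{X}\ge 0}\;
  \tfrac{1}{2}\lVert \mathbf{Y}-\mathbf{A}\mathbf{X}\rVert_F^2
+ \tfrac{\alpha}{2}\lVert \mathbf{D}_1\mathbf{Y}-\mathbf{D}_1\mathbf{A}\mathbf{X}\rVert_F^2
+ \tfrac{\beta}{2}\lVert \mathbf{D}_2\mathbf{Y}-\mathbf{D}_2\mathbf{A}\mathbf{X}\rVert_F^2
+ \lambda\lVert \mathbf{X}\rVert_1
+ \tau\lVert \mathbf{X}\rVert_*
```

The first three terms measure the fit between the observed and reconstructed data and between their first- and second-order gradients, while the L1 and nuclear-norm terms impose sparsity and low rank on the abundance matrix.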
First, first- and second-order gradient constraint terms were used to modify the traditional mathematical model of sparse and low-rank AE. Second, norm inequalities and optimization theory were applied to demonstrate the validity of the novel model; the model was also shown to be applicable to other related fields under the constraint conditions. Third, auxiliary variables were utilized to transform the mathematical model into an augmented Lagrangian function (ALF). Finally, the ALF was solved by the alternating direction method of multipliers (ADMM) to estimate the abundances of hyperspectral images. The baseline method for sparse and low-rank AE is the alternating direction sparse and low-rank unmixing (ADSpLRU) algorithm. In this study, ADSpLRU-FOG denotes the method that adds the first-order gradient term to sparse and low-rank AE, whereas ADSpLRU-FSOG denotes the method that adds both first- and second-order gradient terms.
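A minimal ADMM sketch of this idea is given below, assuming the gradient terms enter as quadratic data-fit terms; the low-rank (nuclear-norm) component of ADSpLRU is omitted for brevity, and the function name and parameters are hypothetical:

```python
import numpy as np

def soft_threshold(V, t):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def admm_abundance(Y, A, lam=1e-3, alpha=1.0, beta=1.0, rho=1.0, n_iter=200):
    """Minimal ADMM sketch for abundance estimation with first-/second-order
    gradient fitting terms. This is a simplified stand-in for ADSpLRU-FSOG:
    the low-rank (nuclear-norm) term is omitted to keep the example short,
    and all parameter names are illustrative.

    Y : (L, N) observed spectra; A : (L, m) endmember library.
    Returns X : (m, N) nonnegative, sparse abundance estimate.
    """
    L, N = Y.shape
    m = A.shape[1]

    # First- and second-order difference operators along the spectral axis
    # (an assumed way to encode the "internal variation" of the data).
    D1 = np.diff(np.eye(L), n=1, axis=0)   # (L-1, L)
    D2 = np.diff(np.eye(L), n=2, axis=0)   # (L-2, L)

    # All three data-fit terms are quadratic in X, so the X-update reduces
    # to solving (G + rho*I) X = B + rho*(Z - U).
    G = A.T @ A + alpha * (D1 @ A).T @ (D1 @ A) + beta * (D2 @ A).T @ (D2 @ A)
    B = A.T @ Y + alpha * (D1 @ A).T @ (D1 @ Y) + beta * (D2 @ A).T @ (D2 @ Y)
    H = G + rho * np.eye(m)

    X = np.zeros((m, N))
    Z = np.zeros((m, N))   # auxiliary variable (splitting X = Z)
    U = np.zeros((m, N))   # scaled dual variable
    for _ in range(n_iter):
        X = np.linalg.solve(H, B + rho * (Z - U))              # quadratic subproblem
        Z = np.maximum(soft_threshold(X + U, lam / rho), 0.0)  # prox of L1 + X >= 0
        U = U + X - Z                                          # dual update
    return Z
```

Because every fit term is quadratic in X, each X-update is a single linear solve; the sparsity and nonnegativity constraints are handled entirely in the Z-update through the auxiliary variable.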
Experiments carried out on the USGS spectral library showed that (1) in the convergence experiment, the ADSpLRU-FOG and ADSpLRU-FSOG algorithms converged to a slightly lower NMSE than ADSpLRU, with ADSpLRU-FSOG reaching the lowest NMSE among the three methods; (2) in the robustness experiment, ADSpLRU-FOG and ADSpLRU-FSOG achieved higher estimation accuracy than ADSpLRU in terms of SRE under both white and colored noise, and ADSpLRU-FSOG attained a remarkably higher SRE than the other methods; (3) in the visual experiment, ADSpLRU-FOG preserved the first-order gradient structure of the data better than ADSpLRU, while ADSpLRU-FSOG preserved the second-order gradient structure better than both ADSpLRU-FOG and ADSpLRU. Experiments on the Urban and Jasper real hyperspectral data sets showed that the abundance matrices estimated by ADSpLRU-FSOG were more accurate than those obtained by ADSpLRU and ADSpLRU-FOG.
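For reference, the NMSE and SRE figures of merit used in these experiments can be computed as follows (standard definitions assumed; the paper's exact normalization may differ):

```python
import numpy as np

def nmse(X_true, X_est):
    """Normalized mean squared error between true and estimated abundances."""
    return (np.linalg.norm(X_true - X_est, 'fro') ** 2
            / np.linalg.norm(X_true, 'fro') ** 2)

def sre_db(X_true, X_est):
    """Signal-to-reconstruction error in dB (higher is better)."""
    return 10.0 * np.log10(np.linalg.norm(X_true, 'fro') ** 2
                           / np.linalg.norm(X_true - X_est, 'fro') ** 2)
```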
Experimental results suggest that the novel method of adding internal variability to abundance matrix estimation can improve convergence behavior, maintain the structure of first-order and second-order gradient information, obtain comparable estimation accuracy, and enhance the robustness of AE.
