Domain adaptation classification for hyperspectral remote sensing aims to classify unlabeled target-domain scenes using knowledge from a labeled source domain, and is one of the important approaches to cross-scene hyperspectral classification. Popular domain adaptation methods use adversarial training to align target-domain features with source-domain features, but they overlook the key question of whether source-domain knowledge is sufficiently transferred to the target domain. To effectively extract and transfer source-domain knowledge, this paper proposes a hyperspectral remote sensing adaptation classification method that couples adversarial training with distillation (Unsupervised Domain Adaptation by Adversary Coupled with Distillation, UDAACD). The method refines source-domain information through intra-class sample self-distillation, improving the adaptive classification model's ability to extract supervised knowledge from the source domain. At the same time, a coupled mechanism of knowledge distillation and adversarial training aligns target-domain and source-domain features; the two components complement and reinforce each other, strengthening the transfer of hyperspectral knowledge from the source domain to the target domain and thereby enabling unsupervised classification of target-domain hyperspectral images. Four cross-scene image classification experiments were conducted on the Pavia University, Pavia Center, Houston 2013 and Houston 2018 hyperspectral remote sensing scene datasets. The results show that the proposed model outperforms other hyperspectral domain adaptation methods and achieves higher classification accuracy under the same sample conditions: 91.75% (Pavia University → Pavia Center), 74.41% (Pavia Center → Pavia University), 70.68% (Houston 2013 → Houston 2018) and 67.76% (Houston 2018 → Houston 2013), verifying the robustness of the method.
Unsupervised domain adaptation (UDA) classification aims to classify unlabeled target-domain scenes using knowledge from labeled source-domain data, and is one of the important cross-scene methods in hyperspectral image classification (HSIC). Existing domain adaptation methods for hyperspectral remote sensing data mainly rely on adversarial training to align the features of the target domain with those of the source domain. Although popular UDA approaches with local alignment of the two domains achieve acceptable classification accuracy, they do not consider the key question of whether source-domain knowledge is sufficiently transferred to the target domain. To effectively extract and transfer source-domain knowledge, this paper proposes Unsupervised Domain Adaptation by Adversary Coupled with Distillation (UDAACD) for unsupervised HSIC. In the proposed framework, a dense-based backbone with a Convolutional Block Attention Module extracts rich features representing the categories of the source and target domains. During source-domain training, a self-distillation learning scheme reduces class-wise differences by matching the predictive distributions of samples of the same class. This self-distillation regularization between same-class source-domain samples shrinks intra-class differences in the classification subspace and improves the accuracy with which the source-domain model expresses its knowledge, thereby strengthening the adaptive classification model's ability to refine source-domain supervised knowledge. In addition, a novel mechanism that couples adversarial training with knowledge distillation is presented to guarantee that source-domain knowledge is fully transferred to the target-domain scene during feature alignment.
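The intra-class self-distillation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the regularizer is a symmetric KL divergence between temperature-softened predictions of same-class source samples, and the logits and temperature `T` are hypothetical.

```python
import math

def soft_predictions(logits, T=4.0):
    """Temperature-softened softmax: larger T spreads probability mass."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def self_distillation_loss(class_logits, T=4.0):
    """Average symmetric KL between softened predictions of every pair of
    same-class samples; small when the predictions already agree."""
    preds = [soft_predictions(z, T) for z in class_logits]
    loss, pairs = 0.0, 0
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            loss += 0.5 * (kl_divergence(preds[i], preds[j])
                           + kl_divergence(preds[j], preds[i]))
            pairs += 1
    return loss / max(pairs, 1)

# Two hypothetical source samples of the same class with similar logits.
logits_a = [2.0, 0.5, -1.0]
logits_b = [1.5, 0.8, -0.5]
loss = self_distillation_loss([logits_a, logits_b])
```

Minimizing such a term pulls same-class predictive distributions together, which is the intra-class refinement the abstract attributes to self-distillation.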
Moreover, dual classifiers are employed during adversarial training to eliminate the effect of confused sample predictions: alternately maximizing and minimizing the discrepancy between the two classifiers drives rapid feature alignment without confusion. In this way, knowledge distillation improves the network's recognition ability within each domain while ensuring that hyperspectral source-domain knowledge is fully transferred during feature alignment, which strengthens the model's ability to acquire knowledge in the target domain. Finally, unsupervised classification of the target-domain hyperspectral images is completed after knowledge transfer. Cross-scene HSI classification experiments are conducted on four hyperspectral remote sensing scene datasets: Pavia University, Pavia Center, Houston 2013 and Houston 2018. The results demonstrate that the proposed model is superior to other hyperspectral domain adaptation methods. Under the same sample conditions, the classification accuracy reaches 91.75% (Pavia University to Pavia Center), 74.41% (Pavia Center to Pavia University), 70.68% (Houston 2013 to Houston 2018) and 67.76% (Houston 2018 to Houston 2013). In addition, an ablation study shows that both the self-distillation and the distillation loss in the adversarial training model improve the final unsupervised HSIC accuracy, and the effects of different loss weights and distillation temperatures are analyzed over a range of values. All of these experimental results and analyses verify the validity of the method.
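The dual-classifier adversarial step can be illustrated with the classifier-discrepancy measure used in maximum-classifier-discrepancy-style training. The sketch below is a simplified, hypothetical illustration of that quantity on toy softmax outputs, not the paper's network or training loop.

```python
def discrepancy(p1, p2):
    """Mean absolute difference between two classifiers'
    class-probability vectors for one target sample."""
    return sum(abs(a - b) for a, b in zip(p1, p2)) / len(p1)

def batch_discrepancy(preds1, preds2):
    """Average discrepancy over a batch of target samples.
    In adversarial training of this style:
      - Step A updates the two classifiers to MAXIMIZE this value,
        exposing target samples that fall outside the source support;
      - Step B updates the feature extractor to MINIMIZE it,
        pulling target features toward the source-aligned region."""
    return sum(discrepancy(p, q) for p, q in zip(preds1, preds2)) / len(preds1)

# Hypothetical softmax outputs from the two classifiers on two target samples.
c1 = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
c2 = [[0.5, 0.3, 0.2], [0.2, 0.6, 0.2]]
d = batch_discrepancy(c1, c2)
```

When the two classifiers agree on every target sample, the discrepancy is zero and the minimax game has no further gradient signal, which is the alignment condition the abstract describes.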