The performance of supervised hyperspectral image classification largely depends on the quantity and quality of labeled samples. However, labeling hyperspectral data is a difficult and time-consuming procedure. When the number of labeled samples is too low, supervised classifiers suffer from a high risk of overfitting. In this paper, we address the problem of unsupervised domain adaptation, where labeled samples of old images (source domain) are used to classify new hyperspectral data (target domain). Existing domain adaptation methods aim to learn domain-invariant features in a new space. However, a large proportion of these methods align only the overall statistics of the two domains, neglecting to reduce the spectral shifts within each class. Other methods attempt to align every class of the source and target domains simultaneously. On the one hand, these methods can be misled by incorrect label information. On the other hand, aligning multiple classes jointly may reduce data separability, because samples from different classes become mixed. In this paper, we propose a class-independent domain adaptation algorithm for hyperspectral image classification. Our method first constructs an independent subspace for each class and then aligns the samples of the two domains in that subspace. In each class-independent subspace, the posterior probabilities of the target domain are learned from the aligned samples. The posterior probabilities obtained from the multiple subspaces are then fused to produce the classification labels, increasing the confidence of the results. Finally, the classification labels are smoothed and used as pseudolabels for iterative learning. Moreover, we present a strategy for selecting a representative sample set to obtain better subspaces. Experimental results on two real hyperspectral datasets show that our proposed method achieves high classification accuracy.
Compared with the joint domain adaptation algorithm, the accuracy of our proposed method with the nearest neighbor classifier is improved by 9.56% on Honghu data and 18.45% on Wen-County data.
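The per-class pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the subspace construction (PCA per class), the domain alignment (a simple mean shift), and the distance-based posteriors are all simplifying assumptions chosen to make the structure of the method concrete; all names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hyperspectral pixels: labeled source, unlabeled target.
n_bands, n_classes = 8, 2
Xs = rng.normal(size=(40, n_bands))
ys = rng.integers(0, n_classes, size=40)
Xt = rng.normal(size=(30, n_bands))

def class_subspace(X, k=3):
    """Top-k principal directions of X (one simple choice of per-class subspace)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T  # (n_bands, k) projection matrix

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

posteriors = []
for c in range(n_classes):
    P = class_subspace(Xs[ys == c])      # subspace built from class c only
    Zs, Zt = Xs @ P, Xt @ P              # project both domains into it
    Zt = Zt - Zt.mean(0) + Zs.mean(0)    # crude mean alignment of the domains
    # Negative distance to each class mean in this subspace -> class scores.
    means = np.stack([Zs[ys == k].mean(0) for k in range(n_classes)])
    scores = -np.linalg.norm(Zt[:, None, :] - means[None], axis=2)
    posteriors.append(softmax(scores))

# Fuse the posteriors from all class-independent subspaces, then take the
# argmax as pseudolabels for the next iteration (smoothing omitted here).
fused = np.mean(posteriors, axis=0)
pseudo_labels = fused.argmax(axis=1)
```

In the full method these pseudolabels would be spatially smoothed and fed back to refine the subspaces and alignment over several iterations.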