Hyperspectral image classification based on convolutional neural networks is a current research hotspot, and advanced models such as dilated convolution and deformable convolution have been developed. However, existing deformable convolutions only shift in the spatial dimension and ignore the difference information between the spectral bands of hyperspectral images. To address this, this paper extends deformable convolution from the spatial dimension to the spectral dimension, designs a spectral deformable convolution, and proposes a spectral deformable convolutional neural network (SDCNN). First, a fully connected layer is used to learn the offsets of the spectral deformable convolution, and linear interpolation is applied to recalibrate features along the spectral dimension. Second, multiple 1×1 convolution layers are used for spectral feature aggregation. Finally, three-dimensional convolution layers extract joint spectral-spatial features. Unlike spatial deformable convolution, spectral deformable convolution shifts only along the spectral dimension, so it can select more suitable feature bands for different classes and improve the discriminability of the model. Experiments on the widely used benchmark datasets Indian Pines, University of Pavia, and University of Houston show that the proposed SDCNN outperforms other deep learning methods and achieves higher classification accuracy under the same sample conditions, with overall accuracies of 98.86% (Indian Pines, 10%), 99.81% (University of Pavia, 5%), and 97.41% (University of Houston, 50), verifying the effectiveness of the method.
Objectives: Convolutional neural networks are currently a research hotspot in hyperspectral image classification, and advanced models such as dilated convolution and deformable convolution have been developed successfully. However, existing deformable convolution modules only shift in the spatial domain and ignore spectral difference information. Therefore, this paper proposes a novel deformable convolution that extends spatial deformation to the spectral domain, and a spectral deformable convolutional neural network (SDCNN) is proposed for hyperspectral image classification. Method: In view of the problems in applying spatial deformable convolution to HSI classification, this paper extends deformable convolution to the spectral dimension and proposes the spectral deformable convolution. Since different land-cover categories may favor different bands, the learnt shifts in the spectral domain can select more appropriate classification bands for each class. Spectral feature extraction can therefore focus on more effective bands, promoting more discriminative features. In this way, offsets need to be learned along only one direction, the spectral dimension, which reduces the computational complexity: the module takes only about half the running time of spatial deformable convolution. More details are as follows. First, a fully connected layer is used to learn the offsets of the spectral deformable convolution, and linear interpolation is used to recalibrate features in the spectral domain. Second, multilayer 1×1 convolutions are used for spectral feature aggregation. Finally, a three-dimensional convolution layer is used to extract joint spectral-spatial features. Results: Experiments were conducted on three widely used datasets: Indian Pines, University of Pavia, and University of Houston. The experimental results demonstrate that SDCNN is superior to other deep learning methods.
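To illustrate the core sampling step described above, the following is a minimal NumPy sketch (not the authors' implementation) of deformable sampling along the spectral axis: given per-band offsets such as those a fully connected layer would predict, each band value is resampled at a fractionally shifted spectral position via linear interpolation between its two neighbouring bands. The function name and the toy offsets are illustrative assumptions.

```python
import numpy as np

def spectral_deform_sample(spectrum, offsets):
    """Resample a 1-D spectrum at fractionally shifted band positions.

    spectrum: (B,) band values for one pixel.
    offsets:  (B,) learned fractional shifts along the spectral axis
              (in the network these would come from a fully connected layer).
    """
    B = spectrum.shape[0]
    pos = np.arange(B) + offsets            # deformed sampling positions
    pos = np.clip(pos, 0, B - 1)            # keep positions inside the spectrum
    lo = np.floor(pos).astype(int)          # left neighbouring band index
    hi = np.minimum(lo + 1, B - 1)          # right neighbouring band index
    frac = pos - lo                         # interpolation weight
    # linear interpolation between the two neighbouring bands
    return (1 - frac) * spectrum[lo] + frac * spectrum[hi]

# Toy example: 6 bands, every band shifted by +0.5 along the spectral axis.
spec = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
out = spectral_deform_sample(spec, np.full(6, 0.5))
# out samples halfway between adjacent bands (clipped at the last band)
```

Because the interpolation weights are differentiable with respect to the offsets, gradients can flow back to the fully connected layer that predicts them, which is what allows the shifts to be learned end to end.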
SDCNN yields the highest classification accuracy, with overall accuracies of 98.86% (Indian Pines, 10%), 99.81% (University of Pavia, 5%), and 97.41% (University of Houston, 50), which verifies the effectiveness of the proposed model. Conclusions: First, comprehensive experiments show that SDCNN achieves the highest accuracy among the compared models on the three popular datasets, which proves the validity of the SDCNN method. Second, the effectiveness of the spectral deformable convolution module is demonstrated by comparison with the traditional dilated convolution module and the spatial deformable convolution module. Finally, SDCNN generalizes better with limited training samples, especially when the number of labeled samples is small.