Vol. , Issue (): 1993-2002
Remote sensing images pose serious challenges to the completeness and correctness of segmentation: the data are large in scale, illumination and occlusion conditions are complex, targets are dense and vary widely in scale, and large annotated datasets for training deep networks are scarce. Moreover, repeated pooling in deep convolutional networks significantly reduces feature-map resolution, which degrades per-pixel classification accuracy. To address these problems, this paper builds on the deep convolutional encoder-decoder architecture and proposes an end-to-end remote sensing image segmentation model with full residual connections and multi-scale feature fusion. The model has two advantages. First, the full residual connections, comprising both long-range and short-range skips, simplify the training of the deep network and inject the original input information at the end of each stage, strengthening feature fusion. Second, feature fusion across different scales and through different mechanisms lets the network extract rich contextual information, cope with changes in target scale, and thereby improve segmentation performance. Experiments were conducted on the ISPRS Vaihingen and Road Detection datasets with data augmentation, and the model was evaluated by mean IoU and mean F1-score. Compared with state-of-the-art models and with results reported in the literature, the proposed model performs best, reaching mean IoU of 85% and 84% and mean F1-scores of 92% and 93% on the two datasets, respectively, effectively improving the completeness and correctness of remote sensing image target segmentation.
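The two mechanisms named in the abstract can be illustrated in miniature. The sketch below is an assumption-laden toy, not the authors' network: numpy arrays stand in for feature maps, a `tanh` plus identity stands in for a learned convolutional block, and all function names (`encoder`, `decoder`, `multi_scale_fuse`) are invented here for illustration. It shows a short-range residual inside a block, a long-range residual from encoder to decoder, and fusion of features computed at two resolutions, with the output restored to the input resolution for per-pixel prediction.

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling on an (H, W) feature map (H, W assumed even)
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    # nearest-neighbour 2x upsampling back to the finer resolution
    return x.repeat(2, axis=0).repeat(2, axis=1)

def block(x):
    # toy stand-in for a learned conv block, with a SHORT-range
    # residual connection: output = F(x) + x
    return np.tanh(x) + x

def encoder(x):
    skip = block(x)            # full-resolution features, kept for the skip
    bottom = block(avg_pool2(skip))  # pooling halves the resolution
    return skip, bottom

def decoder(bottom, skip):
    up = upsample2(block(bottom))
    # LONG-range residual: re-inject the encoder's full-resolution
    # features at the end of the decoder stage
    return block(up) + skip

def multi_scale_fuse(x):
    # fuse features computed at scale 1 and scale 1/2 (averaged here;
    # a real model would learn the fusion weights)
    coarse = upsample2(block(avg_pool2(x)))
    return (block(x) + coarse) / 2

x = np.random.rand(8, 8)                  # toy single-channel "image"
skip, bottom = encoder(x)
out = multi_scale_fuse(decoder(bottom, skip))
assert out.shape == x.shape               # resolution preserved end to end
```

The point of the toy is structural: because every stage ends by adding back features from an earlier, higher-resolution point in the network, the resolution lost to pooling is recoverable at the output, which is exactly the property the abstract argues improves per-pixel prediction.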