

DOI: 10.11834/jrs.20221556
Received: 2021-08-16
Revised: 2022-04-11

Research on Lightweight Model of On-orbit Optical Object Detection Based on Depthwise Separable Convolution
吕晓宁, 夏玉立, 赵军锁, 乔鹏
Key Laboratory of Space Integrated Information System, Institute of Software, Chinese Academy of Sciences
Abstract:

Widely used neural network models have complex structures and large numbers of parameters, and therefore consume much of the limited on-board computing and storage resources. Targeting the on-orbit computing platforms of micro/nano satellites, this paper proposes a depthwise separable convolutional neural network model. The model improves the structure of the Yolov4 on-orbit detection network by combining the ideas of the inverted residual structure and channel attention, redesigning local modules to reduce the depth and complexity of the overall network. Spatial convolutions are implemented with separable convolution structures, and the SPP and PANet modules are improved accordingly, which reduces the number of model parameters. The convolution layers and Batch Normalization layers are merged to further speed up forward inference. In addition, drawing on the idea of the Focal loss, the loss function of the detection network is improved to alleviate the imbalance between foreground and background samples. Compared with the original Yolov4 model, the number of parameters is reduced by a factor of about 7 and FLOPs by a factor of about 30 while maintaining a detection accuracy of 94.09%. Comparative experiments with state-of-the-art models such as the Yolo series, SSD, MobileNet, and CenterNet further verify the performance of the proposed algorithm, providing theoretical support for on-orbit object recognition and the filtering of useless data.
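The core lightweighting technique named above is depthwise separable convolution. The following is a minimal PyTorch sketch of the general idea, not the paper's exact SwishBlock module (whose layer counts, channel widths, and expansion ratios are not given here):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel spatial (depthwise)
    convolution followed by a 1x1 pointwise convolution that mixes channels.
    For a 3x3 layer with 256 -> 256 channels this needs about
    3*3*256 + 256*256 ~= 68K weights instead of 3*3*256*256 ~= 590K."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # Pointwise: 1x1 convolution combines information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()  # Swish activation, as the "SwishBlock" name suggests

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))
```

Replacing the standard 3x3 convolutions of a backbone with such blocks is what yields the roughly K*K-fold saving in parameters and FLOPs for wide layers.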

Extended Abstract:

1) Objective: Aircraft are important transport carriers and military targets, so aircraft detection in remote sensing images is of great significance for rescue, early warning, and other applications. However, widely used neural network models have complex structures and large numbers of parameters, and therefore occupy much of the limited computing and storage resources on board a detection satellite. Considering the efficiency and accuracy required for on-orbit detection, lightweighting the neural network can reduce the amount of computation and compress the overall framework by optimizing the computational structure.

2) Method: Starting from depthwise separable convolution, the SwishBlock bottleneck module is built following the construction idea of the inverted residual structure, which expands the features of the network in three aspects simultaneously. This module replaces the ResBlock_body modules that form the main framework of Yolov4. At the same time, the channel attention mechanism of SENet is integrated into the network structure so that different weights are assigned to the extracted feature maps. While keeping channels separated, separable convolutions are used to improve the SPP and PANet structures, reducing the number of model parameters and the memory footprint. The convolution layers and Batch Normalization layers are then merged to further speed up forward inference. Finally, drawing on the Focal loss, the loss function of the detection network is improved to alleviate the imbalance between foreground and background samples.

3) Result: Objective evaluation metrics are used to assess the algorithm from multiple perspectives. The public RSOD dataset and a self-made dataset are used to compare the proposed model against high-performance network models. To verify the rationality of each improvement, step-by-step verification experiments measure both accuracy and processing speed. The trained model is then deployed on an embedded platform to verify the detection speed of the improved Yolov4 model for on-orbit object recognition. Compared with the original model, the number of parameters is reduced by a factor of about 7 and FLOPs by a factor of about 30 while maintaining a recognition accuracy of 94.09%. Comparative experiments with the Yolo series, SSD, MobileNet, CenterNet, and other state-of-the-art network models further confirm the performance of the proposed algorithm.

4) Conclusion: A lightweight on-orbit object detection model is proposed to address the limited on-board computing and storage resources, which cannot support high-precision complex models. Experimental results on both the ground platform and the embedded platform show that the proposed algorithm can detect remote sensing targets effectively while maintaining detection performance. In future work, the scale of the remote sensing datasets should be expanded and the generality of the model across application scenarios improved.
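Two of the speed-oriented choices mentioned above, merging convolution with Batch Normalization and a Focal-style loss, are standard techniques. The sketches below show how they are commonly implemented in PyTorch; they illustrate the general techniques, not the authors' released code, and the function names fuse_conv_bn and focal_loss are hypothetical.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold an inference-time BatchNorm into the preceding convolution.
    BN computes y = gamma * (x - mean) / sqrt(var + eps) + beta, so with
    s = gamma / sqrt(var + eps) the pair collapses to one convolution
    with weights W*s and bias (b - mean)*s + beta."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation,
                      conv.groups, bias=True)
    s = bn.weight / torch.sqrt(bn.running_var + bn.eps)      # per-channel scale
    fused.weight.copy_(conv.weight * s.reshape(-1, 1, 1, 1))
    b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((b - bn.running_mean) * s + bn.bias)
    return fused
```

The Focal loss referenced for the foreground/background imbalance down-weights easy, well-classified samples; a common binary form (Lin et al., 2017) is:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: scales the cross entropy of each prediction by
    (1 - p_t)^gamma, so abundant, easily classified background samples
    contribute little and training focuses on hard foreground examples."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)               # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)   # class-balancing weight
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```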
