
DOI: 10.11834/jrs.20254505

Received: 2024-11-05

Revised: 2025-03-28

Cloud Removal in Remote Sensing Imagery Based on Multimodal Consistency
ZHU Shudan1, LEI Fan2, ZHANG Lijun1, YANG Min1, YANG Kaijun2, WEI Jide2, FENG Ruyi3
1. Hunan Institute of Remote Sensing Geological Survey and Monitoring; 2. The Second Surveying and Mapping Institute of Hunan Province; 3. China University of Geosciences (Wuhan), Wuhan
Abstract:

Cloud occlusion is a long-standing challenge in optical remote sensing image processing: conventional cloud removal methods often fail to fully restore the details of occluded regions, degrading image quality. To address this problem, this paper proposes a cloud removal method based on multi-modal feature consistency fusion (Cloud-Harmonizer), which exploits the complementarity and consistency of SAR and optical imagery in representing ground features to effectively restore information in cloud-covered regions. The method consists of three core modules: the Multi-modal Feature Consistency Module (MFCM) aligns optical and SAR features and generates difference attention over cloud-covered regions; the Consistency-Constrained Compensation Module (CCCM) uses this difference attention to guide SAR data in compensating for the missing features of the optical image; and the Multi-modal Collaborative Adaptive Fusion Module (MCAF) further integrates the features of the two modalities through an adaptive fusion strategy to improve the overall restoration. Experiments on the SEN12MS-CR dataset show that the method achieves a PSNR of 30.0408, an SSIM of 0.9004, and a SAM of 7.6068, outperforming current state-of-the-art methods. These results demonstrate the strong potential of the proposed method for cloud removal and feature restoration, and provide a useful reference for the fusion of multi-modal remote sensing data and the development of cloud removal techniques.
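The attention-then-compensation idea behind the MFCM and CCCM modules can be sketched in a minimal NumPy form. This is an illustrative simplification, not the authors' implementation: the real modules are learned networks, whereas here the difference attention is just a normalized per-pixel feature distance, and the function names (`difference_attention`, `compensate`) are hypothetical. It assumes both modalities have already been encoded into feature maps of identical shape (H, W, C).

```python
import numpy as np

def difference_attention(opt_feat, sar_feat):
    """MFCM-style step (illustrative): turn the per-pixel distance between
    optical and SAR features into an attention map in [0, 1]; large values
    flag regions where the optical features disagree with SAR (e.g. clouds)."""
    diff = np.linalg.norm(opt_feat - sar_feat, axis=-1)            # (H, W)
    return (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)  # normalize

def compensate(opt_feat, sar_feat, attn):
    """CCCM-style step (illustrative): where attention is high (cloud-affected),
    lean on SAR features; elsewhere keep the optical features."""
    a = attn[..., None]                                            # (H, W, 1)
    return (1 - a) * opt_feat + a * sar_feat
```

With a synthetic "cloud" pixel whose optical features deviate from SAR, the attention peaks at that pixel and the compensated map falls back to the SAR features there while leaving clear pixels untouched.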

Cloud Removal in Remote Sensing Imagery Based on Multimodal Consistency
Abstract:

Objective: Cloud occlusion remains a persistent challenge in optical remote sensing imagery, as conventional cloud removal methods often fail to fully restore details in occluded areas, leading to degraded image quality. Clouds not only obscure critical ground information but also introduce noise and artifacts during reconstruction, limiting the imagery's utility for applications such as land cover monitoring, disaster assessment, and environmental studies. To address this issue, this paper proposes a cloud removal approach based on multi-modal feature consistency fusion (Cloud-Harmonizer). The framework leverages the complementary characteristics and consistency between Synthetic Aperture Radar (SAR) and optical imagery to effectively restore cloud-occluded regions and generate high-quality reconstructed optical images. Unlike traditional methods that rely solely on temporal or spatial interpolation, this approach capitalizes on the inherent advantage of SAR data, which is unaffected by cloud cover, to guide the reconstruction process and ensure the authenticity of restored areas. By integrating multi-modal data, the method improves both structural and spectral recovery in cloud-affected images.

Method: The Cloud-Harmonizer framework comprises three core modules for feature extraction, alignment, and fusion of SAR and optical images. The Multi-modal Feature Consistency Module (MFCM) maps features from both modalities into a shared vector space and generates modality-difference attention to locate cloud-affected regions, ensuring compatibility between the feature representations of the two modalities for precise occlusion identification. The Consistency-Constrained Compensation Module (CCCM) uses this difference attention to guide SAR data in compensating for missing features in the optical imagery, enabling reconstruction that closely resembles the actual scene. The Multi-modal Collaborative Adaptive Fusion Module (MCAF) employs a self-attention-based adaptive fusion strategy to optimize the integration of the two modalities and enhance overall reconstruction quality. This modular design enables accurate compensation and robust feature fusion under varied environmental conditions, including dense cloud coverage and complex terrain, and the framework dynamically adjusts the fusion process to the characteristics of the input data, making it suitable for diverse remote sensing scenarios.

Result: Experiments on the SEN12MS-CR dataset validate the method's effectiveness. The Cloud-Harmonizer framework achieves a Peak Signal-to-Noise Ratio (PSNR) of 30.0408, a Structural Similarity Index (SSIM) of 0.9004, and a Spectral Angle Mapper (SAM) of 7.6068, improving on existing cloud removal methods. These quantitative results indicate the model's ability to recover detailed information while maintaining structural and spectral consistency in the reconstructed images. Comparative analyses show that the proposed approach preserves textures, edges, and other details while reducing artifacts in cloud-occluded regions, and qualitative evaluations confirm that the reconstructed images have a natural visual appearance, validating the framework's robustness.

Conclusion: The experimental results demonstrate the potential of the Cloud-Harmonizer framework for cloud removal and feature restoration in optical remote sensing imagery. By exploiting multi-modal data fusion, the method addresses cloud occlusion while enhancing feature consistency between the SAR and optical modalities, benefiting from the complementary characteristics of the two data types to reconstruct occluded areas accurately while maintaining image quality. The framework's modular and adaptive design provides a foundation for exploring more sophisticated fusion strategies and for extending the approach to other remote sensing challenges. With the growing demand for high-quality remote sensing data, Cloud-Harmonizer may serve as a viable solution for improving the usability of optical imagery in cloud-prone environments.
