DCFusion: Dual-Headed Fusion Strategy and Contextual Information Awareness for Infrared and Visible Remote Sensing Image

Bibliographic Details
Main Authors: Qin Pu, Abdellah Chehri, Gwanggil Jeon, Lei Zhang, Xiaomin Yang
Author Affiliations: Qin Pu, Lei Zhang, and Xiaomin Yang: College of Electronics and Information Engineering, Sichuan University, Chengdu 610064, China; Abdellah Chehri: Department of Mathematics and Computer Science, Royal Military College of Canada, Kingston, ON K7K 7B4, Canada; Gwanggil Jeon: Department of Embedded Systems Engineering, Incheon National University, Academy-ro 119, Incheon 22012, Republic of Korea
Format: Article
Language: English
Published: MDPI AG, 2022-12-01
Series: Remote Sensing, vol. 15, no. 1, article 144
ISSN: 2072-4292
DOI: 10.3390/rs15010144
Subjects: image fusion; infrared image; visible image; target maintenance; texture preservation
Online Access: https://www.mdpi.com/2072-4292/15/1/144

Description: In remote sensing, the fusion of infrared and visible images is a common data-processing task. Its aim is to synthesize a single fused image that carries the abundant common and differential information of the source images. Deep-learning-based fusion methods are now widely used for this task; however, existing deep fusion networks fail to effectively integrate the common and differential information of the source images. To alleviate this problem, we propose a dual-headed fusion strategy and contextual information awareness fusion network (DCFusion) that preserves more meaningful information from the source images. First, we extract multi-scale features from the source images with multiple convolution and pooling layers. Then, we propose a dual-headed fusion strategy (DHFS) to fuse the different modal features produced by the encoder; the DHFS effectively preserves both the common and the differential information of these features. Finally, we propose a contextual information awareness module (CIAM) to reconstruct the fused image; the CIAM adequately exchanges information across features at different scales and improves fusion performance. The network was evaluated on the MSRS and TNO datasets, and extensive experiments show that it achieves good performance in target maintenance and texture preservation for the fused images.
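
The description above outlines the architecture only at a high level (a multi-scale convolutional encoder per modality, a dual-headed fusion step, and a context-aware reconstruction module) and gives no layer counts, channel widths, or internal module designs. The PyTorch sketch below is therefore only an assumed illustration of that pipeline, not the authors' implementation: the class names (Encoder, DualHeadFusion, ContextAwareDecoder, DCFusionSketch), the channel widths, and the internals of the fusion and reconstruction modules are all hypothetical.

```python
# Illustrative sketch only: the abstract does not specify layer counts, channel
# widths, or the internal design of DHFS/CIAM, so everything below is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Extracts multi-scale features with stacked convolution + pooling layers."""
    def __init__(self, in_ch=1, widths=(16, 32, 64)):
        super().__init__()
        self.blocks = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(prev, w, 3, padding=1), nn.ReLU(inplace=True)))
            prev = w

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
            x = F.max_pool2d(x, 2)   # downsample before the next scale
        return feats                 # fine-to-coarse list of feature maps


class DualHeadFusion(nn.Module):
    """Assumed DHFS stand-in: one head models common information (mixing the
    concatenated modalities), the other models differential information."""
    def __init__(self, ch):
        super().__init__()
        self.common_head = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.diff_head = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, f_ir, f_vis):
        common = self.common_head(torch.cat([f_ir, f_vis], dim=1))
        diff = self.diff_head(torch.abs(f_ir - f_vis))
        return common + diff


class ContextAwareDecoder(nn.Module):
    """Assumed CIAM stand-in: upsamples coarse features and exchanges them with
    finer scales before predicting the fused image."""
    def __init__(self, widths=(16, 32, 64)):
        super().__init__()
        self.mix = nn.ModuleList(
            nn.Conv2d(widths[i] + widths[i + 1], widths[i], 3, padding=1)
            for i in range(len(widths) - 1))
        self.out = nn.Conv2d(widths[0], 1, 1)

    def forward(self, fused_feats):
        x = fused_feats[-1]
        for i in range(len(fused_feats) - 2, -1, -1):
            x = F.interpolate(x, size=fused_feats[i].shape[-2:],
                              mode="bilinear", align_corners=False)
            x = F.relu(self.mix[i](torch.cat([fused_feats[i], x], dim=1)))
        return torch.sigmoid(self.out(x))


class DCFusionSketch(nn.Module):
    """End-to-end pipeline: per-modality encoders, per-scale fusion, decoder."""
    def __init__(self, widths=(16, 32, 64)):
        super().__init__()
        self.enc_ir = Encoder(1, widths)
        self.enc_vis = Encoder(1, widths)
        self.fuse = nn.ModuleList(DualHeadFusion(w) for w in widths)
        self.dec = ContextAwareDecoder(widths)

    def forward(self, ir, vis):
        feats_ir, feats_vis = self.enc_ir(ir), self.enc_vis(vis)
        fused = [f(a, b) for f, a, b in zip(self.fuse, feats_ir, feats_vis)]
        return self.dec(fused)


if __name__ == "__main__":
    model = DCFusionSketch()
    ir = torch.rand(1, 1, 128, 128)   # infrared input
    vis = torch.rand(1, 1, 128, 128)  # visible (luminance) input
    print(model(ir, vis).shape)       # -> torch.Size([1, 1, 128, 128])
```

In this sketch the "common" head mixes the concatenated infrared and visible features while the "differential" head operates on their absolute difference, which is one plausible reading of "common and differential information"; the actual DHFS and CIAM modules in the paper may be designed quite differently.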