An Image Fusion Method Based on Special Residual Network and Efficient Channel Attention

This paper presents an image fusion network built on a special residual network and an efficient channel attention mechanism. Unlike traditional fusion networks, it is trained end to end and combines the feature-extraction strengths of the attention mechanism with those of the residual network, avoiding the hand-crafted fusion rules and manual operation that traditional methods require. Hierarchical feature fusion is used to merge features effectively, and a combined loss function is designed to improve training and fusion quality. Extensive qualitative and quantitative experiments on different data sets show that, compared with the reference algorithms, the method preserves infrared and visible information more strongly and scores better on objective metrics: across eleven metrics, it is optimal or sub-optimal on 72% of measures for images from the public TNO data set and on 80% for the RoadScene data set, well above the other algorithms. The overall fusion results also agree more closely with human visual perception.
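
The abstract gives no implementation details. Purely as an illustration, the sketch below shows a minimal PyTorch residual block gated by efficient channel attention (ECA), the attention mechanism named in the title; the module names, channel count, and kernel size are assumptions of this sketch, not taken from the paper.

```python
# A minimal sketch (not the authors' code) of a residual block with
# efficient channel attention, in the spirit of ECA-Net. Channel count
# and kernel size k are illustrative assumptions.
import torch
import torch.nn as nn


class ECALayer(nn.Module):
    """Efficient channel attention: global average pooling, a cheap 1-D
    convolution across the channel axis, then a sigmoid gate."""

    def __init__(self, k: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # (B, C, H, W) -> (B, C, 1, 1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.pool(x)                               # (B, C, 1, 1)
        w = self.conv(w.squeeze(-1).transpose(1, 2))   # 1-D conv over channels: (B, 1, C)
        w = torch.sigmoid(w.transpose(1, 2).unsqueeze(-1))
        return x * w                                   # re-weight each channel


class ECAResBlock(nn.Module):
    """Residual block with channel attention applied before the skip connection."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.eca = ECALayer()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.eca(self.body(x))


if __name__ == "__main__":
    # Shape check on a dummy feature map (e.g., features from an infrared/visible pair).
    block = ECAResBlock(channels=64)
    feats = torch.randn(2, 64, 128, 128)
    print(block(feats).shape)                          # torch.Size([2, 64, 128, 128])
```

The combined loss function mentioned in the abstract is likewise not specified here; in infrared-visible fusion such losses typically balance a pixel-intensity term against a structural or gradient term, but the exact formulation is particular to the paper.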

Bibliographic Details
Main Authors: Yang Li, Haitao Yang, Jinyu Wang, Changgong Zhang, Zhengjun Liu, Hang Chen
Format: Article
Language: English
Published: MDPI AG, 2022-09-01
Series: Electronics
Subjects: codec network; deep learning; image fusion; attention mechanism
Online Access: https://www.mdpi.com/2079-9292/11/19/3140
ISSN: 2079-9292
DOI: 10.3390/electronics11193140 (Electronics, vol. 11, no. 19, article 3140, September 2022)
Author affiliations:
Yang Li, Jinyu Wang, Changgong Zhang, Hang Chen: School of Space Information, Space Engineering University, Beijing 101416, China
Haitao Yang: Space Security Research Center, Space Engineering University, Beijing 101416, China
Zhengjun Liu: School of Physics, Harbin Institute of Technology, Harbin 150001, China