TPFusion: Texture Preserving Fusion of Infrared and Visible Images via Dense Networks

In this paper, we design an infrared (IR) and visible (VIS) image fusion method based on unsupervised dense networks, termed TPFusion. Activity level measurements and fusion rules are indispensable parts of conventional image fusion methods, but designing an appropriate fusion process is time-consuming and complicated. In recent years, deep learning-based methods have been proposed to address this problem; however, for multi-modality image fusion, a single shared network cannot extract effective feature maps from source images captured by different sensors. TPFusion avoids this issue. First, we extract the textural information of the source images. Two densely connected networks are then trained to fuse the textural information and the source images, respectively. In this way, more textural details are preserved in the fused image. Moreover, the loss functions that constrain the two densely connected convolutional networks are designed according to the characteristics of the textural information and the source images, so the fused image retains more of the source images' texture. To validate our method, we conduct comparison and ablation experiments with both qualitative and quantitative assessments. The ablation experiments confirm the effectiveness of TPFusion, and comparisons with existing advanced IR and VIS image fusion methods show that our results are better in both objective and subjective terms: qualitatively, our fused images have higher contrast and richer textural detail; quantitatively, TPFusion outperforms representative existing fusion methods.
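The abstract names the two ingredients (a texture-extraction step and densely connected fusion networks) but not the texture operator or the network hyperparameters. A minimal PyTorch sketch of those two pieces follows, assuming gradient-magnitude (Sobel) texture maps and a small DenseNet-style branch; the names `TextureExtractor`, `DenseBlock`, and `FusionBranch`, along with the growth rate and layer counts, are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextureExtractor(nn.Module):
    """Hypothetical texture map: gradient magnitude via fixed Sobel kernels.
    The paper does not state which operator it uses; this is one common choice."""
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", kx.view(1, 1, 3, 3))
        self.register_buffer("ky", kx.t().contiguous().view(1, 1, 3, 3))

    def forward(self, x):  # x: (B, 1, H, W) grayscale IR or VIS image
        gx = F.conv2d(x, self.kx, padding=1)
        gy = F.conv2d(x, self.ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class DenseBlock(nn.Module):
    """DenseNet-style block: each conv sees the concatenation of the input
    and all earlier layer outputs (dense connectivity)."""
    def __init__(self, in_ch, growth=16, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, 3, padding=1) for i in range(layers)
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)

class FusionBranch(nn.Module):
    """One of the two densely connected branches: one instance would take the
    source images, the other their texture maps, each producing a fused map."""
    def __init__(self):
        super().__init__()
        self.dense = DenseBlock(in_ch=2)        # concatenated IR + VIS channels
        self.out = nn.Conv2d(2 + 4 * 16, 1, 1)  # 1x1 conv back to one channel

    def forward(self, ir, vis):
        return torch.sigmoid(self.out(self.dense(torch.cat([ir, vis], dim=1))))
```

Training would then minimize similarity losses between each branch's output and its inputs (texture-level for one branch, intensity-level for the other); the exact loss terms are defined in the paper and are not reproduced here.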


Bibliographic Details
Main Authors: Zhiguang Yang, Shan Zeng (School of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan 430023, China)
Format: Article
Language: English
Published: MDPI AG, 2022-02-01
Series: Entropy, Vol. 24, Issue 2, Article 294
ISSN: 1099-4300
DOI: 10.3390/e24020294
Subjects: infrared and visible image fusion; texture preserving; densely connected network
Online Access: https://www.mdpi.com/1099-4300/24/2/294