Improving the Performance of Image Fusion Based on Visual Saliency Weight Map Combined With CNN
Convolutional neural networks (CNNs), with their deep feature extraction capability, have recently been applied to numerous image fusion tasks. However, fusing infrared and visible images often leads to loss of fine detail and degraded contrast in the fused image. This deterioration in th...
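To make the general idea of saliency-weight-map fusion concrete, below is a minimal sketch of pixel-wise weighted fusion driven by per-image saliency maps. It is not the method described in this paper: the CNN feature-extraction stage is omitted, and the simple blur-based saliency measure, the function names (`saliency_map`, `fuse`), and the parameter values are illustrative assumptions only.

```python
# Minimal sketch of saliency-weight-map image fusion (NOT the paper's exact
# pipeline; the CNN stage is omitted and the saliency measure is an assumption).
import numpy as np
from scipy.ndimage import gaussian_filter


def saliency_map(img: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Crude per-pixel saliency: deviation from a heavily blurred copy."""
    img = img.astype(np.float64)
    return np.abs(img - gaussian_filter(img, sigma=sigma))


def fuse(ir: np.ndarray, vis: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Fuse infrared and visible images with normalized saliency weights."""
    s_ir, s_vis = saliency_map(ir), saliency_map(vis)
    w_ir = s_ir / (s_ir + s_vis + eps)   # weight map in [0, 1]
    w_vis = 1.0 - w_ir
    fused = w_ir * ir.astype(np.float64) + w_vis * vis.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # stand-in infrared frame
    vis = rng.integers(0, 256, (256, 256), dtype=np.uint8)  # stand-in visible frame
    print(fuse(ir, vis).shape)  # (256, 256)
```

In this sketch, each pixel of the fused output is a convex combination of the two inputs, so regions judged more salient in the infrared image contribute more there, and vice versa; the paper's approach additionally exploits CNN-derived features, which this toy example does not attempt to reproduce.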
| Main Authors: | Lei Yan, Jie Cao, Saad Rizvi, Kaiyu Zhang, Qun Hao, Xuemin Cheng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2020-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9044861/ |
Similar Items
- Fusion of visual salience maps for object acquisition
  by: Shlomo Greenberg, et al.
  Published: (2020-06-01)
- Infrared and Visible Image Fusion Based on Visual Saliency Map and Image Contrast Enhancement
  by: Yuanyuan Liu, et al.
  Published: (2022-08-01)
- The Effect of Linguistic and Visual Salience in Visual World Studies
  by: Federica Cavicchio, et al.
  Published: (2014-03-01)
- An Efficient Method for Infrared and Visual Images Fusion Based on Visual Attention Technique
  by: Yaochen Liu, et al.
  Published: (2020-02-01)
- Augmented Grad-CAM++: Super-Resolution Saliency Maps for Visual Interpretation of Deep Neural Network
  by: Yongshun Gao, et al.
  Published: (2023-11-01)