Infrared and Visible Image Fusion Based on Co-Occurrence Analysis Shearlet Transform

This study, based on the co-occurrence analysis shearlet transform (CAST), effectively combines latent low-rank representation (LatLRR) with the regularization of zero-crossing counting in differences to fuse heterogeneous infrared and visible images. First, the source images are decomposed by the CAST method into base-layer and detail-layer sub-images. Second, for the base-layer components, which carry the larger-scale intensity variation, LatLRR, an effective tool for extracting salient information from source images, is applied to generate a saliency map that adaptively weights the fusion of the base-layer images. Meanwhile, zero-crossing counting in differences, a classical optimization technique, is used as the regularization term in constructing the fusion of the detail-layer images. In this way, the gradient information concealed in the source images is extracted as fully as possible, so the fused image carries more abundant edge information. Quantitative and qualitative analysis of experiments on publicly available datasets demonstrates that the proposed method outperforms other state-of-the-art algorithms in enhancing contrast while keeping the fusion result close to the source images.
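
For readers who want a concrete picture of the two-branch pipeline outlined in the abstract, the following Python sketch shows the overall flow under simplifying assumptions: Gaussian smoothing stands in for the CAST base/detail decomposition, a local-contrast map stands in for the LatLRR saliency map, and max-absolute selection stands in for the zero-crossing-counting regularization of the detail layers. The helper names (decompose, saliency, fuse) are illustrative only and do not correspond to the authors' implementation.

# Minimal sketch of the base/detail fusion pipeline described in the abstract,
# assuming grayscale float images of equal size in [0, 1]. None of the stand-in
# operators below reproduce the paper's CAST, LatLRR, or zero-crossing method.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def decompose(img, sigma=5.0):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = gaussian_filter(img, sigma)      # stand-in for the CAST base layer
    return base, img - base                 # detail layer = residual

def saliency(img, size=9):
    """Crude local-contrast saliency, a stand-in for the LatLRR saliency map."""
    local_mean = uniform_filter(img, size)
    return np.abs(img - local_mean) + 1e-6  # small offset avoids zero weights

def fuse(ir, vis):
    """Fuse an infrared and a visible image via weighted base + detail fusion."""
    base_ir, det_ir = decompose(ir)
    base_vis, det_vis = decompose(vis)

    # Base layers: saliency-weighted average, i.e. adaptive per-pixel weights.
    w_ir, w_vis = saliency(ir), saliency(vis)
    base_f = (w_ir * base_ir + w_vis * base_vis) / (w_ir + w_vis)

    # Detail layers: keep the coefficient with the larger magnitude, a simple
    # surrogate for the zero-crossing-counting regularized fusion.
    det_f = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)

    return np.clip(base_f + det_f, 0.0, 1.0)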

Bibliographic Details
Main Authors: Biao Qi, Longxu Jin, Guoning Li, Yu Zhang, Qiang Li, Guoling Bi, Wenhua Wang
Format: Article
Language: English
Published: MDPI AG, 2022-01-01
Series: Remote Sensing, Vol. 14, No. 2, Art. 283 (ISSN 2072-4292)
DOI: 10.3390/rs14020283
Affiliations: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China (B. Qi, L. Jin, G. Li, Y. Zhang, Q. Li, G. Bi); School of Instrument Science and Electrical Engineering, Jilin University, Changchun 130012, China (W. Wang)
Subjects: image fusion; co-occurrence analysis shearlet transform; latent low-rank representation; regularization of zero-crossing counting in differences
Online Access: https://www.mdpi.com/2072-4292/14/2/283