Deep Learning L2 Norm Fusion for Infrared & Visible Images
Fusion is a strategy for combining data from multiple images in order to improve information quality. Infrared images can distinguish objects from their surroundings based mainly on radiation differences, and they work in all weather conditions, whether day or night.
Main Authors: | H. Shihabudeen, J. Rajeesh |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2022-01-01 |
Series: | IEEE Access |
Subjects: | Artificial neural networks, fusion, infrared, neural networks, visible |
Online Access: | https://ieeexplore.ieee.org/document/9748135/ |
author | H. Shihabudeen, J. Rajeesh |
collection | DOAJ |
description | Fusion is a strategy for combining data from multiple images in order to improve information quality. Infrared images can distinguish objects from their surroundings based mainly on radiation differences, and they work in all weather conditions, whether day or night. Visible images capture texture information with high visual precision and a level of detail that matches the human visual system. It is therefore attractive to integrate the thermal radiation information of infrared images with the precise visual information of visible images. The presented algorithm utilises the $\ell_2$ norm and a combination of residual networks to combine the complementary information from the two image modalities. The encoder consists of convolutional layers with selected residual connections, in which the output of each layer is connected to every other layer. The $\ell_2$-norm approach is then used to fuse the two feature maps, and finally a decoder reconstructs the fused image. The large mutual information value of 14.85084 indicates that the fused image retains more complementary information than either the infrared or the visible image alone. The large entropy value of 6.92286 indicates that the fused image carries more information content, including more edge information. The proposed architecture collects more pixel values from both the infrared and the visible image, and the fused image looks more natural because it contains more textural content. The proposed system achieves noteworthy performance compared with existing models. |
first_indexed | 2024-12-21T10:40:40Z |
format | Article |
id | doaj.art-9c413dbad5d3459ebc7ee1728153e529 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-21T10:40:40Z |
publishDate | 2022-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
title | Deep Learning L2 Norm Fusion for Infrared & Visible Images |
topic | Artificial neural networks, fusion, infrared, neural networks, visible |
url | https://ieeexplore.ieee.org/document/9748135/ |
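The description above outlines a pipeline with an encoder, an $\ell_2$-norm fusion step, and a decoder, but it does not spell out the fusion rule itself. Below is a minimal sketch, in PyTorch, of one common $\ell_2$-norm (activity-level) fusion strategy consistent with that description; the softmax weighting, the tensor shapes, and the helper name `l2_norm_fuse` are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def l2_norm_fuse(feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
    """Fuse two encoder feature maps of shape (N, C, H, W).

    Hypothetical sketch: the per-pixel activity of each feature map is its
    channel-wise l2 norm; a softmax over the two activity maps yields
    per-pixel fusion weights that sum to one.
    """
    # Channel-wise l2 norms -> activity maps of shape (N, 1, H, W)
    act_ir = feat_ir.norm(p=2, dim=1, keepdim=True)
    act_vis = feat_vis.norm(p=2, dim=1, keepdim=True)
    # Per-pixel softmax weights over the two modalities
    w = F.softmax(torch.cat([act_ir, act_vis], dim=1), dim=1)
    w_ir, w_vis = w[:, 0:1], w[:, 1:2]
    # Weighted sum of the feature maps; a decoder would map this back to an image
    return w_ir * feat_ir + w_vis * feat_vis

# Example: fuse two 64-channel encoder outputs
if __name__ == "__main__":
    f_ir = torch.randn(1, 64, 128, 128)
    f_vis = torch.randn(1, 64, 128, 128)
    fused = l2_norm_fuse(f_ir, f_vis)
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

On the reported metrics: Shannon entropy over a 256-bin grey-level histogram, $H(F) = -\sum_i p_i \log_2 p_i$, is the standard definition and is bounded by 8 bits for 8-bit images, which is consistent with the reported value of 6.92286; the paper's exact evaluation settings are not given in this record.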