Infer Thermal Information from Visual Information: A Cross Imaging Modality Edge Learning (CIMEL) Framework

The measurement accuracy and reliability of thermography are largely limited by the relatively low spatial resolution of infrared (IR) cameras compared with digital cameras. Using a high-end IR camera to achieve high spatial resolution can be costly or sometimes infeasible due to the high sample rate required. There is therefore a strong demand to improve the quality of IR images, particularly at edges, without upgrading the hardware in surveillance and industrial inspection systems. This paper proposes a novel Conditional Generative Adversarial Network (CGAN)-based framework to enhance IR edges by learning high-frequency features from corresponding visual images. A dual discriminator, focusing on edges and content/background, is introduced to guide the cross imaging modality learning procedure of the U-Net generator in high and low frequencies, respectively. Results demonstrate that the proposed framework can effectively enhance barely visible edges in IR images without introducing artefacts, while content information is well preserved. Unlike most similar studies, this method requires only IR images at test time, which increases its applicability in scenarios where only one imaging modality is available, such as active thermography.


Bibliographic Details
Main Authors: Shuozhi Wang, Jianqiang Mei, Lichao Yang, Yifan Zhao
Format: Article
Language: English
Published: MDPI AG, 2021-11-01
Series: Sensors
Subjects: image enhancement; edge detection; deep learning; thermography
Online Access: https://www.mdpi.com/1424-8220/21/22/7471
collection DOAJ
description The measurement accuracy and reliability of thermography are largely limited by the relatively low spatial resolution of infrared (IR) cameras compared with digital cameras. Using a high-end IR camera to achieve high spatial resolution can be costly or sometimes infeasible due to the high sample rate required. There is therefore a strong demand to improve the quality of IR images, particularly at edges, without upgrading the hardware in surveillance and industrial inspection systems. This paper proposes a novel Conditional Generative Adversarial Network (CGAN)-based framework to enhance IR edges by learning high-frequency features from corresponding visual images. A dual discriminator, focusing on edges and content/background, is introduced to guide the cross imaging modality learning procedure of the U-Net generator in high and low frequencies, respectively. Results demonstrate that the proposed framework can effectively enhance barely visible edges in IR images without introducing artefacts, while content information is well preserved. Unlike most similar studies, this method requires only IR images at test time, which increases its applicability in scenarios where only one imaging modality is available, such as active thermography.
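The abstract's dual discriminator judges edge maps separately from content/background, i.e. it treats an image's high-frequency component (edges) apart from its low-frequency component. As a minimal illustrative sketch of that separation only, and not the authors' CGAN implementation, a gradient-magnitude operator such as Sobel extracts the high-frequency edge content of an image; the toy image and function below are hypothetical:

```python
# Illustrative sketch: a Sobel gradient magnitude isolates high-frequency
# edge content from a low-frequency background. This is NOT the paper's
# CGAN/U-Net pipeline; it only shows the edge/content split the dual
# discriminator relies on. The toy "thermal image" below is made up.

def sobel_edges(img):
    """Return the gradient-magnitude edge map of a 2D list of floats.

    Border pixels are left at 0 because the 3x3 kernels need a full
    neighbourhood.
    """
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image: uniform background with one vertical step edge down the middle.
img = [[0.0] * 3 + [1.0] * 3 for _ in range(6)]
edges = sobel_edges(img)
# Responses concentrate at the step; the flat background maps to zero,
# which is exactly the high-/low-frequency separation described above.
```

In the paper's framework this separation is learned rather than hand-crafted: the edge discriminator supervises the generator on the high-frequency band while the content discriminator preserves the low-frequency band.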
first_indexed 2024-03-10T05:05:03Z
id doaj.art-7b02ecd9a61a4b4a9a3b12daa3483b8f
institution Directory Open Access Journal
issn 1424-8220
last_indexed 2024-03-10T05:05:03Z
doi 10.3390/s21227471
affiliation Shuozhi Wang: School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK
affiliation Jianqiang Mei: School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin 300222, China
affiliation Lichao Yang: School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK
affiliation Yifan Zhao: School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK
title Infer Thermal Information from Visual Information: A Cross Imaging Modality Edge Learning (CIMEL) Framework
topic image enhancement
edge detection
deep learning
thermography