Unsupervised Low-Light Image Enhancement via Virtual Diffraction Information in Frequency Domain
With the advent of deep learning, significant progress has been made in low-light image enhancement. However, deep learning requires large amounts of paired training data, which are difficult to capture in real-world scenarios. To address this limitation, this paper presents a novel unsupervised low-light image enhancement method that, for the first time, introduces frequency-domain image features into the low-light enhancement task. Our work is inspired by imagining a digital image as a spatially varying, metaphorical “field of light” and then projecting the influence of physical processes such as diffraction and coherent detection back onto the original image space via a frequency-domain to spatial-domain transformation (the inverse Fourier transform). However, the mathematical model created by this physical process still requires complex manual tuning of its parameters for different scene conditions to achieve the best adjustment. We therefore propose a dual-branch convolutional network that estimates pixel-wise and high-order spatial interactions for dynamic-range adjustment of the frequency features of a given low-light image. Guided by the frequency features of the “field of light” and the parameter-estimation networks, our method enables dynamic enhancement of low-light images. Extensive experiments show that our method compares favorably with state-of-the-art unsupervised methods and approaches the qualitative and quantitative performance of state-of-the-art supervised methods. At the same time, the lightweight network design gives the proposed method an extremely fast inference speed (nearly 150 FPS on an NVIDIA 3090 Ti GPU for a 600 × 400 × 3 image). Furthermore, the potential benefits of our method for object detection in the dark are discussed.
Main Authors: | Xupei Zhang, Hanlin Qin, Yue Yu, Xiang Yan, Shanglin Yang, Guanghao Wang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-07-01 |
Series: | Remote Sensing |
Subjects: | low-light image enhancement; unsupervised learning; physics-inspired computer vision |
Online Access: | https://www.mdpi.com/2072-4292/15/14/3580 |
author | Xupei Zhang, Hanlin Qin, Yue Yu, Xiang Yan, Shanglin Yang, Guanghao Wang |
collection | DOAJ |
description | With the advent of deep learning, significant progress has been made in low-light image enhancement. However, deep learning requires large amounts of paired training data, which are difficult to capture in real-world scenarios. To address this limitation, this paper presents a novel unsupervised low-light image enhancement method that, for the first time, introduces frequency-domain image features into the low-light enhancement task. Our work is inspired by imagining a digital image as a spatially varying, metaphorical “field of light” and then projecting the influence of physical processes such as diffraction and coherent detection back onto the original image space via a frequency-domain to spatial-domain transformation (the inverse Fourier transform). However, the mathematical model created by this physical process still requires complex manual tuning of its parameters for different scene conditions to achieve the best adjustment. We therefore propose a dual-branch convolutional network that estimates pixel-wise and high-order spatial interactions for dynamic-range adjustment of the frequency features of a given low-light image. Guided by the frequency features of the “field of light” and the parameter-estimation networks, our method enables dynamic enhancement of low-light images. Extensive experiments show that our method compares favorably with state-of-the-art unsupervised methods and approaches the qualitative and quantitative performance of state-of-the-art supervised methods. At the same time, the lightweight network design gives the proposed method an extremely fast inference speed (nearly 150 FPS on an NVIDIA 3090 Ti GPU for a 600 × 400 × 3 image). Furthermore, the potential benefits of our method for object detection in the dark are discussed. |
format | Article |
id | doaj.art-7ef7220041024106bc30238ed50663bc |
institution | Directory Open Access Journal |
issn | 2072-4292 |
language | English |
publishDate | 2023-07-01 |
publisher | MDPI AG |
series | Remote Sensing |
spelling | doaj.art-7ef7220041024106bc30238ed50663bc 2023-11-18T21:12:49Z; eng; MDPI AG; Remote Sensing, 2072-4292; 2023-07-01; vol. 15, no. 14, art. 3580; doi: 10.3390/rs15143580; Unsupervised Low-Light Image Enhancement via Virtual Diffraction Information in Frequency Domain; Xupei Zhang, Hanlin Qin, Yue Yu, Xiang Yan, Shanglin Yang, Guanghao Wang (all: School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China); https://www.mdpi.com/2072-4292/15/14/3580; low-light image enhancement; unsupervised learning; physics-inspired computer vision |
title | Unsupervised Low-Light Image Enhancement via Virtual Diffraction Information in Frequency Domain |
topic | low-light image enhancement; unsupervised learning; physics-inspired computer vision |
url | https://www.mdpi.com/2072-4292/15/14/3580 |
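The description field above outlines the method's core pipeline: the image is treated as a virtual “field of light”, transformed to the frequency domain, modulated by diffraction- and coherent-detection-inspired processes, and mapped back to the spatial domain with an inverse Fourier transform, with a dual-branch convolutional network predicting the adjustment parameters. The snippet below is a minimal conceptual sketch of the fixed-parameter frequency-domain step only; it is not the authors' implementation, and the Gaussian transfer function and the `alpha`/`sigma` values are illustrative assumptions rather than quantities from the paper.

```python
# Minimal sketch (NumPy), assuming a Gaussian diffraction-style transfer
# function and hand-set alpha/sigma; in the paper these adjustments are
# predicted by a dual-branch parameter-estimation network.
import numpy as np


def virtual_diffraction_enhance(channel: np.ndarray, alpha: float = 0.8,
                                sigma: float = 30.0) -> np.ndarray:
    """Enhance one image channel (float array in [0, 1], shape (H, W))."""
    h, w = channel.shape

    # Frequency-domain representation of the virtual "field of light".
    spectrum = np.fft.fftshift(np.fft.fft2(channel))

    # Illustrative diffraction-style transfer function centred on the DC term.
    yy, xx = np.meshgrid(np.arange(h) - h / 2.0,
                         np.arange(w) - w / 2.0, indexing="ij")
    transfer = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

    # Dynamic-range adjustment of the frequency feature; a learned, pixel-wise
    # alpha map would replace this scalar in the paper's formulation.
    modulated = spectrum * (1.0 + alpha * transfer)

    # Back to the spatial domain via the inverse Fourier transform.
    out = np.real(np.fft.ifft2(np.fft.ifftshift(modulated)))
    return np.clip(out, 0.0, 1.0)


# Example use on an RGB low-light image `img` with values in [0, 1]:
# enhanced = np.stack(
#     [virtual_diffraction_enhance(img[..., c]) for c in range(3)], axis=-1)
```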