Low-Light-Level Image Super-Resolution Reconstruction Based on a Multi-Scale Features Extraction Network


Bibliographic Details
Main Authors: Bowen Wang, Yan Zou, Linfei Zhang, Yan Hu, Hao Yan, Chao Zuo, Qian Chen
Format: Article
Language: English
Published: MDPI AG 2021-08-01
Series: Photonics
Online Access: https://www.mdpi.com/2304-6732/8/8/321
Summary: Wide field-of-view (FOV) and high-resolution (HR) imaging are essential to many applications that require high-content image acquisition. However, owing to insufficient spatial sampling at the image detector and the trade-off between pixel size and photosensitivity, current imaging sensors are limited in the spatial resolution they can attain, especially under low-light-level (LLL) imaging conditions. To address these problems, we propose a multi-scale feature extraction (MSFE) network that realizes pixel-super-resolved LLL imaging. To fuse data and extract information from low-resolution (LR) images, the network combines a channel attention mechanism module with a skip connection module to extract high-frequency detail information across different scales, so that the computation of high-frequency components receives greater attention. Compared with other networks, the peak signal-to-noise ratio of the reconstructed image was increased by 1.67 dB. Extensions of the MSFE network are also investigated for scene-based color mapping of gray images: most of the color information could be recovered, and the similarity with the real image reached 0.728. Qualitative and quantitative experimental results show that the proposed method achieves superior image fidelity and detail enhancement over state-of-the-art methods.
ISSN: 2304-6732
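
Note: the record contains no implementation details for the MSFE network. As a rough illustration of the general pattern the summary describes (a channel attention module combined with a skip connection, applied to feature extraction), the following minimal PyTorch sketch may help. Every class name, layer size, and hyperparameter below is an assumption for illustration, not taken from the paper.

# Minimal sketch of a channel-attention residual block, the general pattern
# the abstract describes (channel attention + skip connection). All names,
# layer sizes, and hyperparameters are illustrative assumptions.
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global spatial average per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))             # reweight feature channels

class AttentionResBlock(nn.Module):
    """Convolutional block with channel attention and an identity skip connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x):
        # Skip connection passes low-frequency content through unchanged,
        # letting the attended branch focus on high-frequency detail.
        return x + self.body(x)

Stacking blocks of this kind at several feature scales, then fusing the resulting feature maps before upsampling, is one plausible reading of "multi-scale feature extraction"; the paper at the Online Access URL above gives the authors' actual architecture.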