Near-Infrared Image Colorization Using Asymmetric Codec and Pixel-Level Fusion

Bibliographic Details
Main Authors: Xiaoyu Ma, Wei Huang, Rui Huang, Xuefeng Liu
Format: Article
Language: English
Published: MDPI AG, 2022-10-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/12/19/10087
Summary: This paper studies the colorization of near-infrared (NIR) images. Existing image colorization methods cannot be directly extended to NIR images, since the NIR band lies outside the visible spectrum and NIR intensity is often linearly independent of the luminance of the corresponding RGB image. Furthermore, both CNN-based and CycleGAN-based colorization networks typically use a symmetric codec as the backbone, which cannot guarantee the feature extraction ability of the encoder. To address these problems, we propose a novel NIR colorization method using an asymmetric codec (ACD) and pixel-level fusion. The ACD improves the feature extraction ability of the encoder by allowing information to flow deeper into the model and by learning more non-redundant information. In addition, a global and local feature fusion network (GLFFNet) is embedded between the encoder and the decoder to improve the prediction of subtle color information in the image. The ACD and GLFFNet together constitute the colorization network (ColorNet) of this paper. Bilateral filtering and weighted least squares filtering (BFWLS) are used to fuse pixel-level information from the input NIR image into the raw output of the ColorNet. Finally, an extensive comparative analysis on common datasets verifies the superiority of the proposed method over existing methods in both qualitative and quantitative visual assessments.
ISSN: 2076-3417
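
The summary describes two ideas concrete enough to illustrate. First, the asymmetric codec (ACD): an encoder deliberately deeper than the decoder, so feature extraction is not constrained by a mirror-image decoder. The abstract does not give the actual architecture, so the PyTorch sketch below is only a rough illustration of the asymmetry: the class name AsymmetricCodec, the channel widths, and the layer counts are all assumptions, and the GLFFNet the paper embeds between encoder and decoder is omitted. The design point is simply that the encoder spends two convolutional blocks per scale while the decoder spends one, shifting capacity to the analysis side.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, stride=1):
    # 3x3 conv -> batch norm -> ReLU
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

def up_block(cin, cout):
    # stride-2 transposed conv that doubles spatial resolution
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class AsymmetricCodec(nn.Module):
    """Illustrative asymmetric codec (layer counts/widths are assumptions):
    the encoder stacks two conv blocks per scale, the decoder only one
    up-block per scale, so most capacity sits on the encoding side."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, 64),   conv_block(64, 64, stride=2),     # 1/2
            conv_block(64, 128), conv_block(128, 128, stride=2),   # 1/4
            conv_block(128, 256), conv_block(256, 256, stride=2),  # 1/8
        )
        self.decoder = nn.Sequential(
            up_block(256, 128),                                    # 1/4
            up_block(128, 64),                                     # 1/2
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),                                             # RGB in [-1, 1]
        )

    def forward(self, nir):                      # nir: (B, 1, H, W)
        return self.decoder(self.encoder(nir))   # -> (B, 3, H, W)

x = torch.randn(1, 1, 256, 256)
print(AsymmetricCodec()(x).shape)  # torch.Size([1, 3, 256, 256])
```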
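
Second, the BFWLS pixel-level fusion. The abstract states only that bilateral filtering and weighted least squares filtering fuse the NIR input into ColorNet's raw output, without spelling out the fusion rule. Below is a minimal sketch under an assumed detail-injection scheme: smooth the NIR image with a bilateral filter followed by a WLS filter to obtain a base layer, take the residual as a detail layer, and add it to the luminance of the colorized result. The function names bfwls_fuse and wls_smooth and all filter parameters are hypothetical; the WLS solver follows the standard formulation of Farbman et al. (2008), not necessarily the paper's variant.

```python
import cv2
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_smooth(img, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving weighted-least-squares smoothing (Farbman et al., 2008).
    img: single-channel float32 array in [0, 1]."""
    h, w = img.shape
    n = h * w
    log_img = np.log(img + eps)
    # smoothness weights, small across strong edges; zero-padded so that
    # no smoothing edge crosses the image border
    wx = lam / (np.abs(np.diff(log_img, axis=1)) ** alpha + eps)
    wx = np.pad(wx, ((0, 0), (0, 1))).ravel()
    wy = lam / (np.abs(np.diff(log_img, axis=0)) ** alpha + eps)
    wy = np.pad(wy, ((0, 1), (0, 0))).ravel()
    left = np.concatenate(([0.0], wx[:-1]))   # weight to left neighbor
    up = np.concatenate((np.zeros(w), wy[:-w]))  # weight to upper neighbor
    diag = 1 + wx + wy + left + up
    # (I + weighted Laplacian) u = img, solved as a sparse linear system
    A = sp.diags([diag, -wx[:-1], -wx[:-1], -wy[:-w], -wy[:-w]],
                 [0, 1, -1, w, -w], format='csc')
    return spsolve(A, img.ravel()).reshape(h, w)

def bfwls_fuse(nir, colorized, lam=1.0):
    """Hypothetical fusion rule: inject the NIR detail layer (NIR minus its
    bilateral+WLS base layer) into the luminance of the colorized image.
    nir: uint8 grayscale; colorized: uint8 BGR from the colorization net."""
    base = cv2.bilateralFilter(nir, d=9, sigmaColor=75, sigmaSpace=75)
    base = wls_smooth(base.astype(np.float32) / 255.0, lam=lam)
    detail = nir.astype(np.float32) / 255.0 - base
    lab = cv2.cvtColor(colorized, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab[..., 0] = np.clip(lab[..., 0] + 255.0 * detail, 0, 255)
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
```

Operating on the luminance channel only is one plausible reading of "pixel-level fusion": it preserves the chrominance predicted by the network while restoring the NIR image's fine structure.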