EFCANet: Exposure Fusion Cross-Attention Network for Low-Light Image Enhancement

Bibliographic Details
Main Authors: Zhe Yang, Fangjin Liu, Jinjiang Li
Format: Article
Language: English
Published: MDPI AG 2022-12-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/13/1/380
Description
Summary: Image capture devices produce poor-quality images under low-light conditions, and the resulting images have dark areas due to insufficient exposure. Traditional Multiple Exposure Fusion (MEF) methods fuse images with different exposure levels from a global perspective, which often leads to secondary exposure in well-exposed areas of the original image. Moreover, image sequences with different exposure levels are scarce, so MEF methods are constrained by the available training data and benchmark labels. To address these problems, this paper proposes an Exposure Fusion Cross-Attention Network for low-light image enhancement (EFCANet). EFCANet recovers normal-light images from a single exposure-corrected image. First, an Exposure Image Generator (EIG) estimates the single exposure-corrected image corresponding to the original input image. Then, both the exposure-corrected image and the original input image are converted from RGB to YCbCr color space, in order to maintain the balance between brightness and color. Finally, a Cross-Attention Fusion Module (CAFM) fuses the images in YCbCr color space to achieve enhancement. A single CAFM serves as a recursive unit, and EFCANet applies four such units progressively: the intermediate result produced by the first recursive unit, together with the exposure-corrected image in YCbCr space, is used as the input to the second recursive unit, and so on. We conducted comparison experiments against 14 state-of-the-art methods on eight publicly available datasets. The experimental results demonstrate that images enhanced by EFCANet are of higher quality than those produced by the other methods.
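The summary's second step, converting both images from RGB to YCbCr so that luminance and chroma can be balanced separately, can be sketched as below. This uses the standard full-range JPEG/JFIF (ITU-R BT.601) conversion coefficients; the exact variant used in the paper is an assumption, and `rgb_to_ycbcr` is an illustrative helper, not the authors' code.

```python
# Hedged sketch of the RGB -> YCbCr step described in the abstract.
# Full-range BT.601 (JPEG/JFIF) coefficients; the paper's precise
# conversion matrix is not specified, so this is an assumption.
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple:
    """Convert one 8-bit RGB pixel to full-range YCbCr."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# A neutral gray keeps its luminance and has no chroma offset:
# both Cb and Cr sit at the 128 midpoint.
print(rgb_to_ycbcr(128, 128, 128))
```

Keeping luminance (Y) separate from chroma (Cb, Cr) is what lets a fusion module brighten dark regions without shifting their color, which is the stated motivation for the conversion.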
ISSN:2076-3417