Display Visibility Improvement Through Content and Ambient Light-Adaptive Image Enhancement


Bibliographic Details
Main Authors: Junmin Lee, Heejin Lee, Seunghyun Lee, Junho Heo, Jiwon Lee, Byung Cheol Song
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10220105/
Description
Summary: An image on a display device under strong illuminance can be perceived as darker than the original due to the nature of the human visual system (HVS). To alleviate this degradation in software, existing schemes employ global luminance compensation or tone mapping. However, since such approaches focus on restoring luminance only, they have a fundamental drawback: chrominance cannot be sufficiently restored. Moreover, previous approaches seldom provide acceptable visibility because they do not consider the content of the input image. Furthermore, because they focus mainly on global image quality, they may yield unsatisfactory quality in certain local areas. This paper introduces VisibilityNet, a neural network model designed to restore both chrominance and luminance. Leveraging VisibilityNet, we generate an optimally enhanced dataset tailored to the ambient light conditions. Using the generated dataset and a convolutional neural network (CNN), we then estimate weighted piece-wise linear enhancement curves (WPLECs) that account for both ambient light and image content. These WPLECs effectively enhance global contrast in both luminance and chrominance. Finally, through a salient object detection algorithm that emulates the HVS, visibility enhancement is achieved not only over the whole image but also in visually salient areas. We verified the performance of the proposed method by comparing it with five existing approaches on a dataset we built ourselves, using two quantitative metrics. Experimental findings substantiate that the proposed method surpasses the alternatives by significantly improving visibility.
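As a rough illustration of the curve-based enhancement the abstract describes, the sketch below applies a weighted combination of piece-wise linear curves to a luminance channel. This is not the paper's method: the knot positions, curve values, and blending weights here are hypothetical placeholders, whereas in the paper they would be predicted by a CNN from image content and ambient light.

```python
import numpy as np

def apply_plec(luma, knots_x, knots_y):
    """Map luminance values in [0, 1] through one piece-wise linear curve.

    knots_x, knots_y define the curve's control points. In the paper these
    would be estimated by a CNN; here they are fixed toy values.
    """
    return np.interp(luma, knots_x, knots_y)

def blend_curves(luma, curves, weights):
    """Weighted sum of several piece-wise linear curves (the 'weighted' in WPLEC).

    curves: list of (knots_x, knots_y) pairs; weights: non-negative, summing to 1.
    """
    out = np.zeros_like(luma)
    for (kx, ky), w in zip(curves, weights):
        out += w * apply_plec(luma, kx, ky)
    return out

# Toy example: lift shadows and midtones more than highlights, as a display
# under strong ambient light would require.
luma = np.linspace(0.0, 1.0, 5)
curve_a = (np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.7, 1.0]))  # lifts midtones
curve_b = (np.array([0.0, 1.0]), np.array([0.0, 1.0]))            # identity curve
enhanced = blend_curves(luma, [curve_a, curve_b], [0.6, 0.4])
# At luma = 0.5: 0.6 * 0.7 + 0.4 * 0.5 = 0.62, i.e. midtones are brightened.
```

A full pipeline would apply such curves to both luminance and chrominance channels and weight them per region (e.g. more aggressively in salient areas), which the toy example above does not attempt.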
ISSN: 2169-3536