A Novel Patch-Based Multi-Exposure Image Fusion Using Super-Pixel Segmentation


Bibliographic Details
Main Authors: Shupeng Wang, Yao Zhao
Format: Article
Language:English
Published: IEEE 2020-01-01
Series:IEEE Access
Online Access:https://ieeexplore.ieee.org/document/9007433/
Description
Summary:A novel multi-exposure image fusion method is proposed to address color distortion and detail loss through adaptive image patch segmentation. First, super-pixel segmentation is used to divide the input images into non-overlapping patches composed of pixels with similar visual properties. Each patch is then decomposed into three independent components: signal strength, image structure, and mean intensity. The three components are fused separately according to characteristics of the human visual system and the exposure level of each input image, while guided filtering removes the blocking artifacts introduced by patch-wise processing. In contrast to existing methods that use fixed-size patches, the proposed method avoids blocking effects and preserves the color attributes of the input images. Experimental results show that the proposed method outperforms state-of-the-art multi-exposure fusion methods in both subjective and objective evaluations.
ISSN:2169-3536
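
The summary describes a patch pipeline of super-pixel segmentation, decomposition of each patch into signal strength, structure, and mean intensity, and component-wise fusion. The sketch below is a minimal, hedged illustration of that idea, not the authors' reference implementation: the SLIC super-pixels, the fusion weights (max strength, strength-weighted structure, well-exposedness-weighted intensity), and all parameter values are assumptions introduced here for clarity.

```python
# Illustrative sketch only: patch decomposition and fusion rules are assumed,
# not taken from the paper's implementation.
import numpy as np
from skimage.segmentation import slic

def decompose(patch):
    """Split a patch (N x 3 array of RGB pixels) into the three components
    named in the abstract: mean intensity, signal strength, structure."""
    mean = patch.mean(axis=0)                  # mean intensity component
    centered = patch - mean
    strength = np.linalg.norm(centered)        # signal strength (contrast)
    structure = centered / (strength + 1e-12)  # unit-norm structure component
    return mean, strength, structure

def fuse_superpixel_patches(images, n_segments=400):
    """Fuse a list of aligned exposures (H x W x 3 float arrays in [0, 1])
    patch-by-patch over super-pixels computed on the average exposure."""
    avg = np.mean(images, axis=0)
    labels = slic(avg, n_segments=n_segments, compactness=10, start_label=0)
    fused = np.zeros_like(avg)

    for lab in np.unique(labels):
        mask = labels == lab
        comps = [decompose(img[mask]) for img in images]  # one patch per exposure

        # Assumed fusion rules for illustration:
        #  - strength: keep the maximum contrast across exposures
        #  - structure: strength-weighted average, renormalized to unit norm
        #  - intensity: weight exposures by closeness to mid-gray
        strengths = np.array([c[1] for c in comps])
        fused_strength = strengths.max()

        structures = np.array([c[2] for c in comps])
        s = (strengths[:, None, None] * structures).sum(axis=0)
        fused_structure = s / (np.linalg.norm(s) + 1e-12)

        means = np.array([c[0] for c in comps])
        w = np.exp(-0.5 * ((means.mean(axis=1) - 0.5) / 0.2) ** 2)
        fused_mean = (w[:, None] * means).sum(axis=0) / (w.sum() + 1e-12)

        fused[mask] = fused_strength * fused_structure + fused_mean

    # The paper additionally applies guided filtering to suppress residual
    # blocking artifacts (e.g. cv2.ximgproc.guidedFilter could serve here).
    return np.clip(fused, 0.0, 1.0)
```

A usage example would pass a list of exposure-bracketed frames, e.g. `fuse_superpixel_patches([under, mid, over])`, where each frame is a float RGB array in [0, 1]; because super-pixels adapt to image content, the patch boundaries follow object edges rather than a fixed grid, which is the property the abstract credits for reduced blocking and better color preservation.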