Deep HDR Deghosting by Motion-Attention Fusion Network
Multi-exposure image fusion (MEF) methods for high dynamic range (HDR) imaging suffer from ghosting artifacts when dealing with moving objects in dynamic scenes. The state-of-the-art methods use optical flow to align low dynamic range (LDR) images before merging, introducing distortion into the aligned LDR images from inaccurate motion estimation due to large motion and occlusion. In place of pre-alignment, attention-based methods calculate the correlation between the reference LDR image and non-reference LDR images, thus excluding misaligned regions in LDR images. Nevertheless, they also exclude the saturated details at the same time. Taking advantage of both the alignment and attention-based methods, we propose an efficient Deep HDR Deghosting Fusion Network (DDFNet) guided by optical flow and image correlation attentions. Specifically, the DDFNet estimates the optical flow of the LDR images by a motion estimation module and encodes that optical flow as a flow feature. Additionally, it extracts correlation features between the reference LDR and other non-reference LDR images. The optical flow and correlation features are employed to adaptably combine information from LDR inputs in an attention-based fusion module. Following the merging of features, a decoder composed of Dense Networks reconstructs the HDR image without ghosting. Experimental results indicate that the proposed DDFNet achieves state-of-the-art image fusion performance on different public datasets.
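The core fusion idea in the abstract, weighting non-reference pixels by their correlation with the reference frame so that misaligned (ghost-prone) regions are suppressed, can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the DDFNet itself: the function name `attention_fuse`, the hand-crafted patch-correlation weighting, and the window size are assumptions for demonstration, whereas the paper's network learns such weights with convolutional attention modules and additionally conditions them on optical-flow features.

```python
import numpy as np

def attention_fuse(ref, non_ref, window=3):
    """Blend a non-reference exposure into the reference frame,
    down-weighting pixels whose local patches disagree with the
    reference (a hand-crafted stand-in for correlation attention).

    ref, non_ref: 2-D float arrays of the same shape, values in [0, 1].
    Returns the fused image, same shape as ref.
    """
    pad = window // 2
    r = np.pad(ref, pad, mode="edge")
    n = np.pad(non_ref, pad, mode="edge")
    h, w = ref.shape
    weight = np.zeros_like(ref)
    for i in range(h):
        for j in range(w):
            pr = r[i:i + window, j:j + window].ravel()
            pn = n[i:i + window, j:j + window].ravel()
            pr = pr - pr.mean()
            pn = pn - pn.mean()
            denom = np.linalg.norm(pr) * np.linalg.norm(pn)
            # Normalised cross-correlation in [-1, 1], mapped to [0, 1].
            corr = (pr @ pn) / denom if denom > 1e-8 else 0.0
            weight[i, j] = 0.5 * (corr + 1.0)
    # High correlation -> trust the non-reference pixel; low correlation
    # (e.g. a moving object) -> fall back to the reference, avoiding ghosts.
    return weight * non_ref + (1.0 - weight) * ref
```

Where the two exposures agree up to a local brightness change, the weight approaches 1 and the non-reference detail is used; where a moving object decorrelates the patches, the weight drops and the reference value dominates. This also illustrates the limitation the abstract notes for purely attention-based methods: saturated regions decorrelate too and get excluded along with the ghosts, which is why DDFNet adds flow features to recover them.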
Main Authors: Yifan Xiao, Peter Veelaert, Wilfried Philips
Format: Article
Language: English
Published: MDPI AG, 2022-10-01
Series: Sensors
Subjects: high dynamic range imaging; image fusion; convolutional neural network; attention module
Online Access: https://www.mdpi.com/1424-8220/22/20/7853
_version_ | 1797470023148634112 |
author | Yifan Xiao; Peter Veelaert; Wilfried Philips
author_facet | Yifan Xiao; Peter Veelaert; Wilfried Philips
author_sort | Yifan Xiao |
collection | DOAJ |
description | Multi-exposure image fusion (MEF) methods for high dynamic range (HDR) imaging suffer from ghosting artifacts when dealing with moving objects in dynamic scenes. The state-of-the-art methods use optical flow to align low dynamic range (LDR) images before merging, introducing distortion into the aligned LDR images from inaccurate motion estimation due to large motion and occlusion. In place of pre-alignment, attention-based methods calculate the correlation between the reference LDR image and non-reference LDR images, thus excluding misaligned regions in LDR images. Nevertheless, they also exclude the saturated details at the same time. Taking advantage of both the alignment and attention-based methods, we propose an efficient Deep HDR Deghosting Fusion Network (DDFNet) guided by optical flow and image correlation attentions. Specifically, the DDFNet estimates the optical flow of the LDR images by a motion estimation module and encodes that optical flow as a flow feature. Additionally, it extracts correlation features between the reference LDR and other non-reference LDR images. The optical flow and correlation features are employed to adaptably combine information from LDR inputs in an attention-based fusion module. Following the merging of features, a decoder composed of Dense Networks reconstructs the HDR image without ghosting. Experimental results indicate that the proposed DDFNet achieves state-of-the-art image fusion performance on different public datasets. |
first_indexed | 2024-03-09T19:30:52Z |
format | Article |
id | doaj.art-9bc0af30e2674174a1efb6d551fa3b8c |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-09T19:30:52Z |
publishDate | 2022-10-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-9bc0af30e2674174a1efb6d551fa3b8c | 2023-11-24T02:27:17Z | eng | MDPI AG | Sensors | ISSN 1424-8220 | 2022-10-01 | 22(20):7853 | DOI: 10.3390/s22207853 | Deep HDR Deghosting by Motion-Attention Fusion Network | Yifan Xiao, Peter Veelaert, Wilfried Philips (Department of Telecommunications and Information Processing, IPI-IMEC, Ghent University, 9000 Ghent, Belgium) | https://www.mdpi.com/1424-8220/22/20/7853 | Keywords: high dynamic range imaging; image fusion; convolutional neural network; attention module |
spellingShingle | Yifan Xiao; Peter Veelaert; Wilfried Philips; Deep HDR Deghosting by Motion-Attention Fusion Network; Sensors; high dynamic range imaging; image fusion; convolutional neural network; attention module |
title | Deep HDR Deghosting by Motion-Attention Fusion Network |
title_full | Deep HDR Deghosting by Motion-Attention Fusion Network |
title_fullStr | Deep HDR Deghosting by Motion-Attention Fusion Network |
title_full_unstemmed | Deep HDR Deghosting by Motion-Attention Fusion Network |
title_short | Deep HDR Deghosting by Motion-Attention Fusion Network |
title_sort | deep hdr deghosting by motion attention fusion network |
topic | high dynamic range imaging; image fusion; convolutional neural network; attention module |
url | https://www.mdpi.com/1424-8220/22/20/7853 |
work_keys_str_mv | AT yifanxiao deephdrdeghostingbymotionattentionfusionnetwork AT peterveelaert deephdrdeghostingbymotionattentionfusionnetwork AT wilfriedphilips deephdrdeghostingbymotionattentionfusionnetwork |