Contiguous Loss for Motion-Based, Non-Aligned Image Deblurring

Bibliographic Details
Main Authors: Wenjia Niu, Kewen Xia, Yongke Pan
Format: Article
Language: English
Published: MDPI AG, 2021-04-01
Series: Symmetry
Subjects: motion-based image; image deblurring; convolutional neural networks; contiguous blurry loss; spatio-temporal framework
Online Access: https://www.mdpi.com/2073-8994/13/4/630
collection DOAJ
description In general dynamic scenes, blurring results from the motion of multiple objects, camera shake, or scene depth variations. As an inverse process, deblurring extracts a sharp video sequence from the information contained in a single blurry image, which makes it an ill-posed computer vision problem. To reconstruct these sharp frames, traditional methods build several convolutional neural networks (CNNs), one per generated frame, resulting in expensive computation. To overcome this problem, an innovative framework is proposed that generates several sharp frames from one CNN model: the motion-blurred image is fed into the framework, its spatio-temporal information is encoded by several convolutional and pooling layers, and the model outputs several sharp frames. Moreover, a blurry image has no one-to-one correspondence with a sharp video sequence, since different video sequences can produce similar blurry images, so neither the traditional pixel2pixel loss nor the perceptual loss is suitable for non-aligned data. To alleviate this problem and model the blurring process, a novel contiguous blurry loss function is proposed that measures the loss on non-aligned data. Experimental results show that the proposed model combined with the contiguous blurry loss can generate sharp video sequences efficiently and performs better than state-of-the-art methods.
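The core idea behind a loss on non-aligned data is to model the blurring process itself: a motion-blurred image is approximately the temporal average of the sharp frames captured during the exposure, so the predicted frames can be re-blurred and compared against the observed blurry input, with no frame-by-frame alignment to a ground-truth sequence required. A minimal NumPy sketch of that idea (the paper's exact formulation may differ; all names here are illustrative):

```python
import numpy as np

def contiguous_blurry_loss(pred_frames, blurry_img):
    """Sketch of a re-blurring loss: average the predicted sharp frames
    over time and penalize the distance to the observed blurry image.

    pred_frames: array of shape (N, H, W) -- N predicted sharp frames
    blurry_img:  array of shape (H, W)    -- the observed blurry input
    """
    # Approximate the blur process: temporal average of predicted frames.
    reblurred = pred_frames.mean(axis=0)
    # Compare the re-blurred estimate to the real blurry image (L2).
    return float(np.mean((reblurred - blurry_img) ** 2))

# Toy example: three "sharp" frames whose temporal average equals the
# blurry input, so the loss is zero even though no single frame matches.
frames = np.stack([np.full((2, 2), v) for v in (0.0, 0.5, 1.0)])
blurry = np.full((2, 2), 0.5)
print(contiguous_blurry_loss(frames, blurry))  # → 0.0
```

In a real training setup the averaging would mirror the camera's exposure model (e.g., a weighted sum over more frames), and the L2 distance could be replaced by any differentiable image metric.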
first_indexed 2024-03-10T12:28:24Z
id doaj.art-d245b431ecb04ae5bf9134afcecff9f6
institution Directory Open Access Journal
issn 2073-8994
last_indexed 2024-03-10T12:28:24Z
spelling doaj.art-d245b431ecb04ae5bf9134afcecff9f6; 2023-11-21T14:52:49Z; Symmetry (MDPI AG), ISSN 2073-8994, Vol. 13, Iss. 4, Art. 630, published 2021-04-01; DOI: 10.3390/sym13040630; Contiguous Loss for Motion-Based, Non-Aligned Image Deblurring; Wenjia Niu, Kewen Xia, Yongke Pan (all: School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300401, China); https://www.mdpi.com/2073-8994/13/4/630
topic motion based image
image deblurring
convolutional neural networks
contiguous blurry loss
spatio-temporal framework
url https://www.mdpi.com/2073-8994/13/4/630