Near-invariant blur for depth and 2D motion via time-varying light field analysis

Bibliographic Details
Main Authors: Bando, Yosuke; Raskar, Ramesh; Holtzman, Henry N.
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: en_US
Published: Association for Computing Machinery (ACM), 2013
Online Access: http://hdl.handle.net/1721.1/79901
https://orcid.org/0000-0002-9303-3658
https://orcid.org/0000-0002-3254-3224
author Bando, Yosuke
Raskar, Ramesh
Holtzman, Henry N.
author2 Massachusetts Institute of Technology. Media Laboratory
collection MIT
description Recently, several camera designs have been proposed for making either defocus blur invariant to scene depth or motion blur invariant to object motion. The benefit of such invariant capture is that no depth or motion estimation is required to remove the resulting spatially uniform blur. So far, these techniques have been studied separately for defocus and motion blur, and object motion has been assumed to be 1D (e.g., horizontal). This article explores a more general capture method that makes both defocus blur and motion blur nearly invariant to scene depth and to in-plane 2D object motion. We formulate the problem as capturing a time-varying light field through a time-varying light field modulator at the lens aperture, and perform a 5D (4D light field + 1D time) analysis of all the existing computational cameras for defocus-only and motion-only deblurring, as well as of their hybrids. This leads to the surprising conclusion that focus sweep, previously known as a depth-invariant capture method that moves the plane of focus through a range of scene depths during exposure, is near-optimal, both in depth and 2D motion invariance and in high-frequency preservation, for certain combinations of depth and motion ranges. Using our prototype camera, we demonstrate joint defocus and motion deblurring for moving scenes with depth variation.
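
The focus-sweep claim is easy to probe numerically: during the sweep, every scene depth sees the same family of defocus discs (only time-shifted within the exposure), and an in-plane translation merely slides the disc centers, so the time-integrated point spread functions (PSFs) come out nearly identical. The NumPy sketch below integrates a linearly growing defocus disc along a 2D motion path and prints how little the PSF changes across depth and motion; the sweep rate, grid size, and velocities are illustrative assumptions, not values from the article.

    import numpy as np

    # Toy focus-sweep model; all parameters are illustrative, not from the article.

    def disc_psf(radius, X, Y):
        # Unit-mass disc of the given radius, centered where (X, Y) == (0, 0).
        r = max(radius, 1.0)  # clamp so at least one pixel is always covered
        mask = (X ** 2 + Y ** 2) <= r ** 2
        return mask / mask.sum()

    def focus_sweep_psf(t_focus, vx, vy, sweep_rate=8.0, n=64, steps=200):
        # Integrate defocus discs over the exposure t in [0, 1] while the focal
        # plane sweeps: an object at a given depth is sharp at t = t_focus, and
        # its defocus radius grows linearly as |t - t_focus|. In-plane motion
        # (vx, vy) translates the disc center; the path is centered at t = 0.5
        # so PSF shapes can be compared about a common origin.
        ax = np.arange(n) - n // 2
        X0, Y0 = np.meshgrid(ax, ax)
        psf = np.zeros((n, n))
        for t in np.linspace(0.0, 1.0, steps):
            r = sweep_rate * abs(t - t_focus)
            psf += disc_psf(r, X0 - vx * (t - 0.5), Y0 - vy * (t - 0.5))
        return psf / psf.sum()

    p_ref = focus_sweep_psf(t_focus=0.5, vx=0.0, vy=0.0)     # reference depth, static
    p_depth = focus_sweep_psf(t_focus=0.3, vx=0.0, vy=0.0)   # different depth
    p_motion = focus_sweep_psf(t_focus=0.5, vx=5.0, vy=3.0)  # moving in 2D
    print("L1 PSF change from depth shift:", np.abs(p_ref - p_depth).sum())
    print("L1 PSF change from 2D motion:  ", np.abs(p_ref - p_motion).sum())

To the extent the printed L1 differences are small relative to the total PSF mass of 1, deconvolving the whole frame with a single reference PSF, with no depth or motion estimation, is a good approximation, which is the point of invariant capture.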
first_indexed 2024-09-23T14:01:27Z
format Article
id mit-1721.1/79901
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T14:01:27Z
publishDate 2013
publisher Association for Computing Machinery (ACM)
record_format dspace
spelling mit-1721.1/79901 (record updated 2022-09-28T17:46:44Z)
contributor Program in Media Arts and Sciences (Massachusetts Institute of Technology)
date_accessioned 2013-08-21T18:41:54Z
date_available 2013-08-21T18:41:54Z
date_issued 2013-04
date_other 2012-08
type Article (http://purl.org/eprint/type/ConferencePaper)
issn 0730-0301
citation Yosuke Bando, Henry Holtzman, and Ramesh Raskar. 2013. Near-invariant blur for depth and 2D motion via time-varying light field analysis. ACM Trans. Graph. 32, 2, Article 13 (April 2013), 15 pages.
doi http://dx.doi.org/10.1145/2451236.2451239
journal ACM Transactions on Graphics
rights Creative Commons Attribution-NonCommercial-ShareAlike 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
file_format application/pdf
source MIT Web Domain
title Near-invariant blur for depth and 2D motion via time-varying light field analysis
url http://hdl.handle.net/1721.1/79901
https://orcid.org/0000-0002-9303-3658
https://orcid.org/0000-0002-3254-3224