Near-invariant blur for depth and 2D motion via time-varying light field analysis
Recently, several camera designs have been proposed for either making defocus blur invariant to scene depth or making motion blur invariant to object motion. The benefit of such invariant capture is that no depth or motion estimation is required to remove the resultant spatially uniform blur. So far...
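The abstract's key point is that when blur is made spatially uniform (invariant to depth or motion), a single known point-spread function describes the whole image, so standard non-blind deconvolution suffices. The sketch below is only an illustration of that idea, not the paper's method; the box-shaped PSF and the regularization constant are assumptions chosen for the example.

```python
# Illustrative only: with one spatially uniform PSF, a basic Wiener filter
# recovers the sharp image without any per-pixel depth or motion estimate.
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Deblur an image degraded by one known, spatially uniform PSF.

    blurred : 2D float array, observed image
    psf     : 2D float array, blur kernel (assumed known)
    nsr     : assumed noise-to-signal ratio used as regularization
    """
    # Zero-pad the PSF to image size and center it at the origin.
    pad = np.zeros_like(blurred)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(pad)
    B = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + nsr)
    X = np.conj(H) * B / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((128, 128))
    psf = np.ones((9, 9)) / 81.0  # hypothetical uniform box blur as the PSF
    # Simulate spatially uniform blur via circular convolution in Fourier space.
    pad = np.zeros_like(sharp)
    pad[:9, :9] = psf
    pad = np.roll(pad, (-4, -4), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(pad)))
    recovered = wiener_deconvolve(blurred, psf)
    print("mean reconstruction error:", np.abs(recovered - sharp).mean())
```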
Main Authors: Bando, Yosuke; Raskar, Ramesh; Holtzman, Henry N.
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Language: en_US
Published: Association for Computing Machinery (ACM), 2013
Online Access: http://hdl.handle.net/1721.1/79901 https://orcid.org/0000-0002-9303-3658 https://orcid.org/0000-0002-3254-3224
Similar Items
- Compressive light field photography using overcomplete dictionaries and optimized projections
  by: Marwah, Kshitij, et al.
  Published: (2014)
- BiDi screen: a thin, depth-sensing LCD for 3D interaction using light fields
  by: Hirsch, Matthew Waggener, et al.
  Published: (2011)
- Analyzing spatially-varying blur
  by: Chakrabarti, Ayan, et al.
  Published: (2012)
- Highlighted depth-of-field photography: Shining light on focus
  by: Kim, Jaewon, et al.
  Published: (2011)
- Motion blur removal from photographs
  by: Cho, Taeg Sang
  Published: (2011)