Image-Based View Synthesis
We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position is significantly far away from the viewing cone of the example images ("view extrapolation")...
Main Authors: | Avidan, Shai; Evgeniou, Theodoros; Shashua, Amnon; Poggio, Tomaso |
---|---|
Language: | en_US |
Published: | 2004 |
Subjects: | Image Based Rendering; Trilinear Tensor; Multidimensional Morphing |
Online Access: | http://hdl.handle.net/1721.1/7179 |
_version_ | 1826204107825741824 |
---|---|
author | Avidan, Shai Evgeniou, Theodoros Shashua, Amnon Poggio, Tomaso |
author_facet | Avidan, Shai Evgeniou, Theodoros Shashua, Amnon Poggio, Tomaso |
author_sort | Avidan, Shai |
collection | MIT |
description | We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position is significantly far away from the viewing cone of the example images ("view extrapolation"), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, thus demonstrating the ability of representing an object by a relatively small number of model images --- for the purpose of cheap and fast viewers that can run on standard hardware. |
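The "trilinear tensor" in the abstract relates corresponding points across three views, which lets a novel view be produced by warping pixels directly, with no 3D reconstruction. As a minimal sketch (not the authors' code), the standard trifocal point-transfer operation can be written in NumPy, assuming the usual normalization where the first camera is P1 = [I | 0]:

```python
import numpy as np

def trifocal_tensor(P2, P3):
    """Trifocal tensor T[i, j, k] for cameras P1 = [I | 0] and 3x4 P2, P3.

    Standard construction: T_i^{jk} = a_i^j b_4^k - a_4^j b_i^k,
    where a_i, b_i are the columns of P2 and P3.
    """
    T = np.zeros((3, 3, 3))
    for i in range(3):
        T[i] = np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
    return T

def transfer_point(T, x1, line2):
    """Transfer a point from view 1 to view 3: x3^k ~ x1^i * l2_j * T[i,j,k].

    line2 is any line through the matching point in view 2 (the transfer is
    degenerate only when line2 is the epipolar line of x1).
    """
    return np.einsum('i,j,ijk->k', x1, line2, T)

# Synthetic check with random cameras and a random 3D point:
rng = np.random.default_rng(0)
P2, P3 = rng.standard_normal((3, 4)), rng.standard_normal((3, 4))
X = rng.standard_normal(4)                     # 3D point, homogeneous
x1, x2, x3 = X[:3], P2 @ X, P3 @ X             # projections in the 3 views
line2 = np.cross(x2, rng.standard_normal(3))   # some line through x2
t = transfer_point(trifocal_tensor(P2, P3), x1, line2)
# t is proportional to x3: transfer is exact for noise-free data
```

In the paper's setting, a chain of such tensors is synthesized for the virtual camera path, so the warp from the example images to the novel image never requires an explicit scene model.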
first_indexed | 2024-09-23T12:49:01Z |
id | mit-1721.1/7179 |
institution | Massachusetts Institute of Technology |
language | en_US |
last_indexed | 2024-09-23T12:49:01Z |
publishDate | 2004 |
record_format | dspace |
spelling | mit-1721.1/71792019-04-10T11:52:40Z Image-Based View Synthesis Avidan, Shai Evgeniou, Theodoros Shashua, Amnon Poggio, Tomaso Image Based Rendering Trilinear Tensor Multidimensional Morphing We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position is significantly far away from the viewing cone of the example images ("view extrapolation"), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, thus demonstrating the ability of representing an object by a relatively small number of model images --- for the purpose of cheap and fast viewers that can run on standard hardware. 2004-10-20T20:48:52Z 2004-10-20T20:48:52Z 1997-01-01 AIM-1603 CBCL-145 http://hdl.handle.net/1721.1/7179 en_US AIM-1603 CBCL-145 22 p. 11491930 bytes 2474047 bytes application/postscript application/pdf application/postscript application/pdf |
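The "multi-dimensional interpolation function" for non-rigid motion can be pictured as blending dense correspondence (flow) fields, one per example image, with barycentric weights; a schematic sketch of that idea (an illustration, not the paper's algorithm):

```python
import numpy as np

def blend_flows(flows, weights):
    """Linearly blend dense correspondence fields (each H x W x 2) with
    barycentric weights; the weights are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(flows), axes=1)

# Two toy "example" flows on a 2x2 grid:
f0 = np.zeros((2, 2, 2))                  # identity (no motion)
f1 = np.ones((2, 2, 2))                   # uniform shift by (1, 1)
half = blend_flows([f0, f1], [0.5, 0.5])  # halfway morph: shift by (0.5, 0.5)
```

The blended field would then drive a pixel warp of a reference example image, giving an in-between non-rigid pose at the chosen virtual viewpoint.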
spellingShingle | Image Based Rendering Trilinear Tensor Multidimensional Morphing Avidan, Shai Evgeniou, Theodoros Shashua, Amnon Poggio, Tomaso Image-Based View Synthesis |
title | Image-Based View Synthesis |
title_full | Image-Based View Synthesis |
title_fullStr | Image-Based View Synthesis |
title_full_unstemmed | Image-Based View Synthesis |
title_short | Image-Based View Synthesis |
title_sort | image based view synthesis |
topic | Image Based Rendering Trilinear Tensor Multidimensional Morphing |
url | http://hdl.handle.net/1721.1/7179 |
work_keys_str_mv | AT avidanshai imagebasedviewsynthesis AT evgenioutheodoros imagebasedviewsynthesis AT shashuaamnon imagebasedviewsynthesis AT poggiotomaso imagebasedviewsynthesis |