Shape Anchors for Data-Driven Multi-view Reconstruction

We present a data-driven method for building dense 3D reconstructions using a combination of recognition and multi-view cues. Our approach is based on the idea that there are image patches that are so distinctive that we can accurately estimate their latent 3D shapes solely using recognition. We call these patches shape anchors, and we use them as the basis of a multi-view reconstruction system that transfers dense, complex geometry between scenes. We "anchor" our 3D interpretation from these patches, using them to predict geometry for parts of the scene that are relatively ambiguous. The resulting algorithm produces dense reconstructions from stereo point clouds that are sparse and noisy, and we demonstrate it on a challenging dataset of real-world, indoor scenes.
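The abstract describes the core retrieval-and-transfer idea only at a high level. Below is a minimal, hypothetical Python sketch of that idea: match a query patch against a database of patches with known depth, and transfer the retrieved depth only when the match is distinctive. The descriptor, the ratio test, and the data layout are illustrative assumptions, not the paper's implementation.

```
# Illustrative sketch of the "shape anchor" idea: retrieve the nearest
# training patch by appearance and transfer its depth only when the match
# is distinctive (clearly better than the runner-up). All specifics here
# (descriptor, ratio test, database layout) are assumptions for illustration.
import numpy as np

def patch_descriptor(patch):
    """Toy appearance descriptor: normalized intensities plus gradients."""
    p = patch.astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-8)
    gy, gx = np.gradient(p)
    return np.concatenate([p.ravel(), gx.ravel(), gy.ravel()])

def transfer_anchor_depth(query_patch, db_descriptors, db_depths, ratio=0.7):
    """Return the retrieved depth map if the query patch is distinctive,
    otherwise None (the region stays ambiguous and would have to be
    filled in by multi-view cues instead)."""
    q = patch_descriptor(query_patch)
    dists = np.linalg.norm(db_descriptors - q, axis=1)
    best, second = np.argsort(dists)[:2]
    # Ratio test: accept only if the best match clearly beats the runner-up.
    if dists[best] < ratio * dists[second]:
        return db_depths[best]
    return None

# Tiny synthetic example: a database of two 8x8 patches with known depths.
rng = np.random.default_rng(0)
db_patches = [rng.uniform(size=(8, 8)) for _ in range(2)]
db_depths = [np.full((8, 8), d) for d in (1.5, 3.0)]
db_descriptors = np.stack([patch_descriptor(p) for p in db_patches])

query = db_patches[0] + 0.01 * rng.normal(size=(8, 8))  # near-duplicate of patch 0
depth = transfer_anchor_depth(query, db_descriptors, db_depths)
print("anchored depth:" if depth is not None else "ambiguous patch",
      None if depth is None else depth[0, 0])
```

As the abstract notes, patches whose geometry cannot be confidently predicted this way are treated as ambiguous regions whose geometry is predicted from the anchors and multi-view cues.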


Bibliographic Details
Main Authors: Xiao, Jianxiong; Torralba, Antonio; Owens, Andrew Hale; Freeman, William T.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE), 2014
Online Access: http://hdl.handle.net/1721.1/91001
https://orcid.org/0000-0001-9020-9593
https://orcid.org/0000-0002-2231-7995
https://orcid.org/0000-0003-4915-0256
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Type: Conference Paper
Date Issued: December 2013
Citation: Owens, Andrew, Jianxiong Xiao, Antonio Torralba, and William Freeman. "Shape Anchors for Data-Driven Multi-View Reconstruction." 2013 IEEE International Conference on Computer Vision (December 2013).
Published In: Proceedings of the 2013 IEEE International Conference on Computer Vision
DOI: http://dx.doi.org/10.1109/ICCV.2013.461
ISBN: 978-1-4799-2840-8
ISSN: 1550-5499
Funding: American Society for Engineering Education, National Defense Science and Engineering Graduate Fellowship; United States Office of Naval Research, Multidisciplinary University Research Initiative (N000141010933); National Science Foundation (U.S.) (Grant CGV-1212928)
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)