Learning to reconstruct shapes from unseen classes

© 2018 Curran Associates Inc. All rights reserved. From a single image, humans are able to perceive the full 3D shape of an object by exploiting learned shape priors from everyday life. Contemporary single-image 3D reconstruction algorithms aim to solve this task in a similar fashion, but often end up with priors that are highly biased by training classes. Here we present an algorithm, Generalizable Reconstruction (GenRe), designed to capture more generic, class-agnostic shape priors. We achieve this with an inference network and training procedure that combine 2.5D representations of visible surfaces (depth and silhouette), spherical shape representations of both visible and non-visible surfaces, and 3D voxel-based representations, in a principled manner that exploits the causal structure of how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe performs well on single-view shape reconstruction, and generalizes to diverse novel objects from categories not seen during training.
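
The staged pipeline described in the abstract (2.5D depth-and-silhouette estimation of the visible surface, completion of a spherical shape representation to cover non-visible surfaces, then refinement of a 3D voxel grid) can be illustrated with a short sketch. The PyTorch code below is purely illustrative: the class name GenReStyleSketch, the module names, channel counts, and the 32x32x32 voxel resolution are all assumptions made for this example, not the authors' released implementation.

import torch
import torch.nn as nn

class GenReStyleSketch(nn.Module):
    """Hypothetical three-stage pipeline mirroring the abstract's description:
    2.5D sketch (depth + silhouette) -> spherical-map completion -> voxel refinement.
    Module names, channel counts, and resolutions are illustrative assumptions."""

    def __init__(self):
        super().__init__()
        # Stage 1: predict visible-surface depth and a silhouette mask from the RGB image.
        self.depth_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))   # channel 0: depth, channel 1: silhouette
        # Stage 2: complete a partial spherical map so non-visible surfaces are filled in.
        self.spherical_net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
        # Stage 3: refine the voxel grid obtained by back-projecting the completed spherical map.
        self.voxel_net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1))

    def forward(self, rgb):                              # rgb: (B, 3, H, W)
        depth_sil = self.depth_net(rgb)                   # 2.5D sketch of the visible surface
        partial_sph = self.project_to_sphere(depth_sil)   # geometric resampling (placeholder)
        full_sph = self.spherical_net(partial_sph)        # fill in the unseen part of the sphere
        partial_vox = self.backproject_to_voxels(full_sph)
        return torch.sigmoid(self.voxel_net(partial_vox)) # voxel occupancy probabilities

    def project_to_sphere(self, depth_sil):
        # Placeholder for a fixed depth-to-spherical-map projection; returns one channel.
        return depth_sil[:, :1]

    def backproject_to_voxels(self, spherical):
        # Placeholder for a fixed spherical-map-to-voxel back-projection.
        b = spherical.shape[0]
        return spherical.new_zeros(b, 1, 32, 32, 32)

In a real implementation the two projection steps would be deterministic geometric resampling operations rather than learned layers; they appear here only as placeholders so the staged structure described in the abstract is visible.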

Bibliographic Details
Main Authors: Zhang, Xiuming; Zhang, Zhoutong; Zhang, Chengkai; Tenenbaum, Joshua B.; Freeman, William T.; Wu, Jiajun
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Neural Information Processing Systems (NIPS), 2018
Online Access: https://hdl.handle.net/1721.1/137406

Record Details
Repository ID: mit-1721.1/137406
Type: Conference Paper (http://purl.org/eprint/type/ConferencePaper)
Citation: Zhang, Xiuming, Zhang, Zhoutong, Zhang, Chengkai, Tenenbaum, Joshua B., Freeman, William T. et al. 2018. "Learning to reconstruct shapes from unseen classes."
Publisher: Neural Information Processing Systems (NIPS)
Publisher version: https://papers.nips.cc/paper/7494-learning-to-reconstruct-shapes-from-unseen-classes
Date issued: 2018
Date deposited: 2021-11-04
File format: application/pdf
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.