3D Interpreter Networks for Viewer-Centered Wireframe Modeling
Main Authors: | Wu, Jiajun; Tenenbaum, Joshua B; Torralba, Antonio; Freeman, William T |
Other Authors: | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences |
Format: | Article |
Language: | English |
Published: | Springer Nature, 2020 |
Online Access: | https://hdl.handle.net/1721.1/124536 |
_version_ | 1826195774546903040 |
author | Wu, Jiajun Tenenbaum, Joshua B Torralba, Antonio Freeman, William T |
author2 | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences |
author_facet | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences Wu, Jiajun Tenenbaum, Joshua B Torralba, Antonio Freeman, William T |
author_sort | Wu, Jiajun |
collection | MIT |
description | Understanding 3D object structure from a single image is an important but challenging task in computer vision, mostly due to the lack of 3D object annotations for real images. Previous research tackled this problem by either searching for a 3D shape that best explains 2D annotations, or training purely on synthetic data with ground-truth 3D information. In this work, we propose 3D INterpreter Networks (3D-INN), an end-to-end trainable framework that sequentially estimates 2D keypoint heatmaps and 3D object skeletons and poses. Our system learns from both 2D-annotated real images and synthetic 3D data. This is made possible mainly by two technical innovations. First, heatmaps of 2D keypoints serve as an intermediate representation to connect real and synthetic data. 3D-INN is trained on real images to estimate 2D keypoint heatmaps from an input image; it then predicts 3D object structure from heatmaps using knowledge learned from synthetic 3D shapes. By doing so, 3D-INN benefits from the variation and abundance of synthetic 3D objects, without suffering from the domain difference between real and synthesized images, often due to imperfect rendering. Second, we propose a Projection Layer, mapping estimated 3D structure back to 2D. During training, it ensures that 3D-INN predicts 3D structure whose projection is consistent with the 2D annotations on real images. Experiments show that the proposed system performs well on both 2D keypoint estimation and 3D structure recovery. We also demonstrate that the recovered 3D information has wide applications in vision, such as image retrieval. ©2018 |
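The Projection Layer described in the abstract maps the estimated 3D skeleton back to 2D so that a reprojection-consistency loss against the 2D annotations can drive training. A minimal sketch of such a reprojection, assuming a simple perspective camera with rotation `R`, translation `t`, and focal length `f` (an illustrative parameterization, not necessarily the paper's exact camera model):

```python
import numpy as np

def project_keypoints(X, R, t, f):
    """Perspective-project N x 3 3D keypoints to N x 2 image coordinates.

    X : (N, 3) estimated 3D keypoints in object space
    R : (3, 3) camera rotation matrix
    t : (3,)   camera translation
    f : focal length (in pixels)
    """
    Xc = X @ R.T + t                    # transform into camera coordinates
    return f * Xc[:, :2] / Xc[:, 2:3]   # perspective division by depth

# With an identity pose and f = 2, a keypoint at (1, 2, 4)
# projects to (2*1/4, 2*2/4) = (0.5, 1.0).
X = np.array([[1.0, 2.0, 4.0]])
p = project_keypoints(X, np.eye(3), np.zeros(3), 2.0)

# A reprojection loss would then compare p against the annotated
# 2D keypoints, e.g. np.mean((p - p_annotated) ** 2).
```

Because the projection is differentiable in `X`, `R`, and `t`, gradients from the 2D reprojection error can flow back into the 3D structure and pose estimates, which is what lets the network learn 3D structure from images that carry only 2D annotations.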
first_indexed | 2024-09-23T10:15:13Z |
format | Article |
id | mit-1721.1/124536 |
institution | Massachusetts Institute of Technology |
language | English |
last_indexed | 2024-09-23T10:15:13Z |
publishDate | 2020 |
publisher | Springer Nature |
record_format | dspace |
spelling | mit-1721.1/124536 2022-09-30T19:57:13Z 3D Interpreter Networks for Viewer-Centered Wireframe Modeling Wu, Jiajun Tenenbaum, Joshua B Torralba, Antonio Freeman, William T Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science 
©2018 NSF Robust Intelligence (grant 1212849) NSF Big Data (grant 1447476) NSF Robust Intelligence (grant 1524817) ONR MURI (grant N00014-16-1-2007) 2020-04-08T15:54:06Z 2020-04-08T15:54:06Z 2018-09 2017-06 2019-05-28T14:48:45Z Article http://purl.org/eprint/type/JournalArticle 1573-1405 https://hdl.handle.net/1721.1/124536 Wu, Jiajun, et al., "3D Interpreter Networks for Viewer-Centered Wireframe Modeling." International Journal of Computer Vision 126 (2018): p. 1009-26 doi 10.1007/s11263-018-1074-6 ©2018 Author(s) en 10.1007/s11263-018-1074-6 International Journal of Computer Vision Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/ application/pdf Springer Nature arXiv |
spellingShingle | Wu, Jiajun Tenenbaum, Joshua B Torralba, Antonio Freeman, William T 3D Interpreter Networks for Viewer-Centered Wireframe Modeling |
title | 3D Interpreter Networks for Viewer-Centered Wireframe Modeling |
title_full | 3D Interpreter Networks for Viewer-Centered Wireframe Modeling |
title_fullStr | 3D Interpreter Networks for Viewer-Centered Wireframe Modeling |
title_full_unstemmed | 3D Interpreter Networks for Viewer-Centered Wireframe Modeling |
title_short | 3D Interpreter Networks for Viewer-Centered Wireframe Modeling |
title_sort | 3d interpreter networks for viewer centered wireframe modeling |
url | https://hdl.handle.net/1721.1/124536 |
work_keys_str_mv | AT wujiajun 3dinterpreternetworksforviewercenteredwireframemodeling AT tenenbaumjoshuab 3dinterpreternetworksforviewercenteredwireframemodeling AT torralbaantonio 3dinterpreternetworksforviewercenteredwireframemodeling AT freemanwilliamt 3dinterpreternetworksforviewercenteredwireframemodeling |