Neural scene de-rendering
We study the problem of holistic scene understanding. We would like to obtain a compact, expressive, and interpretable representation of scenes that encodes information such as the number of objects and their categories, poses, positions, etc. Such a representation would allow us to reason about and...
Main Authors: | Wu, Jiajun; Tenenbaum, Joshua B; Kohli, Pushmeet |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers (IEEE), 2020 |
Online Access: | https://hdl.handle.net/1721.1/126659 |
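The abstract (see the description field below) outlines the method's structure: an object-proposal-based encoder predicts a structured, disentangled scene description ("scene XML"), a deterministic graphics engine renders that description back into an image, and training minimizes a supervised prediction error plus an unsupervised reconstruction error. The following is a minimal, illustrative sketch of that objective, not the authors' implementation: the scene-XML schema, the `SceneEncoder` and `render` names, and the toy linear "renderer" are all assumptions made for the example.

```python
# Minimal sketch (assumed names, not the authors' code) of the objective
# described in the abstract: encoder -> scene description -> deterministic
# renderer -> image, trained with supervised + reconstruction losses.
import xml.etree.ElementTree as ET

import torch
import torch.nn as nn
import torch.nn.functional as F

# A hypothetical scene-XML-style description (the paper's actual schema is
# not reproduced here): objects with categories, positions, and poses.
SCENE_XML = """
<scene>
  <object category="cube"   x="0.2" y="0.5" pose="0.00"/>
  <object category="sphere" x="0.7" y="0.3" pose="1.57"/>
</scene>
"""
n_objects = len(ET.fromstring(SCENE_XML).findall("object"))  # 2

class SceneEncoder(nn.Module):
    """Maps an image to a flat vector of scene attributes; a stand-in for
    the paper's object-proposal-based encoder."""
    def __init__(self, attr_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, attr_dim),
        )

    def forward(self, image):
        return self.net(image)

def render(attrs):
    """Toy deterministic 'renderer': a fixed (seeded) linear map from
    attributes to pixels. The real decoder is a graphics engine."""
    g = torch.Generator().manual_seed(0)
    proj = torch.randn(attrs.shape[-1], 3 * 32 * 32, generator=g)
    return (attrs @ proj).view(-1, 3, 32, 32)

encoder = SceneEncoder()
images = torch.randn(4, 3, 32, 32)  # batch of input images
gt_attrs = torch.randn(4, 16)       # ground-truth scene descriptions

pred_attrs = encoder(images)
loss_sup = F.mse_loss(pred_attrs, gt_attrs)        # supervised prediction error
loss_rec = F.mse_loss(render(pred_attrs), images)  # unsupervised reconstruction error
(loss_sup + loss_rec).backward()                   # combined training signal
```

Note that in the paper the decoder is a fixed graphics engine rather than a learned module; the toy `render` above is differentiable only so the sketch can run end-to-end with backpropagation.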
author | Wu, Jiajun; Tenenbaum, Joshua B; Kohli, Pushmeet |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
collection | MIT |
description | We study the problem of holistic scene understanding. We would like to obtain a compact, expressive, and interpretable representation of scenes that encodes information such as the number of objects and their categories, poses, and positions. Such a representation would allow us to reason about, and even reconstruct or manipulate, elements of the scene. Previous works have used encoder-decoder-based neural architectures to learn image representations; however, representations obtained in this way are typically uninterpretable, or explain only a single object in the scene. In this work, we propose a new approach to learning an interpretable, distributed representation of scenes. Our approach employs a deterministic rendering function as the decoder, mapping a naturally structured and disentangled scene description, which we name scene XML, to an image. The encoder is thereby forced to perform the inverse of the rendering operation (de-rendering): transforming an input image into the structured scene XML that the decoder used to produce the image. We use an object-proposal-based encoder that is trained by minimizing both the supervised prediction error and the unsupervised reconstruction error. Experiments demonstrate that our approach works well on scene de-rendering with two different graphics engines, and that our learned representation can easily be adapted to a wide range of applications, including image editing, inpainting, visual analogy-making, and image captioning. |
format | Article |
id | mit-1721.1/126659 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2020 |
publisher | Institute of Electrical and Electronics Engineers (IEEE) |
citation | Wu, Jiajun, Joshua B. Tenenbaum, and Pushmeet Kohli. "Neural scene de-rendering." 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, Hawaii: 7035-43. ©2017 Author(s). |
conference | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |
doi | 10.1109/CVPR.2017.744 |
isbn | 978-1-5386-0457-1 |
type | Conference paper (http://purl.org/eprint/type/ConferencePaper) |
rights | Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/) |
title | Neural scene de-rendering |
url | https://hdl.handle.net/1721.1/126659 |