Learning to Act Properly: Predicting and Explaining Affordances from Images
We address the problem of affordance reasoning in the diverse scenes that appear in the real world. Affordances relate an agent's actions to their effects when applied to the surrounding objects. In our work, we take an egocentric view of the scene and aim to reason about action-object affordances...
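The description in this record notes that the proposed model uses Graph Neural Networks to propagate contextual information across the objects in a scene before reasoning about per-object affordances. The record does not reproduce the architecture itself, so the sketch below is only a minimal, illustrative form of such message passing: the feature dimensions, random weights, fully connected scene graph, sum aggregation, and action count are all assumptions for demonstration, not the paper's actual design.

```python
# Minimal, hypothetical sketch of message passing over scene objects.
# Everything here (dimensions, weights, graph structure, readout) is an
# illustrative assumption; it is NOT the architecture from the paper.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy scene: 4 detected objects, each with a 16-d appearance feature
# (in practice these would come from an image backbone).
num_objects, feat_dim, hidden_dim, num_actions = 4, 16, 32, 5
node_feats = rng.normal(size=(num_objects, feat_dim))

# Fully connected scene graph: every object can send context to every other.
adjacency = np.ones((num_objects, num_objects)) - np.eye(num_objects)

# Randomly initialized matrices stand in for learned parameters.
W_embed = rng.normal(scale=0.1, size=(feat_dim, hidden_dim))
W_msg = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_update = rng.normal(scale=0.1, size=(2 * hidden_dim, hidden_dim))
W_readout = rng.normal(scale=0.1, size=(hidden_dim, num_actions))

# Embed each object's features into the hidden space.
h = relu(node_feats @ W_embed)

# A few rounds of message passing: each object aggregates messages from its
# neighbours, then updates its state from [own state, aggregated context].
for _ in range(3):
    messages = relu(h @ W_msg)         # per-object outgoing message
    aggregated = adjacency @ messages  # sum of neighbours' messages
    h = relu(np.concatenate([h, aggregated], axis=1) @ W_update)

# Per-object scores, e.g. one logit per candidate action; in the paper's
# setting these would feed affordance prediction and explanation heads.
affordance_logits = h @ W_readout
print(affordance_logits.shape)  # (4, 5): one score per object per action
```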
Main Authors: | Chuang, Ching-Yao; Li, Jiaman; Torralba, Antonio; Fidler, Sanja |
---|---|
Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Format: | Article |
Language: | English |
Published: | IEEE, 2020 |
Online Access: | https://hdl.handle.net/1721.1/123477 |
_version_ | 1826198088079900672 |
---|---|
author | Chuang, Ching-Yao; Li, Jiaman; Torralba, Antonio; Fidler, Sanja |
author2 | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
author_facet | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Chuang, Ching-Yao; Li, Jiaman; Torralba, Antonio; Fidler, Sanja |
author_sort | Chuang, Ching-Yao |
collection | MIT |
description | We address the problem of affordance reasoning in the diverse scenes that appear in the real world. Affordances relate an agent's actions to their effects when applied to the surrounding objects. In our work, we take an egocentric view of the scene and aim to reason about action-object affordances that respect both the physical world and the social norms imposed by society. We also aim to teach artificial agents why some actions should not be taken in certain situations, and what would likely happen if they were taken. We collect a new dataset building on ADE20k [32], referred to as ADE-Affordance, which contains annotations enabling such rich visual reasoning. We propose a model that exploits Graph Neural Networks to propagate contextual information from the scene in order to perform detailed affordance reasoning about each object. We showcase our model through various ablation studies, pointing to successes and challenges in this complex task. Keywords: cognition; visualization; neural networks; knowledge based systems; task analysis; data collection; robots |
first_indexed | 2024-09-23T10:58:39Z |
format | Article |
id | mit-1721.1/123477 |
institution | Massachusetts Institute of Technology |
language | English |
last_indexed | 2024-09-23T10:58:39Z |
publishDate | 2020 |
publisher | IEEE |
record_format | dspace |
spelling | mit-1721.1/123477 2022-10-01T00:21:14Z. Learning to Act Properly: Predicting and Explaining Affordances from Images. Chuang, Ching-Yao; Li, Jiaman; Torralba, Antonio; Fidler, Sanja. Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. (Abstract and keywords as in the description field above.) Dates: 2020-01-20T18:19:48Z; 2020-01-20T18:19:48Z; 2018-12-17; 2018-06-15; 2019-07-11T17:14:04Z. Type: Article (http://purl.org/eprint/type/ConferencePaper). ISBN: 9781538664209; 9781538664216. ISSN: 2575-7075; 1063-6919. URI: https://hdl.handle.net/1721.1/123477. Citation: Chuang, Ching-Yao et al. "Learning to Act Properly: Predicting and Explaining Affordances from Images." 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 18-23, 2018, Salt Lake City, Utah, USA, IEEE, 2018. Language: en. DOI: http://dx.doi.org/10.1109/cvpr.2018.00108. Conference: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/). Format: application/pdf. Publisher: IEEE. Source: arXiv |
spellingShingle | Chuang, Ching-Yao; Li, Jiaman; Torralba, Antonio; Fidler, Sanja; Learning to Act Properly: Predicting and Explaining Affordances from Images |
title | Learning to Act Properly: Predicting and Explaining Affordances from Images |
title_full | Learning to Act Properly: Predicting and Explaining Affordances from Images |
title_fullStr | Learning to Act Properly: Predicting and Explaining Affordances from Images |
title_full_unstemmed | Learning to Act Properly: Predicting and Explaining Affordances from Images |
title_short | Learning to Act Properly: Predicting and Explaining Affordances from Images |
title_sort | learning to act properly predicting and explaining affordances from images |
url | https://hdl.handle.net/1721.1/123477 |
work_keys_str_mv | AT chuangchingyao learningtoactproperlypredictingandexplainingaffordancesfromimages AT lijiaman learningtoactproperlypredictingandexplainingaffordancesfromimages AT torralbaantonio learningtoactproperlypredictingandexplainingaffordancesfromimages AT fidlersanja learningtoactproperlypredictingandexplainingaffordancesfromimages |