Learning affordances in object-centric generative models
Given visual observations of a reaching task together with a stick-like tool, we propose a novel approach that learns to exploit task-relevant object affordances by combining generative modelling with a task-based performance predictor. The embedding learned by the generative model captures the fact...
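The abstract only sketches the method, but the core idea it describes, a generative model whose latent embedding is shaped jointly by reconstruction and by a task-based performance predictor, can be illustrated compactly. The sketch below is a hypothetical, minimal PyTorch version: the class name `AffordanceVAE`, layer sizes, and the equal loss weighting are illustrative assumptions, not the architecture or hyperparameters from the paper.

```python
import torch
import torch.nn as nn

class AffordanceVAE(nn.Module):
    """Toy encoder-decoder with an auxiliary task-performance head.

    Hypothetical illustration only: sizes, names and the way the
    performance predictor is attached are assumptions, not the
    paper's architecture.
    """

    def __init__(self, obs_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim)
        )
        # Predicts scalar task performance (e.g. reaching success) from z,
        # nudging the latent space to encode task-relevant affordances.
        self.performance_head = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, obs):
        h = self.encoder(obs)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.decoder(z), self.performance_head(z), mu, logvar


def loss_fn(model, obs, performance):
    recon, pred_perf, mu, logvar = model(obs)
    recon_loss = ((recon - obs) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    perf_loss = ((pred_perf.squeeze(-1) - performance) ** 2).mean()
    return recon_loss + kl + perf_loss  # relative weights are a free choice


if __name__ == "__main__":
    model = AffordanceVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    obs = torch.randn(16, 64)   # stand-in for image features
    perf = torch.rand(16)       # stand-in task-performance labels
    loss_fn(model, obs, perf).backward()
    opt.step()
```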
| Main authors: | , , , , , , |
| --- | --- |
| Format: | Conference item |
| Language: | English |
| Published: | International Conference on Machine Learning, 2020 |