Learning affordances in object-centric generative models
Given visual observations of a reaching task together with a stick-like tool, we propose a novel approach that learns to exploit task-relevant object affordances by combining generative modelling with a task-based performance predictor. The embedding learned by the generative model captures the factors of variation in object geometry, e.g. length, width, and configuration. The performance predictor identifies sub-manifolds correlated with task success in a weakly supervised manner. Using a 3D simulation environment, we demonstrate that traversing the latent space in this task-driven way results in appropriate tool geometries for the task at hand. Our results suggest that affordances are encoded along smooth trajectories in the learned latent space. Given only high-level performance criteria (such as task success), accessing these emergent affordances via gradient descent enables the agent to manipulate learned object geometries in a targeted and deliberate way.
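The core idea the abstract describes, adjusting a learned latent code by gradient descent on a differentiable task-success predictor and then decoding the result, can be illustrated with a minimal sketch. This is not the authors' released code: it assumes a trained encoder/decoder pair and a success predictor, and every function and parameter name below is a hypothetical stand-in.

```python
# Minimal sketch of task-driven latent traversal, assuming a trained
# object-centric generative model (encoder/decoder) and a differentiable
# task-success predictor. All names are illustrative assumptions.
import torch
import torch.nn as nn


def optimise_tool_geometry(encoder, decoder, success_predictor, image,
                           steps=100, lr=0.05):
    """Gradient-ascend the latent code of an observed tool so that
    predicted task success increases, then decode the adjusted geometry."""
    # Encode the observation and make the latent code the optimisation target.
    z = encoder(image).detach().requires_grad_(True)
    optimiser = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        # Minimising the negative prediction maximises predicted success.
        loss = -success_predictor(z).mean()
        loss.backward()
        optimiser.step()
    return decoder(z)  # tool geometry adapted toward task success


if __name__ == "__main__":
    # Stand-in modules purely so the sketch runs end to end.
    enc = nn.Linear(64, 8)                                # features -> latent
    dec = nn.Linear(8, 64)                                # latent -> geometry
    pred = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())   # success probability
    image = torch.randn(1, 64)
    adapted_tool = optimise_tool_geometry(enc, dec, pred, image)
```

Only the latent code is updated during the traversal; the generative model and predictor stay frozen, which is what keeps the optimised geometry on the learned manifold of plausible tools.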
Main Authors: Wu, Y; Kasewa, S; Groth, O; Salter, S; Sun, L; Parker Jones, O; Posner, H
Format: Conference item
Language: English
Published: International Conference on Machine Learning, 2020
author | Wu, Y; Kasewa, S; Groth, O; Salter, S; Sun, L; Parker Jones, O; Posner, H
collection | OXFORD |
format | Conference item |
id | oxford-uuid:003cbbd9-a3aa-42e7-8e2d-bcc6b22db89a |
institution | University of Oxford |
language | English |
publishDate | 2020 |
publisher | International Conference on Machine Learning |
record_format | dspace |
title | Learning affordances in object-centric generative models |