Learning affordances in object-centric generative models
Given visual observations of a reaching task together with a stick-like tool, we propose a novel approach that learns to exploit task-relevant object affordances by combining generative modelling with a task-based performance predictor. The embedding learned by the generative model captures the fact...
Main Authors: Wu, Y; Kasewa, S; Groth, O; Salter, S; Sun, L; Parker Jones, O; Posner, H
Format: Conference item
Language: English
Published: International Conference on Machine Learning, 2020
Similar Items
- Reconstruction bottlenecks in object-centric generative models
  by: Engelcke, M, et al.
  Published: (2020)
- GENESIS: generative scene inference and sampling of object-centric latent representations
  by: Engelcke, M, et al.
  Published: (2020)
- APEX: Unsupervised, object-centric scene segmentation and tracking for robot manipulation
  by: Wu, Y, et al.
  Published: (2021)
- Object-centric generative models for robot perception and action
  by: Wu, Y
  Published: (2023)
- DreamUp3D: object-centric generative models for single-view 3D scene understanding and real-to-sim transfer
  by: Wu, Y, et al.
  Published: (2024)