Learning sparse relational transition models
© 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. We present a representation for describing transition models in complex uncertain domains using relational rules. For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. Feed-forward neural networks are used to learn the transition distribution on the relevant objects' properties. This strategy is demonstrated to be both more versatile and more sample efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table.
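The abstract sketches the algorithmic structure: per-action rules, deictic references chosen greedily, and a feed-forward network predicting the relevant objects' next properties. The fragment below is a minimal, hypothetical Python sketch of that structure, not the authors' code; the names `DeicticReference`, `SparseRule`, `relevant_objects`, `greedy_construct`, `fit_predictor`, and `score` are assumptions made for illustration and do not come from the paper or this record.

```python
# Illustrative sketch only -- not the paper's implementation. All class and
# function names here are hypothetical; the actual rule representation,
# scoring criterion, and network architecture may differ.
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence, Tuple

State = Dict[str, Dict[str, float]]   # object name -> property -> value
Action = Tuple[str, Sequence[str]]    # (action name, argument object names)


@dataclass
class DeicticReference:
    """Names objects relative to the action arguments, e.g. 'object above arg0'."""
    name: str
    select: Callable[[State, Action], Sequence[str]]  # returns relevant object names


@dataclass
class SparseRule:
    """Deictic references pick the relevant objects; a learned predictor maps
    their current properties to a distribution over their next properties.
    In the paper, this predictor's role is played by a feed-forward network."""
    references: List[DeicticReference]
    predictor: Callable[[Sequence[float]], Tuple[Sequence[float], Sequence[float]]]


def relevant_objects(refs: List[DeicticReference], s: State, a: Action) -> List[str]:
    """Collect the action arguments plus every object named by a deictic reference."""
    objs: List[str] = list(a[1])
    for ref in refs:
        objs.extend(o for o in ref.select(s, a) if o not in objs)
    return objs


def greedy_construct(candidates: List[DeicticReference], data,
                     fit_predictor, score) -> SparseRule:
    """Iteratively add the deictic reference that most improves a validation
    score (e.g. held-out log-likelihood); stop when no candidate helps."""
    chosen: List[DeicticReference] = []
    best = score(fit_predictor(chosen, data), chosen, data)
    improved = True
    while improved:
        improved = False
        for ref in candidates:
            if ref in chosen:
                continue
            trial = chosen + [ref]
            trial_score = score(fit_predictor(trial, data), trial, data)
            if trial_score > best:
                best, chosen, improved = trial_score, trial, True
    return SparseRule(chosen, fit_predictor(chosen, data))
```

In this sketch, `fit_predictor` and `score` are left abstract: under the paper's setup one would expect the former to train a feed-forward network on the selected objects' properties and the latter to evaluate predictive likelihood on held-out transitions.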
Main Authors: | Xia, V, Wang, Z, Allen, K, Silver, T, Kaelbling, LP |
---|---|
Format: | Article |
Language: | English |
Published: | 2021 |
Online Access: | https://hdl.handle.net/1721.1/132315 |
author | Xia, V Wang, Z Allen, K Silver, T Kaelbling, LP |
collection | MIT |
description | © 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. We present a representation for describing transition models in complex uncertain domains using relational rules. For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. An iterative greedy algorithm is used to construct a set of deictic references that determine which objects are relevant in any given state. Feed-forward neural networks are used to learn the transition distribution on the relevant objects' properties. This strategy is demonstrated to be both more versatile and more sample efficient than learning a monolithic transition model in a simulated domain in which a robot pushes stacks of objects on a cluttered table. |
format | Article |
id | mit-1721.1/132315 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2021 |
record_format | dspace |
spelling | mit-1721.1/132315; 2021-09-21T03:16:25Z; Learning sparse relational transition models; Xia, V Wang, Z Allen, K Silver, T Kaelbling, LP; (abstract as in the description field above); 2021-09-20T18:21:48Z; 2021-09-20T18:21:48Z; 2020-12-22T16:20:59Z; Article; http://purl.org/eprint/type/ConferencePaper; https://hdl.handle.net/1721.1/132315; en; https://openreview.net/forum?id=SJxsV2R5FQ; 7th International Conference on Learning Representations, ICLR 2019; Creative Commons Attribution-Noncommercial-Share Alike; http://creativecommons.org/licenses/by-nc-sa/4.0/; application/pdf; MIT web domain |
title | Learning sparse relational transition models |
url | https://hdl.handle.net/1721.1/132315 |