Mutual alignment transfer learning

Training robots for operation in the real world is a complex, time-consuming and potentially expensive task. Despite the significant success of reinforcement learning in games and simulations, research in real robot applications has not been able to match similar progress. While sample complexity can be reduced by training policies in simulation, such policies can perform sub-optimally on the real platform given imperfect calibration of model dynamics. We present an approach – supplemental to fine-tuning on the real robot – to further benefit from parallel access to a simulator during training and reduce sample requirements on the real robot. The developed approach harnesses auxiliary rewards to guide the exploration of the real-world agent based on the proficiency of the agent in simulation, and vice versa. In this context, we demonstrate empirically that the reciprocal alignment of both agents provides further benefit, as the agent in simulation can adjust its behaviour to optimize for states commonly visited by the real-world agent.
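
The abstract describes the mechanism only at a high level. One plausible realisation of such reciprocal auxiliary rewards (an assumption here, not stated in this record) is a discriminator trained to distinguish simulated from real state visitations, with its output converted into an alignment reward for each agent. The Python sketch below is purely illustrative; every name in it (discriminator_logit, auxiliary_rewards, lambda_aux) is hypothetical and not taken from the paper.

    import numpy as np

    def discriminator_logit(state, params):
        # Hypothetical linear discriminator: a positive logit means the
        # state "looks real", a negative logit means it "looks simulated".
        return float(np.dot(params, state))

    def auxiliary_rewards(state, params):
        # Reciprocal alignment: each agent is rewarded for visiting states
        # the discriminator attributes to the other domain, pulling the two
        # state-visitation distributions towards each other.
        logit = discriminator_logit(state, params)
        # -logaddexp(0, -x) == log sigmoid(x), computed stably.
        r_aux_sim = -np.logaddexp(0.0, -logit)   # sim agent: favour "real-looking" states
        r_aux_real = -np.logaddexp(0.0, logit)   # real agent: favour "sim-proficient" states
        return r_aux_sim, r_aux_real

    def shaped_reward(task_reward, aux_reward, lambda_aux=0.1):
        # Each agent optimizes its environment reward plus a weighted
        # alignment term; lambda_aux trades off task progress vs. alignment.
        return task_reward + lambda_aux * aux_reward

Under this reading, each agent adds its alignment term to its own environment reward, so the simulated policy is drawn towards states the real robot actually visits while the real robot's exploration is guided towards states where the simulated agent is already proficient.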

Bibliographic Details
Main Authors: Wulfmeier, M; Posner, I; Abbeel, P
Format: Conference item
Published: Journal of Machine Learning Research, 2017
Collection: OXFORD
ID: oxford-uuid:2fdcbdf1-f2c3-41bc-8af9-d11470cf033c
Institution: University of Oxford