Cross Apprenticeship Learning Framework: Properties and Solution Approaches

Apprenticeship learning is a framework in which an agent learns a policy to perform a given task in an environment using example trajectories provided by an expert. In the real world, one might have access to expert trajectories from several environments in which the system dynamics differ while the learning task is the same. For such scenarios, two types of learning objectives can be defined: one in which the learned policy performs very well in one specific environment, and another in which it performs well across all environments. To balance these two objectives in a principled way, our work presents the cross apprenticeship learning (CAL) framework. It consists of an optimization problem in which an optimal policy is sought for each environment while all policies are constrained to remain close to one another; this nearness is controlled by a single tuning parameter. We derive properties of the optimizers of the problem as the tuning parameter varies, and we identify conditions under which an agent prefers the policy obtained from CAL over that obtained from traditional apprenticeship learning. Since the CAL problem is nonconvex, we provide a convex outer approximation. Finally, we demonstrate the attributes of our framework on a navigation task in a windy gridworld environment.
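To make the consensus structure described above concrete, here is a minimal LaTeX sketch of a CAL-style problem. The notation is assumed for illustration and is not taken from the paper: J_e denotes the apprenticeship-learning objective in environment e, \pi_e the policy for environment e, \bar{\pi} a common reference policy, d a distance between policies, and \varepsilon the single tuning parameter.

    % Hedged sketch of a CAL-style consensus problem over N environments
    % (assumed notation, not necessarily the paper's exact formulation).
    % Each environment keeps its own policy, but every policy must stay
    % within \varepsilon of a shared reference policy \bar{\pi}.
    \begin{equation*}
      \min_{\pi_1, \dots, \pi_N, \, \bar{\pi}}
        \sum_{e=1}^{N} J_e(\pi_e)
      \quad \text{subject to} \quad
        d(\pi_e, \bar{\pi}) \le \varepsilon, \qquad e = 1, \dots, N.
    \end{equation*}

Under this reading, \varepsilon = 0 forces one common policy across all environments, while letting \varepsilon grow decouples the problem into independent per-environment apprenticeship-learning problems; intermediate values interpolate between the two learning objectives the abstract describes.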

Bibliographic Details
Main Authors: Ashwin Aravind (Department of Systems and Control Engineering, Indian Institute of Technology Bombay, Mumbai, India), Debasish Chatterjee (Department of Systems and Control Engineering, Indian Institute of Technology Bombay, Mumbai, India), Ashish Cherukuri (Engineering and Technology Institute Groningen, University of Groningen, Groningen, The Netherlands)
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Open Journal of Control Systems, vol. 2, pp. 36–48
ISSN: 2694-085X
DOI: 10.1109/OJCSYS.2023.3235248
Subjects: Apprenticeship learning; multiagent systems; reinforcement learning; stochastic control
Online Access: https://ieeexplore.ieee.org/document/10011555/