Social interaction for efficient agent learning from human reward


Bibliographic Details
Main Authors: Li, G, Whiteson, S, Knox, W, Hung, H
Format: Journal article
Published: Springer US 2017
description Learning from rewards generated by a human trainer observing an agent in action has proven to be a powerful method for teaching autonomous agents to perform challenging tasks, especially for non-technical users. Since the efficacy of this approach depends critically on the reward the trainer provides, we consider how the interaction between the trainer and the agent should be designed to increase the efficiency of the training process. This article investigates the influence of the agent's socio-competitive feedback on the human trainer's training behavior and on the agent's learning. The results of our user study with 85 participants suggest that the agent's passive socio-competitive feedback (showing the performance and scores of agents trained by other trainers on a leaderboard) substantially increases participants' engagement in the game task and improves the agents' performance, even though the participants do not play the game directly but instead train the agent to do so. Moreover, making this feedback active (sending the trainer her agent's performance relative to others) induces still more participants to train agents for longer and further improves the agents' learning. Our further analysis shows that agents trained by trainers exposed to both the passive and active social feedback obtain higher performance under a score mechanism optimized from the trainer's perspective, and that the agent's additional active social feedback keeps participants training agents to learn policies that perform better under such a score mechanism.
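The learning-from-human-reward setting described above can be sketched in a few lines. This is a minimal, hypothetical simplification (loosely TAMER-style): the agent learns a model H(s, a) of the trainer's reward and acts greedily with respect to it. The state/action sizes, learning rate, and the simulated trainer's approval rule are all illustrative assumptions, not details from the article, whose actual study used a game task with 85 human trainers.

```python
import random

N_STATES, N_ACTIONS, ALPHA = 5, 2, 0.2

def train(n_steps=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # H[s][a]: learned model of the human trainer's reward for action a in state s
    H = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(n_steps):
        s = rng.randrange(N_STATES)
        # epsilon-greedy action selection over the learned reward model
        if rng.random() < epsilon:
            a = rng.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: H[s][x])
        # simulated trainer (stand-in for a human): approves action 1 in
        # even-numbered states and action 0 in odd-numbered ones
        good = 1 if s % 2 == 0 else 0
        feedback = 1.0 if a == good else -1.0
        # move H(s, a) toward the trainer's feedback signal
        H[s][a] += ALPHA * (feedback - H[s][a])
    return H

H = train()
# greedy policy induced by the learned human-reward model
policy = [max(range(N_ACTIONS), key=lambda a: H[s][a]) for s in range(N_STATES)]
```

After training, the greedy policy matches the simulated trainer's preferences; with a real human in the loop, the feedback signal would come from the trainer's button presses rather than a fixed rule.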
institution University of Oxford