On-the-fly strategy adaptation for ad-hoc agent coordination

Training agents in cooperative settings offers the promise of AI agents able to interact effectively with humans (and other agents) in the real world. Multi-agent reinforcement learning (MARL) has the potential to achieve this goal, demonstrating success in a series of challenging problems. However, whilst these advances are significant, the vast majority of focus has been on the self-play paradigm. This often results in a coordination problem, caused by agents learning to make use of arbitrary conventions when playing with themselves. This means that even the strongest self-play agents may have very low cross-play with other agents, including other initializations of the same algorithm. In this paper we propose to solve this problem by adapting agent strategies on the fly, using a posterior belief over the other agents' strategy. Concretely, we consider the problem of selecting a strategy from a finite set of previously trained agents, to play with an unknown partner. We propose an extension of the classic statistical technique, Gibbs sampling, to update beliefs about other agents and obtain close to optimal ad-hoc performance. Despite its simplicity, our method is able to achieve strong cross-play with unseen partners in the challenging card game of Hanabi, achieving successful ad-hoc coordination without knowledge of the partner's strategy a priori.
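The core idea admits a compact sketch: maintain a posterior over a finite pool of candidate partner strategies, reweight it by the likelihood each candidate assigns to every observed partner action, and respond with the pool policy matched to a type sampled from the posterior. The Python below is an illustrative sketch only: the CandidateStrategy class, its action_prob method, and the surrounding loop are assumptions introduced here for exposition, and the update shown is generic Bayesian reweighting with posterior sampling, not the paper's exact Gibbs-sampling extension.

    import numpy as np

    class CandidateStrategy:
        """One previously trained policy from the finite pool (hypothetical interface)."""

        def action_prob(self, observation, action):
            """Return P(action | observation) under this policy."""
            raise NotImplementedError

    def update_belief(belief, pool, observation, action, eps=1e-8):
        """Bayesian reweighting of the posterior over partner strategies.

        belief : 1-D array summing to 1, the current posterior over the pool.
        pool   : list of CandidateStrategy.
        Each candidate is reweighted by how likely it was to have produced
        the partner action actually observed.
        """
        likelihoods = np.array([p.action_prob(observation, action) for p in pool])
        posterior = belief * (likelihoods + eps)  # eps avoids zeroing out a type
        return posterior / posterior.sum()

    def respond(belief, rng):
        """Sample a partner type from the posterior and return its index,
        i.e. play the pool policy trained alongside the sampled type."""
        return int(rng.choice(len(belief), p=belief))

    # Illustrative loop (pool, episode, and its contents are placeholders):
    # rng = np.random.default_rng(0)
    # belief = np.full(len(pool), 1.0 / len(pool))   # uniform prior over types
    # for observation, partner_action in episode:
    #     belief = update_belief(belief, pool, observation, partner_action)
    #     my_policy = pool[respond(belief, rng)]

Sampling a partner type from the posterior, rather than always committing to the argmax, keeps the responder hedged across plausible partner conventions early in an episode, which is the spirit of the sampling-based belief update described in the abstract.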

Bibliographic Details
Main Authors: Zand, J, Parker-Holder, J, Roberts, SJ
Format: Conference item
Language: English
Published: International Foundation for Autonomous Agents and Multiagent Systems, 2022
Collection: OXFORD
Identifier: oxford-uuid:f391549d-a6f5-4b29-96d6-872d801cfcd8
Institution: University of Oxford