Multi-agent common knowledge reinforcement learning

Cooperative multi-agent reinforcement learning often requires decentralised policies, which severely limit the agents' ability to coordinate their behaviour. In this paper, we show that common knowledge between agents allows for complex decentralised coordination. Common knowledge arises naturally in a large number of decentralised cooperative multi-agent tasks, for example, when agents can reconstruct parts of each other's observations. Since agents can independently agree on their common knowledge, they can execute complex coordinated policies that condition on this knowledge in a fully decentralised fashion. We propose multi-agent common knowledge reinforcement learning (MACKRL), a novel stochastic actor-critic algorithm that learns a hierarchical policy tree. Higher levels in the hierarchy coordinate groups of agents by conditioning on their common knowledge, or delegate to lower levels with smaller subgroups but potentially richer common knowledge. The entire policy tree can be executed in a fully decentralised fashion. As the lowest policy tree level consists of independent policies for each agent, MACKRL reduces to independently learnt decentralised policies as a special case. We demonstrate that our method can exploit common knowledge for superior performance on complex decentralised coordination tasks, including a stochastic matrix game and challenging problems in StarCraft II unit micromanagement.


Bibliographic Details
Main Authors: de Witt, C, Foerster, J, Farquhar, G, Torr, PHS, Boehmer, W, Whiteson, S
Format: Conference item
Language: English
Published: Massachusetts Institute of Technology Press, 2019