FACMAC: Factored multi-agent centralised policy gradients
Main Authors: | Peng, B; Rashid, T; Schroeder de Witt, CA; Kamienny, P-A; Torr, PHS; Böhmer, W; Whiteson, S |
---|---|
Format: | Conference item |
Language: | English |
Published: | NeurIPS, 2022 |
author | Peng, B Rashid, T Schroeder de Witt, CA Kamienny, P-A Torr, PHS Böhmer, W Whiteson, S |
collection | OXFORD |
description | We propose FACtored Multi-Agent Centralised policy gradients (FACMAC), a new method for cooperative multi-agent reinforcement learning in both discrete and continuous action spaces. Like MADDPG, a popular multi-agent actor-critic method, our approach uses deep deterministic policy gradients to learn policies. However, FACMAC learns a centralised but factored critic, which combines per-agent utilities into the joint action-value function via a non-linear monotonic function, as in QMIX, a popular multi-agent Q-learning algorithm. Unlike QMIX, however, there are no inherent constraints on factoring the critic. We thus also employ a nonmonotonic factorisation and empirically demonstrate that its increased representational capacity allows it to solve some tasks that cannot be solved with monolithic or monotonically factored critics. In addition, FACMAC uses a centralised policy gradient estimator that optimises over the entire joint action space, rather than optimising over each agent's action space separately as in MADDPG. This allows for more coordinated policy changes and fully reaps the benefits of a centralised critic. We evaluate FACMAC on variants of the multi-agent particle environments, a novel multi-agent MuJoCo benchmark, and a challenging set of StarCraft II micromanagement tasks. Empirical results demonstrate FACMAC's superior performance over MADDPG and other baselines on all three domains. |
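The monotonic factorisation described in the abstract can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the authors' implementation: the "hypernetwork" that generates the mixing weights is a random, state-scaled stand-in, and all names, shapes, and sizes (`monotonic_mix`, `hidden = 8`, etc.) are hypothetical. The one essential property it does reproduce is that the mixing weights are forced non-negative (here via `abs`), so the joint Q-value is monotone non-decreasing in every agent's utility, as in QMIX-style mixing.

```python
import numpy as np

def monotonic_mix(agent_qs, state_emb, seed=0):
    """Combine per-agent utilities into one joint Q-value.

    Illustrative stand-in for a QMIX-style mixing network: weights are
    (notionally) produced from the global state by a hypernetwork and made
    non-negative, which guarantees monotonicity in each agent's utility.
    """
    n_agents, hidden = len(agent_qs), 8
    rng = np.random.default_rng(seed)
    # Non-negative mixing weights, crudely conditioned on the state embedding.
    w1 = np.abs(rng.standard_normal((n_agents, hidden))) * (np.abs(state_emb).mean() + 1e-6)
    b1 = rng.standard_normal(hidden)
    w2 = np.abs(rng.standard_normal((hidden, 1)))
    # ReLU is monotone non-decreasing, so the whole map stays monotone.
    h = np.maximum(agent_qs @ w1 + b1, 0.0)
    return (h @ w2).item()

state = np.array([0.1, -0.4, 0.7, 0.2])   # toy global-state embedding
qs = np.array([0.2, -0.5, 1.0])           # per-agent utilities Q_i
q_joint = monotonic_mix(qs, state)
```

Because every weight applied to `agent_qs` is non-negative, raising any single agent's utility can never lower `q_joint`; this is exactly the representational constraint that the nonmonotonic factorisation mentioned in the abstract removes.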
format | Conference item |
id | oxford-uuid:75d97a36-dcc2-4932-bee7-fe319f683a57 |
institution | University of Oxford |
language | English |
publishDate | 2022 |
publisher | NeurIPS |
title | FACMAC: Factored multi-agent centralised policy gradients |