Weighted QMIX: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning

QMIX is a popular Q-learning algorithm for cooperative MARL in the centralised training and decentralised execution paradigm. In order to enable easy decentralisation, QMIX restricts the joint action Q-values it can represent to be a monotonic mixing of each agent’s utilities. However, this restriction prevents it from representing value functions in which an agent’s ordering over its actions can depend on other agents’ actions. To analyse this representational limitation, we first formalise the objective QMIX optimises, which allows us to view QMIX as an operator that first computes the Q-learning targets and then projects them into the space representable by QMIX. This projection returns a representable Q-value that minimises the unweighted squared error across all joint actions. We show in particular that this projection can fail to recover the optimal policy even with access to Q∗, which primarily stems from the equal weighting placed on each joint action. We rectify this by introducing a weighting into the projection, in order to place more importance on the better joint actions. We propose two weighting schemes and prove that they recover the correct maximal action for any joint action Q-values, and therefore for Q∗ as well. Based on our analysis and results in the tabular setting, we introduce two scalable versions of our algorithm, Centrally-Weighted (CW) QMIX and Optimistically-Weighted (OW) QMIX, and demonstrate improved performance on both predator-prey and challenging multi-agent StarCraft benchmark tasks [26].

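As a clarifying sketch of the projection view described in the abstract (the notation Q^mono for the set of monotonically factorisable value functions and the weight floor α are assumed here, not taken verbatim from this record): QMIX can be seen as first computing the Q-learning targets Q and then projecting them onto Q^mono with a uniform squared-error objective, while Weighted QMIX replaces that uniform sum with a weighting w over joint actions,

\Pi_{\mathrm{QMIX}}\, Q := \arg\min_{q \in \mathcal{Q}^{\mathrm{mono}}} \sum_{\mathbf{u} \in \mathbf{U}} \big(Q(s,\mathbf{u}) - q(s,\mathbf{u})\big)^2, \qquad \Pi_{w}\, Q := \arg\min_{q \in \mathcal{Q}^{\mathrm{mono}}} \sum_{\mathbf{u} \in \mathbf{U}} w(s,\mathbf{u})\, \big(Q(s,\mathbf{u}) - q(s,\mathbf{u})\big)^2,

where w(s, u) places more importance on the better joint actions, e.g. by down-weighting the remaining joint actions to some α ∈ (0, 1], which is the role played by the central and optimistic weightings behind CW-QMIX and OW-QMIX.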

Bibliographic Details
Main Authors: Rashid, T, Farquhar, G, Peng, B, Whiteson, S
Format: Conference item
Language: English
Published: NeurIPS 2020
ID: oxford-uuid:973eda8b-12dd-42fe-ab43-92bcc3146a00
Institution: University of Oxford