QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning

In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations. We structurally enforce that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods.
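The structural idea described in the abstract, that the joint action-value Q_tot is monotonic in each per-agent value Q_a (so a per-agent argmax recovers the joint argmax), can be illustrated with a small state-conditioned mixing network whose weights are constrained to be non-negative. The sketch below is illustrative only, not the authors' reference implementation; PyTorch is assumed, and the class name, layer sizes, and hypernetwork layout are chosen for exposition.

```python
# Illustrative QMIX-style monotonic mixer (sketch, not the paper's code).
# Hypernetworks map the global state to mixing weights; taking their absolute
# value keeps the weights non-negative, making Q_tot monotonic in each Q_a.
import torch
import torch.nn as nn


class MonotonicMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks conditioned on the global state (training-time only).
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) per-agent chosen action-values
        # state:    (batch, state_dim) global state
        bs = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.view(bs, 1, self.n_agents), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2   # (batch, 1, 1)
        return q_tot.view(bs, 1)


# Usage sketch: combine per-agent Q-values into Q_tot for a TD loss.
mixer = MonotonicMixer(n_agents=3, state_dim=48)
q_tot = mixer(torch.randn(8, 3), torch.randn(8, 48))  # shape (8, 1)
```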


Bibliographic Details
Main Authors: Rashid, T, Samvelyan, M, Schroeder de Witt, C, Farquhar, G, Foerster, J, Whiteson, S
Format: Conference item
Published: Journal of Machine Learning Research, 2018
Institution: University of Oxford
Collection: OXFORD
Record ID: oxford-uuid:4e16ec00-f9e2-48ef-83fe-92e2b845fb87