Learning to Communicate with Deep Multi-Agent Reinforcement Learning
We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are...
Main Authors: | Foerster, J; Assael, Y; de Freitas, N; Whiteson, S |
---|---|
Format: | Conference item |
Published: | Massachusetts Institute of Technology Press, 2016 |
_version_ | 1797097690415235072 |
---|---|
author | Foerster, J; Assael, Y; de Freitas, N; Whiteson, S |
author_facet | Foerster, J; Assael, Y; de Freitas, N; Whiteson, S |
author_sort | Foerster, J |
collection | OXFORD |
description | We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains. |
first_indexed | 2024-03-07T04:59:06Z |
format | Conference item |
id | oxford-uuid:d7a5974c-b7bc-469e-b76f-0631aad58f7b |
institution | University of Oxford |
last_indexed | 2024-03-07T04:59:06Z |
publishDate | 2016 |
publisher | Massachusetts Institute of Technology Press |
record_format | dspace |
title | Learning to Communicate with Deep Multi-Agent Reinforcement Learning |
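The description field above contrasts two training schemes: RIAL, which relies on deep Q-learning, and DIAL, which backpropagates error derivatives through a (noisy) communication channel during centralised learning while keeping execution decentralised. The snippet below is a minimal sketch of that noisy-channel idea in PyTorch; the `DRU` class name, the noise scale `sigma`, and the toy sender/receiver networks are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch of a differentiable communication channel in the spirit of
# DIAL: during centralised training the message is a real value perturbed
# with noise and squashed, so gradients can flow from the receiver's loss
# back into the sender's network; at decentralised execution the message is
# hard-discretised. Names, sizes, and sigma below are illustrative assumptions.
import torch
import torch.nn as nn


class DRU(nn.Module):
    def __init__(self, sigma: float = 2.0):
        super().__init__()
        self.sigma = sigma  # channel noise scale, used only while training

    def forward(self, message: torch.Tensor, training: bool) -> torch.Tensor:
        if training:
            # Regularise: add channel noise, keep the operation differentiable.
            noise = self.sigma * torch.randn_like(message)
            return torch.sigmoid(message + noise)
        # Discretise: hard threshold for decentralised execution.
        return (message > 0).float()


# Toy usage: a sender maps its observation to a message, a receiver consumes
# the message; the receiver's loss backpropagates through the channel into
# the sender's parameters, which is the core of centralised learning.
sender = nn.Linear(4, 1)      # observation -> real-valued message
receiver = nn.Linear(1, 2)    # message -> Q-values over 2 actions
dru = DRU(sigma=2.0)

obs = torch.randn(8, 4)                  # batch of sender observations
msg = dru(sender(obs), training=True)    # noisy but differentiable message
q_values = receiver(msg)
loss = q_values.pow(2).mean()            # stand-in for a TD loss
loss.backward()
assert sender.weight.grad is not None    # gradient reached the sender
```

At execution time the same unit simply thresholds the message into a discrete symbol, which is how learning can be centralised while execution stays decentralised, as the description states.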