Formal contracts mitigate social dilemmas in multi-agent reinforcement learning
Multi-agent Reinforcement Learning (MARL) is a powerful tool for training autonomous agents acting independently in a common environment. However, it can lead to sub-optimal behavior when individual incentives and group incentives diverge. Humans are remarkably capable of solving these social dilemmas.
Main Authors: | Haupt, Andreas; Christoffersen, Phillip; Damani, Mehul; Hadfield-Menell, Dylan |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | Springer US, 2024 |
Online Access: | https://hdl.handle.net/1721.1/157416 |
---|---|
author | Haupt, Andreas Christoffersen, Phillip Damani, Mehul Hadfield-Menell, Dylan |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
author_sort | Haupt, Andreas |
collection | MIT |
description | Multi-agent Reinforcement Learning (MARL) is a powerful tool for training autonomous agents acting independently in a common environment. However, it can lead to sub-optimal behavior when individual incentives and group incentives diverge. Humans are remarkably capable of solving these social dilemmas. It is an open problem in MARL to replicate such cooperative behaviors in selfish agents. In this work, we draw upon the idea of formal contracting from economics to overcome diverging incentives between agents in MARL. We propose an augmentation to a Markov game where agents voluntarily agree to binding transfers of reward, under pre-specified conditions. Our contributions are theoretical and empirical. First, we show that this augmentation makes all subgame-perfect equilibria of all Fully Observable Markov Games exhibit socially optimal behavior, given a sufficiently rich space of contracts. Next, we show that for general contract spaces, and even under partial observability, richer contract spaces lead to higher welfare. Hence, contract space design solves an exploration-exploitation tradeoff, sidestepping incentive issues. We complement our theoretical analysis with experiments. Issues of exploration in the contracting augmentation are mitigated using a training methodology inspired by multi-objective reinforcement learning: Multi-Objective Contract Augmentation Learning. We test our methodology in static, single-move games, as well as dynamic domains that simulate traffic, pollution management, and common pool resource management. (An illustrative code sketch of the contract augmentation follows the record below.) |
format | Article |
id | mit-1721.1/157416 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2024 |
publisher | Springer US |
record_format | dspace |
date_issued | 2024-10-18
citation | Haupt, A., Christoffersen, P., Damani, M. et al. Formal contracts mitigate social dilemmas in multi-agent reinforcement learning. Auton Agent Multi-Agent Syst 38, 51 (2024).
doi | https://doi.org/10.1007/s10458-024-09682-5
journal | Autonomous Agents and Multi-Agent Systems
rights | Creative Commons Attribution (https://creativecommons.org/licenses/by/4.0/), The Author(s)
file_format | application/pdf
title | Formal contracts mitigate social dilemmas in multi-agent reinforcement learning |
url | https://hdl.handle.net/1721.1/157416 |
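The abstract above describes the paper's core mechanism: a Markov game is augmented with a pre-game stage in which agents can voluntarily accept a contract, i.e., a pre-specified condition paired with a binding transfer of reward that is applied whenever the condition holds. The Python sketch below is only a minimal illustration of that idea under assumed interfaces; the names (Contract, ContractWrapper, condition, transfer, accept_votes) are placeholders and are not taken from the paper or its code.

```python
# Minimal sketch of a contract-augmented Markov game (hypothetical interface,
# not the paper's implementation). A "contract" couples a pre-specified
# condition on each step with a binding transfer of reward between agents;
# it binds only if every agent opts in during a pre-game acceptance stage.

from dataclasses import dataclass
from typing import Any, Callable, Dict

AgentID = str


@dataclass
class Contract:
    # condition: evaluated on the joint actions taken and the resulting joint
    # observation; returns True when the transfer should be applied.
    condition: Callable[[Dict[AgentID, Any], Dict[AgentID, Any]], bool]
    # transfer: agent id -> signed reward adjustment; assumed here to sum to
    # zero so the contract redistributes reward rather than creating it.
    transfer: Dict[AgentID, float]


class ContractWrapper:
    """Two-stage protocol around a multi-agent environment with a dict-based
    step() interface: (1) agents vote to accept or reject the proposed
    contract; (2) the base game is played, with the transfer applied whenever
    the condition holds."""

    def __init__(self, env, contract: Contract):
        self.env = env
        self.contract = contract
        self.active = False

    def reset(self, accept_votes: Dict[AgentID, bool]):
        # The contract is binding only if it is unanimously accepted.
        self.active = all(accept_votes.values())
        return self.env.reset()

    def step(self, actions: Dict[AgentID, Any]):
        obs, rewards, dones, infos = self.env.step(actions)
        if self.active and self.contract.condition(actions, obs):
            # Apply the binding reward transfer on top of the base-game rewards.
            for agent, delta in self.contract.transfer.items():
                rewards[agent] = rewards.get(agent, 0.0) + delta
        return obs, rewards, dones, infos
```

For example, in a pollution-management domain of the kind mentioned in the abstract, the condition might fire when an agent takes a polluting action, and the transfer might move reward from that agent to the others; whether such a contract is accepted is left to the agents' learned policies.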