Modeling and Planning with Macro-Actions in Decentralized POMDPs

© 2019 AI Access Foundation. All rights reserved. Decentralized partially observable Markov decision processes (Dec-POMDPs) are general models for decentralized multi-agent decision making under uncertainty. However, they typically model a problem at a low level of granularity, where each agent’s actions are primitive operations lasting exactly one time step. We address the case where each agent has macro-actions: temporally extended actions that may require different amounts of time to execute. We model macro-actions as options in a Dec-POMDP, focusing on actions that depend only on information directly available to the agent during execution. Therefore, we model systems where coordination decisions only occur at the level of deciding which macro-actions to execute. The core technical difficulty in this setting is that the options chosen by each agent no longer terminate at the same time. We extend three leading Dec-POMDP algorithms for policy generation to the macro-action case, and demonstrate their effectiveness in both standard benchmarks and a multi-robot coordination problem. The results show that our new algorithms retain agent coordination while allowing high-quality solutions to be generated for significantly longer horizons and larger state-spaces than previous Dec-POMDP methods. Furthermore, in the multi-robot domain, we show that, in contrast to most existing methods that are specialized to a particular problem class, our approach can synthesize control policies that exploit opportunities for coordination while balancing uncertainty, sensor information, and information about other agents.

Bibliographic Details
Main Authors: Amato, Christopher; Konidaris, George; Kaelbling, Leslie P.; How, Jonathan P.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language: English
Published: Journal of Artificial Intelligence Research, AI Access Foundation, 2019 (deposited in the MIT institutional repository, 2021)
DOI: 10.1613/JAIR.1.11418
Rights: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Online Access: https://hdl.handle.net/1721.1/132314
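
For readers unfamiliar with the formalism named in the abstract, the following is a minimal sketch of the Dec-POMDP tuple and of macro-actions modeled as options, written in the conventional notation of the Dec-POMDP and options literature; the article's own notation and definitions may differ.

% Conventional Dec-POMDP tuple (notation assumed from the general
% literature, not taken verbatim from this article).
\[
\langle I,\, S,\, \{A_i\},\, T,\, R,\, \{\Omega_i\},\, O,\, h \rangle
\]
% I: set of agents; S: set of states; A_i: primitive actions of agent i;
% T(s' | s, a): joint transition model; R(s, a): shared reward function;
% \Omega_i: observations of agent i; O(o | s', a): joint observation model;
% h: planning horizon.
%
% A macro-action for agent i, modeled as an option over locally available
% information, can be written as a triple
\[
m_i = \langle \beta_{m_i},\, \mathcal{I}_{m_i},\, \pi_{m_i} \rangle
\]
% \beta_{m_i}: termination condition over agent i's local history;
% \mathcal{I}_{m_i}: set of local histories from which the option may be initiated;
% \pi_{m_i}: the option's low-level policy over primitive actions.

Because each agent's options terminate independently, a planner in this setting must choose each agent's next macro-action asynchronously rather than synchronizing decisions at every primitive time step, which is the core difficulty the abstract identifies.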