Decentralized control of Partially Observable Markov Decision Processes using belief space macro-actions
The focus of this paper is on solving multi-robot planning problems in continuous spaces with partial observability. Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) are general models for multi-robot coordination problems, but representing and solving Dec-POMDPs is often in...
Main Authors: Omidshafiei, Shayegan; Aghamohammadi, Aliakbar; Amato, Christopher; How, Jonathan P.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Published: Institute of Electrical and Electronics Engineers (IEEE), 2018
Online Access: http://hdl.handle.net/1721.1/116391 ; https://orcid.org/0000-0003-0903-0137 ; https://orcid.org/0000-0002-6786-7384 ; https://orcid.org/0000-0001-8576-1930
Similar Items
- Decentralized Control of Partially Observable Markov Decision Processes Using Belief Space Macro-Actions, by: Omidshafiei, Shayegan, et al. Published: (2015)
- Decentralized control of multi-robot systems using partially observable Markov Decision Processes and belief space macro-actions, by: Omidshafiei, Shayegan. Published: (2016)
- Learning for multi-robot cooperation in partially observable stochastic environments with macro-actions, by: Amato, Christopher, et al. Published: (2018)
- Semantic-level decentralized multi-robot decision-making using probabilistic macro-observations, by: Amato, Christopher, et al. Published: (2018)
- Scalable accelerated decentralized multi-robot policy search in continuous observation spaces, by: Amato, Christopher, et al. Published: (2018)