Efficient Planning under Uncertainty with Macro-actions

Bibliographic Details
Main Authors: He, Ruijie; Brunskill, Emma; Roy, Nicholas
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: AI Access Foundation, 2011
Online Access: http://hdl.handle.net/1721.1/64741
https://orcid.org/0000-0002-8293-0492
Description
Abstract: Deciding how to act in partially observable environments remains an active area of research. Identifying good sequences of decisions is particularly challenging when good control performance requires planning multiple steps into the future in domains with many states. Towards addressing this challenge, we present an online, forward-search algorithm called the Posterior Belief Distribution (PBD). PBD leverages a novel method for calculating the posterior distribution over beliefs that result after a sequence of actions is taken, given the set of observation sequences that could be received during this process. This method allows us to efficiently evaluate the expected reward of a sequence of primitive actions, which we refer to as a macro-action. We present a formal analysis of our approach and examine its performance in two very large simulation domains: scientific exploration and target monitoring. We also demonstrate our algorithm controlling a real robotic helicopter in a target monitoring experiment, which suggests that our approach has practical potential for planning in real-world, large partially observable domains where a multi-step lookahead is required to achieve good performance.
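The abstract describes the core idea only in prose: after a macro-action, the set of posterior beliefs that could result is summarized as a distribution and used to evaluate expected reward without enumerating observation sequences. The sketch below is a minimal illustration of that idea under a linear-Gaussian belief assumption (Kalman-filter beliefs); the model matrices, function name, and toy example are hypothetical placeholders, not code from the paper.

```python
# Illustrative sketch (not the authors' implementation): with linear-Gaussian
# dynamics and observations, the posterior belief covariance after a macro-action
# does not depend on which observations are received, so the distribution over
# posterior beliefs reduces to a fixed covariance plus a Gaussian over the mean.
import numpy as np

def posterior_belief_distribution(mu0, P0, controls, A, B, C, Q, R):
    """Propagate a Gaussian belief N(mu0, P0) through a macro-action (control list).

    Returns:
      mean_of_mean : expected posterior mean after the macro-action
      cov_of_mean  : covariance of the posterior mean, i.e. the spread induced
                     by the as-yet-unseen observation sequence
      P            : posterior belief covariance (identical for every
                     observation sequence, which is what makes the
                     macro-action evaluation cheap)
    """
    mean_of_mean = mu0.copy()
    cov_of_mean = np.zeros_like(P0)
    P = P0.copy()
    for u in controls:
        # Prediction step: deterministic given the control input.
        mean_of_mean = A @ mean_of_mean + B @ u
        cov_of_mean = A @ cov_of_mean @ A.T
        P = A @ P @ A.T + Q
        # Measurement update: the covariance shrinks deterministically, while the
        # unknown observation shifts the mean by K @ innovation, whose spread
        # under the predictive distribution is K @ S @ K.T.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        cov_of_mean = cov_of_mean + K @ S @ K.T
        P = (np.eye(P.shape[0]) - K @ C) @ P
    return mean_of_mean, cov_of_mean, P

# Toy usage: 2-state random-walk model, noisy sensor on the first state,
# and a macro-action of three identical "move right" primitives.
A = np.eye(2); B = np.eye(2); C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2); R = np.array([[0.5]])
mu0, P0 = np.zeros(2), np.eye(2)
macro = [np.array([1.0, 0.0])] * 3
m, Vm, P = posterior_belief_distribution(mu0, P0, macro, A, B, C, Q, R)
```

The property exploited here is that only the posterior mean needs to be treated as a random quantity; expected reward over the macro-action can then be estimated from the returned Gaussian summary rather than by branching on every possible observation sequence.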