Learning State and Action Abstractions for Effective and Efficient Planning


Bibliographic Details
Main Author: Chitnis, Rohan
Other Authors: Kaelbling, Leslie P.
Format: Thesis
Published: Massachusetts Institute of Technology 2022
Online Access: https://hdl.handle.net/1721.1/145150
author Chitnis, Rohan
author2 Kaelbling, Leslie P.
author_facet Kaelbling, Leslie P.
Chitnis, Rohan
author_sort Chitnis, Rohan
collection MIT
description An autonomous agent should make good decisions quickly. These two considerations --- effectiveness and efficiency --- are especially important, and often competing, when an agent plans to make decisions sequentially in long-horizon tasks. Unfortunately, planning directly in the state and action spaces of a task is intractable for many tasks of interest. Abstractions offer a mechanism for overcoming this intractability, allowing the agent to reason at a higher level about the most salient aspects of a task. In this thesis, we develop novel frameworks for learning state and action abstractions that are optimized for both effective and efficient planning. Most generally, state and action abstractions are arbitrary transformations of the state and action spaces of the given planning problem; we focus on task-specific abstractions that leverage the structure of a given task (or family of tasks) to make planning efficient. Throughout the chapters, we show how to learn neuro-symbolic abstractions for bilevel planning; present a method for learning to generate context-specific abstractions of Markov decision processes; formalize and give a tractable algorithm for reasoning efficiently about relevant exogenous processes in a Markov decision process; and introduce a powerful and general mechanism for planning in large problem instances containing many objects. We demonstrate across both classical and robotics planning tasks, using a wide variety of planners, that the methods we present optimize a tradeoff between planning effectively and planning efficiently.
first_indexed 2024-09-23T09:41:53Z
format Thesis
id mit-1721.1/145150
institution Massachusetts Institute of Technology
last_indexed 2024-09-23T09:41:53Z
publishDate 2022
publisher Massachusetts Institute of Technology
record_format dspace
spelling mit-1721.1/1451502022-08-30T03:54:40Z Learning State and Action Abstractions for Effective and Efficient Planning Chitnis, Rohan Kaelbling, Leslie P. Lozano-Pérez, Tomás Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science An autonomous agent should make good decisions quickly. These two considerations --- effectiveness and efficiency --- are especially important, and often competing, when an agent plans to make decisions sequentially in long-horizon tasks. Unfortunately, planning directly in the state and action spaces of a task is intractable for many tasks of interest. Abstractions offer a mechanism for overcoming this intractability, allowing the agent to reason at a higher level about the most salient aspects of a task. In this thesis, we develop novel frameworks for learning state and action abstractions that are optimized for both effective and efficient planning. Most generally, state and action abstractions are arbitrary transformations of the state and action spaces of the given planning problem; we focus on task-specific abstractions that leverage the structure of a given task (or family of tasks) to make planning efficient. Throughout the chapters, we show how to learn neuro-symbolic abstractions for bilevel planning; present a method for learning to generate context-specific abstractions of Markov decision processes; formalize and give a tractable algorithm for reasoning efficiently about relevant exogenous processes in a Markov decision process; and introduce a powerful and general mechanism for planning in large problem instances containing many objects. We demonstrate across both classical and robotics planning tasks, using a wide variety of planners, that the methods we present optimize a tradeoff between planning effectively and planning efficiently. Ph.D. 
2022-08-29T16:36:34Z 2022-08-29T16:36:34Z 2022-05 2022-06-21T19:15:14.269Z Thesis https://hdl.handle.net/1721.1/145150 In Copyright - Educational Use Permitted Copyright MIT http://rightsstatements.org/page/InC-EDU/1.0/ application/pdf Massachusetts Institute of Technology
spellingShingle Chitnis, Rohan
Learning State and Action Abstractions for Effective and Efficient Planning
title Learning State and Action Abstractions for Effective and Efficient Planning
title_full Learning State and Action Abstractions for Effective and Efficient Planning
title_fullStr Learning State and Action Abstractions for Effective and Efficient Planning
title_full_unstemmed Learning State and Action Abstractions for Effective and Efficient Planning
title_short Learning State and Action Abstractions for Effective and Efficient Planning
title_sort learning state and action abstractions for effective and efficient planning
url https://hdl.handle.net/1721.1/145150
work_keys_str_mv AT chitnisrohan learningstateandactionabstractionsforeffectiveandefficientplanning