Universal Reinforcement Learning
We consider an agent interacting with an unmodeled environment. At each time step, the agent makes an observation, takes an action, and incurs a cost. Its actions can influence future observations and costs. The goal is to minimize the long-term average cost. We propose a novel algorithm, known as the active LZ algorithm, for optimal control based on ideas from the Lempel-Ziv scheme for universal data compression and prediction. We establish that, under the active LZ algorithm, if there exists an integer K such that the future is conditionally independent of the past given a window of K consecutive actions and observations, then the average cost converges to the optimum. Experimental results involving the game of Rock-Paper-Scissors illustrate the merits of the algorithm.
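The K-window condition in the abstract admits a natural formalization. The notation below is assumed for illustration, not quoted from the paper: write X_t for the observation and A_t for the action at time t, and let Z_t = (A_t, X_t) denote the action-observation pair.

```latex
% Assumed notation: X_t = observation, A_t = action, Z_t = (A_t, X_t).
% The K-window condition: the next observation depends on the full past
% only through the last K action-observation pairs and the current action.
\[
  \Pr\!\left(X_{t+1} \mid Z_1,\dots,Z_t,\,A_{t+1}\right)
  \;=\;
  \Pr\!\left(X_{t+1} \mid Z_{t-K+1},\dots,Z_t,\,A_{t+1}\right).
\]
```

To make the prediction-for-control idea concrete, here is a minimal sketch in Python. It is not the paper's active LZ algorithm: instead of growing a Lempel-Ziv parse tree of variable-depth contexts and running value iteration over it, it fixes a window of k past action-observation pairs, estimates next-observation probabilities by smoothed counting, and acts greedily against the one-step expected cost with epsilon-greedy exploration. All names (FixedWindowAgent, cost_fn, and so on) are hypothetical.

```python
import random
from collections import defaultdict

class FixedWindowAgent:
    """Hedged sketch (not the paper's algorithm): predict the next
    observation from the last k action-observation pairs by smoothed
    counting, then act greedily against the one-step expected cost."""

    def __init__(self, actions, observations, k=2, epsilon=0.1):
        self.actions = actions
        self.observations = observations
        self.k = k                    # window length, playing the role of K
        self.epsilon = epsilon        # exploration probability
        # counts[context][(action, next_obs)] = number of visits
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = []             # past (action, observation) pairs

    def _context(self):
        return tuple(self.history[-self.k:])

    def _predict(self, context, action):
        """Laplace-smoothed estimate of P(next_obs | context, action)."""
        table = self.counts[context]
        total = sum(table[(action, o)] for o in self.observations)
        n = len(self.observations)
        return {o: (table[(action, o)] + 1) / (total + n)
                for o in self.observations}

    def act(self, cost_fn):
        """Epsilon-greedy choice against estimated one-step expected cost;
        cost_fn(action, obs) is the cost of seeing obs after action."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        ctx = self._context()
        return min(self.actions,
                   key=lambda a: sum(p * cost_fn(a, o)
                                     for o, p in self._predict(ctx, a).items()))

    def update(self, action, observation):
        """Record the transition under the context that produced it."""
        self.counts[self._context()][(action, observation)] += 1
        self.history.append((action, observation))
```

A toy Rock-Paper-Scissors run against a cycling opponent (an assumed setup, not the paper's experiment) shows the predictor locking onto the pattern:

```python
# Assumed opponent: cycles R -> P -> S, so a short window predicts its move.
MOVES = ["R", "P", "S"]
BEATS = {"R": "S", "P": "R", "S": "P"}       # key beats value

def cost(my_move, opp_move):
    if BEATS[my_move] == opp_move:
        return 0.0                            # win
    if my_move == opp_move:
        return 0.5                            # tie
    return 1.0                                # loss

agent = FixedWindowAgent(MOVES, MOVES, k=1, epsilon=0.05)
total = 0.0
for t in range(3000):
    a = agent.act(cost)
    o = ["R", "P", "S"][t % 3]                # opponent's cyclic move
    total += cost(a, o)
    agent.update(a, o)
print("average cost:", total / 3000)          # settles well below 0.5
```

Note that minimizing the one-step expected cost is myopic; the paper's algorithm instead performs value iteration over its estimated dynamics, which matters when the agent's actions influence future observations. In this toy the opponent ignores the agent's play, so the myopic rule suffices.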
Main Authors: Farias, Vivek F.; Moallemi, Ciamac C.; Van Roy, Benjamin; Weissman, Tsachy
Other Authors: Sloan School of Management
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers, 2010
Journal: IEEE Transactions on Information Theory 56.5 (2010): 2441-2454
ISSN: 0018-9448
DOI: http://dx.doi.org/10.1109/TIT.2010.2043762
Subjects: value iteration; reinforcement learning; optimal control; dynamic programming; Lempel-Ziv; context tree
Funding: National Science Foundation (U.S.), MKIDS Program grant ECS-9985229; Benchmark Stanford Graduate Fellowship
Online Access: http://hdl.handle.net/1721.1/59294 ; https://orcid.org/0000-0002-5856-9246
Citation: Farias, V.F. et al. "Universal Reinforcement Learning." IEEE Transactions on Information Theory 56.5 (2010): 2441-2454. © 2010 IEEE.