On Reinforcement Learning for Turn-based Zero-sum Markov Games

© 2020 Owner/Author. We consider the problem of finding a Nash equilibrium for two-player turn-based zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm, we develop a Reinforcement Learning-based approach. Specifically, we propose the Explore-Improve-Supervise (EIS) method, which combines "exploration", "policy improvement" and "supervised learning" to find the value function and policy associated with the Nash equilibrium. We identify sufficient conditions for convergence and correctness of such an approach. For a concrete instance of EIS where a random policy is used for "exploration", Monte-Carlo Tree Search is used for "policy improvement" and Nearest Neighbors is used for "supervised learning", we establish that this method finds an $\varepsilon$-approximate value function of the Nash equilibrium in $\widetilde{O}(\varepsilon^{-(d+4)})$ steps when the underlying state space of the game is continuous and $d$-dimensional. This is nearly optimal, as we establish a lower bound of $\widetilde{\Omega}(\varepsilon^{-(d+2)})$ for any policy.

Bibliographic Details
Main Authors: Shah, D, Somani, V, Xie, Q, Xu, Z
Other Authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language:English
Published: ACM 2021
Online Access:https://hdl.handle.net/1721.1/137142
_version_ 1811074775940857856
author Shah, D
Somani, V
Xie, Q
Xu, Z
author2 Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
author_facet Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Shah, D
Somani, V
Xie, Q
Xu, Z
author_sort Shah, D
collection MIT
description © 2020 Owner/Author. We consider the problem of finding a Nash equilibrium for two-player turn-based zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm, we develop a Reinforcement Learning-based approach. Specifically, we propose the Explore-Improve-Supervise (EIS) method, which combines "exploration", "policy improvement" and "supervised learning" to find the value function and policy associated with the Nash equilibrium. We identify sufficient conditions for convergence and correctness of such an approach. For a concrete instance of EIS where a random policy is used for "exploration", Monte-Carlo Tree Search is used for "policy improvement" and Nearest Neighbors is used for "supervised learning", we establish that this method finds an $\varepsilon$-approximate value function of the Nash equilibrium in $\widetilde{O}(\varepsilon^{-(d+4)})$ steps when the underlying state space of the game is continuous and $d$-dimensional. This is nearly optimal, as we establish a lower bound of $\widetilde{\Omega}(\varepsilon^{-(d+2)})$ for any policy.
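The EIS recipe in the description above (explore with some policy, improve the value estimate at the visited states, then fit a supervised regressor to the improved targets) can be illustrated with a short Python sketch. This is a minimal toy, not the authors' implementation: the 1-D game, the reward, the one-step minimax lookahead standing in for full Monte-Carlo Tree Search, and every name and parameter below are assumptions made purely for illustration.

import numpy as np

# Toy turn-based zero-sum game (assumed for illustration): 1-D continuous
# state in [-1, 1]; the maximizer (+1) and minimizer (-1) alternate turns,
# each choosing an action from ACTIONS; the per-step payoff is the next state.
ACTIONS = np.array([-0.1, 0.0, 0.1])
GAMMA = 0.9  # discount factor (assumed)

def step(state, action):
    # Deterministic toy transition: move and clip to [-1, 1].
    return float(np.clip(state + action, -1.0, 1.0))

def nn_value(query, states, values, k=3):
    # "Supervised learning" component: k-nearest-neighbor value estimate.
    if not states:
        return 0.0
    dists = np.abs(np.asarray(states) - query)
    idx = np.argsort(dists)[:k]
    return float(np.mean(np.asarray(values)[idx]))

def improved_value(state, player, data):
    # "Policy improvement" component: one-step minimax lookahead against the
    # opponent's current value estimate (a stand-in for Monte-Carlo Tree Search).
    nxt = [step(state, a) for a in ACTIONS]
    lookahead = [n + GAMMA * nn_value(n, *data[-player]) for n in nxt]
    return max(lookahead) if player == +1 else min(lookahead)

# EIS loop: "explore" states and turns with a random policy, "improve" the
# value at each sampled state, and "supervise" the nearest-neighbor regressor
# by appending the improved target to its training set.
rng = np.random.default_rng(0)
data = {+1: ([], []), -1: ([], [])}  # per-player (states, values) samples
for _ in range(200):
    s = float(rng.uniform(-1.0, 1.0))        # Explore: random state ...
    p = int(rng.choice([-1, 1]))             # ... and random player to move
    v = improved_value(s, p, data)           # Improve
    data[p][0].append(s)                     # Supervise
    data[p][1].append(v)

print("maximizer-to-move value near s = 0.0:", nn_value(0.0, *data[+1]))

The nearest-neighbor regressor plays the role of the "supervised learning" component in the paper's concrete EIS instance; the one-step lookahead only mimics the role of MCTS and carries none of the paper's $\widetilde{O}(\varepsilon^{-(d+4)})$ guarantee.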
first_indexed 2024-09-23T09:55:18Z
format Article
id mit-1721.1/137142
institution Massachusetts Institute of Technology
language English
last_indexed 2024-09-23T09:55:18Z
publishDate 2021
publisher ACM
record_format dspace
spelling mit-1721.1/137142 2023-04-07T19:46:41Z On Reinforcement Learning for Turn-based Zero-sum Markov Games Shah, D Somani, V Xie, Q Xu, Z Massachusetts Institute of Technology. Laboratory for Information and Decision Systems © 2020 Owner/Author. We consider the problem of finding a Nash equilibrium for two-player turn-based zero-sum games. Inspired by the AlphaGo Zero (AGZ) algorithm, we develop a Reinforcement Learning-based approach. Specifically, we propose the Explore-Improve-Supervise (EIS) method, which combines "exploration", "policy improvement" and "supervised learning" to find the value function and policy associated with the Nash equilibrium. We identify sufficient conditions for convergence and correctness of such an approach. For a concrete instance of EIS where a random policy is used for "exploration", Monte-Carlo Tree Search is used for "policy improvement" and Nearest Neighbors is used for "supervised learning", we establish that this method finds an $\varepsilon$-approximate value function of the Nash equilibrium in $\widetilde{O}(\varepsilon^{-(d+4)})$ steps when the underlying state space of the game is continuous and $d$-dimensional. This is nearly optimal, as we establish a lower bound of $\widetilde{\Omega}(\varepsilon^{-(d+2)})$ for any policy. 2021-11-02T17:41:39Z 2021-11-02T17:41:39Z 2020 2021-06-25T12:46:49Z Article http://purl.org/eprint/type/ConferencePaper https://hdl.handle.net/1721.1/137142 Shah, D, Somani, V, Xie, Q and Xu, Z. 2020. "On Reinforcement Learning for Turn-based Zero-sum Markov Games." FODS 2020 - Proceedings of the 2020 ACM-IMS Foundations of Data Science Conference. en 10.1145/3412815.3416888 FODS 2020 - Proceedings of the 2020 ACM-IMS Foundations of Data Science Conference Creative Commons Attribution 4.0 International license https://creativecommons.org/licenses/by/4.0/ application/pdf ACM ACM
spellingShingle Shah, D
Somani, V
Xie, Q
Xu, Z
On Reinforcement Learning for Turn-based Zero-sum Markov Games
title On Reinforcement Learning for Turn-based Zero-sum Markov Games
title_full On Reinforcement Learning for Turn-based Zero-sum Markov Games
title_fullStr On Reinforcement Learning for Turn-based Zero-sum Markov Games
title_full_unstemmed On Reinforcement Learning for Turn-based Zero-sum Markov Games
title_short On Reinforcement Learning for Turn-based Zero-sum Markov Games
title_sort on reinforcement learning for turn based zero sum markov games
url https://hdl.handle.net/1721.1/137142
work_keys_str_mv AT shahd onreinforcementlearningforturnbasedzerosummarkovgames
AT somaniv onreinforcementlearningforturnbasedzerosummarkovgames
AT xieq onreinforcementlearningforturnbasedzerosummarkovgames
AT xuz onreinforcementlearningforturnbasedzerosummarkovgames