Learning State-Specific Action Masks for Reinforcement Learning

Full description
Efficient yet sufficient exploration remains a critical challenge in reinforcement learning (RL), especially for Markov Decision Processes (MDPs) with vast action spaces. Previous approaches have commonly projected the original action space into a latent space or employed environment-provided action masks to reduce the number of candidate actions. Nevertheless, these methods often lack interpretability or rely on expert knowledge. In this study, we introduce a method for automatically reducing the action space in environments with discrete action spaces while preserving interpretability. The proposed approach learns state-specific masks with a dual purpose: (1) eliminating actions with minimal influence on the MDP and (2) aggregating actions with identical behavioral consequences within the MDP. Specifically, we introduce a novel concept, Bisimulation Metrics on Actions by States (BMAS), to quantify the behavioral consequences of actions within the MDP, and we design a dedicated mask model that keeps the learned masks binary. Crucially, we present a practical procedure for training the mask model from transition data collected by any RL policy. The method is plug-and-play and compatible with any RL policy; to validate its effectiveness, we integrate it into two prominent RL algorithms, DQN and PPO. Experimental results on Maze, Atari, and μRTS2 show that the proposed approach substantially accelerates RL training and yields noteworthy performance improvements.
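
The listing below is a minimal sketch of how a learned state-specific binary mask could be applied at action-selection time, in the spirit of the DQN and PPO integrations mentioned in the abstract. The `q_net` and `mask_net` modules, the sigmoid thresholding, and the all-masked fallback are illustrative assumptions, not the authors' implementation, and the BMAS-based training of the mask model is not reproduced here.

```python
# Hedged sketch: applying a learned state-specific action mask during action selection.
# All module names and the binarization threshold are hypothetical.
import torch
import torch.nn as nn


class MaskedDQNPolicy(nn.Module):
    """Wraps a Q-network so that masked-out actions are never selected."""

    def __init__(self, q_net: nn.Module, mask_net: nn.Module):
        super().__init__()
        self.q_net = q_net        # state -> Q-values, shape (batch, n_actions)
        self.mask_net = mask_net  # state -> mask logits, shape (batch, n_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        q_values = self.q_net(state)
        # Hypothetical binarization of the mask model's output.
        mask = (torch.sigmoid(self.mask_net(state)) > 0.5).float()
        # Safety fallback: never mask out every action in a state.
        mask[mask.sum(dim=-1) == 0] = 1.0
        # Masked actions get -inf, so greedy / epsilon-greedy selection ignores them.
        return q_values.masked_fill(mask == 0, float("-inf"))


def masked_policy_logits(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """For a PPO-style actor: masked actions receive zero probability after softmax."""
    return logits.masked_fill(mask == 0, float("-inf"))
```

Because masked actions receive a Q-value of -inf (or zero probability after the softmax), exploration is confined to the reduced action set while the underlying RL algorithm is left unchanged, which is what makes this kind of masking plug-and-play.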

Bibliographic Details
Main Authors: Ziyi Wang, Xinran Li, Luoyang Sun, Haifeng Zhang, Hualin Liu, Jun Wang
Author Affiliations: Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (Z. Wang, X. Li, L. Sun, H. Zhang); Key Laboratory of Oil & Gas Business Chain Optimization, Petrochina Planning and Engineering Institute, CNPC, Beijing 100083, China (H. Liu); Computer Science, University College London, London WC1E 6BT, UK (J. Wang)
Format: Article
Language: English
Published: MDPI AG, 2024-01-01
Series: Algorithms, vol. 17, no. 2, article 60
ISSN: 1999-4893
DOI: 10.3390/a17020060
Subjects: reinforcement learning; exploration efficiency; space reduction
Collection: Directory of Open Access Journals (DOAJ)
Online Access: https://www.mdpi.com/1999-4893/17/2/60