Enhancing Efficiency in Hierarchical Reinforcement Learning through Topological-Sorted Potential Calculation

Bibliographic Details
Main Authors: Ziyun Zhou, Jingwei Shang, Yimang Li
Format: Article
Language: English
Published: MDPI AG 2023-09-01
Series: Electronics
Online Access: https://www.mdpi.com/2079-9292/12/17/3700
Summary: Hierarchical reinforcement learning (HRL) offers a hierarchical structure for organizing tasks, enabling agents to learn and make decisions autonomously in complex environments. However, traditional HRL approaches face limitations in handling complex tasks effectively. Reward machines, which specify high-level goals and associated rewards for sub-goals, have been introduced to address these limitations by facilitating the agent's understanding of, and reasoning about, the task hierarchy. In this paper, we propose a novel approach that enhances HRL performance through topologically sorted potential calculation for reward machines. By leveraging the topological structure of the task hierarchy, our method efficiently determines potentials for the different sub-goals. This topological sorting enables the agent to prioritize actions that lead toward higher-level goals, improving the learning process. To assess the efficacy of our approach, we conducted experiments in a grid-world environment built with OpenAI Gym. The results show that our proposed method outperforms both traditional HRL techniques and reward machine-based reinforcement learning approaches in learning efficiency and overall task performance.
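The core idea in the summary can be illustrated with a small sketch: treat the reward machine's sub-goal states as nodes of a DAG, topologically sort them, and assign each state a potential that reflects its remaining distance to the final goal, so that potential-based shaping rewards progress through the hierarchy. The function and variable names below are our own illustration under stated assumptions (an acyclic reward machine where every state can reach the goal), not the authors' implementation.

```python
from collections import deque

def topological_order(nodes, edges):
    """Kahn's algorithm: return the nodes of a DAG in topological order."""
    indeg = {u: 0 for u in nodes}
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(u for u in nodes if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("reward-machine graph must be acyclic")
    return order

def potentials(nodes, edges, goal):
    """Assign each sub-goal state a potential equal to the negated longest
    remaining path length to the goal, computed in one reverse pass over
    the topological order (assumes every state can reach the goal)."""
    order = topological_order(nodes, edges)
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    dist = {u: float("-inf") for u in nodes}  # steps remaining to goal
    dist[goal] = 0
    for u in reversed(order):
        for v in adj[u]:
            dist[u] = max(dist[u], dist[v] + 1)
    # Higher potential = closer to accomplishing the top-level goal.
    return {u: -dist[u] for u in nodes}

# Hypothetical reward machine: pick up a key, open a door, reach the goal,
# with a shortcut that skips the key.
nodes = ["start", "key", "door", "goal"]
edges = [("start", "key"), ("key", "door"),
         ("door", "goal"), ("start", "door")]
phi = potentials(nodes, edges, "goal")
# e.g. phi["start"] = -3, phi["door"] = -1, phi["goal"] = 0
```

With such potentials, a shaping term of the usual form gamma * phi(s') - phi(s) is positive exactly when a transition moves the agent to a sub-goal closer to the top-level goal, which is one way to realize the prioritization the summary describes.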
ISSN: 2079-9292