Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning


Bibliographic Details
Main Authors: Behzad Ghazanfari, Fatemeh Afghah, Matthew E. Taylor
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/8957114/
_version_ 1818664054427746304
author Behzad Ghazanfari
Fatemeh Afghah
Matthew E. Taylor
author_facet Behzad Ghazanfari
Fatemeh Afghah
Matthew E. Taylor
author_sort Behzad Ghazanfari
collection DOAJ
description Reinforcement learning (RL) techniques, while often powerful, can suffer from slow learning speeds, particularly in high-dimensional spaces or in environments with sparse rewards. The decomposition of tasks into a hierarchical structure holds the potential to significantly speed up learning, generalization, and transfer learning. However, current task decomposition techniques often cannot extract hierarchical task structures without relying on high-level knowledge provided by an expert (e.g., using dynamic Bayesian networks (DBNs) in factored Markov decision processes), which is not necessarily available in autonomous systems. In this paper, we propose a novel method based on Sequential Association Rule Mining that can extract the Hierarchical Structure of Tasks in Reinforcement Learning (SARM-HSTRL) in an autonomous manner for both Markov decision processes (MDPs) and factored MDPs. The proposed method leverages association rule mining to discover the causal and temporal relationships among states in different trajectories and extracts a task hierarchy that captures these relationships among sub-goals as termination conditions of different sub-tasks. We prove that the extracted hierarchical policy offers a hierarchically optimal policy in MDPs and factored MDPs. Notably, SARM-HSTRL extracts this hierarchically optimal policy without requiring dynamic Bayesian networks, in scenarios with a single task's trajectories as well as with multiple tasks' trajectories. Furthermore, we show theoretically and empirically that the extracted hierarchical task structure is consistent with trajectories and provides the most efficient, reliable, and compact structure under appropriate assumptions. The numerical results compare the performance of the proposed SARM-HSTRL method with conventional HRL algorithms in terms of the accuracy in detecting sub-goals, the validity of the extracted hierarchies, and the speed of learning in several testbeds. The key capabilities of SARM-HSTRL, including handling multiple tasks and autonomous hierarchical task extraction, can lead to the application of this HRL method in reusing, transferring, and generalizing knowledge across different domains.
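The description above centers on one algorithmic idea: mining frequent ordered state patterns across successful trajectories, so that states which recur in a consistent order become candidate sub-goals (termination conditions of sub-tasks). A minimal sketch of that idea in Python, assuming a simple support-threshold formulation; the function name, `min_support` parameter, and example trajectories are illustrative assumptions and this is not the authors' SARM-HSTRL implementation:

```python
from itertools import combinations
from collections import Counter

def mine_subgoal_sequences(trajectories, min_support=0.8, max_len=3):
    """Toy sequential pattern miner: count ordered (not necessarily
    contiguous) state subsequences across trajectories and keep those
    whose support -- the fraction of trajectories containing them in
    order -- meets min_support. Frequent sequences are candidate
    sub-goal chains for a task hierarchy."""
    counts = Counter()
    for traj in trajectories:
        # Deduplicate states while keeping first-visit order, so each
        # subsequence is counted at most once per trajectory.
        ordered = list(dict.fromkeys(traj))
        for length in range(1, max_len + 1):
            for sub in combinations(ordered, length):
                counts[sub] += 1
    n = len(trajectories)
    return {seq: c / n for seq, c in counts.items() if c / n >= min_support}

# Hypothetical trajectories from a key-door gridworld: states visited
# on the way to the goal differ, but "key" always precedes "door",
# which always precedes "goal".
trajectories = [
    ["s0", "key", "s3", "door", "goal"],
    ["s1", "key", "door", "s4", "goal"],
    ["key", "s2", "door", "goal"],
]
rules = mine_subgoal_sequences(trajectories, min_support=1.0)
# ("key", "door", "goal") has support 1.0: a candidate sub-goal chain.
```

A real implementation would operate on factored state variables and prune redundant patterns, but the support-threshold filter above illustrates how temporal regularities in trajectories can surface sub-goals without expert-provided structure.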
first_indexed 2024-12-17T05:26:38Z
format Article
id doaj.art-c710129a2f0c4075844e14b00106e94c
institution Directory Open Access Journal
issn 2169-3536
language English
last_indexed 2024-12-17T05:26:38Z
publishDate 2020-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-c710129a2f0c4075844e14b00106e94c (2022-12-21T22:01:51Z)
IEEE Access, ISSN 2169-3536, 2020-01-01, vol. 8, pp. 11782-11799
DOI: 10.1109/ACCESS.2020.2965930 (IEEE document 8957114)
Title: Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning
Authors:
Behzad Ghazanfari (https://orcid.org/0000-0003-3004-0823), School of Informatics, Computing, and Cyber Security, Northern Arizona University, Flagstaff, AZ, USA
Fatemeh Afghah (https://orcid.org/0000-0002-2315-1173), School of Informatics, Computing, and Cyber Security, Northern Arizona University, Flagstaff, AZ, USA
Matthew E. Taylor (https://orcid.org/0000-0001-8946-0211), School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA, USA
Online access: https://ieeexplore.ieee.org/document/8957114/
Subjects: Association rule mining; extracting task structure; hierarchical reinforcement learning
spellingShingle Behzad Ghazanfari
Fatemeh Afghah
Matthew E. Taylor
Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning
IEEE Access
Association rule mining
extracting task structure
hierarchical reinforcement learning
title Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning
title_full Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning
title_fullStr Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning
title_full_unstemmed Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning
title_short Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning
title_sort sequential association rule mining for autonomously extracting hierarchical task structures in reinforcement learning
topic Association rule mining
extracting task structure
hierarchical reinforcement learning
url https://ieeexplore.ieee.org/document/8957114/
work_keys_str_mv AT behzadghazanfari sequentialassociationruleminingforautonomouslyextractinghierarchicaltaskstructuresinreinforcementlearning
AT fatemehafghah sequentialassociationruleminingforautonomouslyextractinghierarchicaltaskstructuresinreinforcementlearning
AT matthewetaylor sequentialassociationruleminingforautonomouslyextractinghierarchicaltaskstructuresinreinforcementlearning