A Novel and Efficient Influence-Seeking Exploration in Deep Multiagent Reinforcement Learning

Although recent years have witnessed notable success in cooperative settings in multi-agent reinforcement learning (MARL), efficient exploration remains challenging, primarily due to the complex dynamics of inter-agent interactions that constitute high-dimensional action spaces. For efficient exploration, it is necessary to quantify influences that represent interactions among agents and use them to obtain more information about the complexity of multi-agent systems. In this paper, we propose a novel influence-seeking exploration (ISE) scheme, which encourages agents to preferentially explore action spaces significantly influenced by others and thus helps speed up learning. To measure the influence of other agents on action selection, we use the variance of joint action-values over the different action sets of the other agents, obtained by an estimation technique to lessen computation overhead. To this end, we first present an analytical approach inspired by the concept of approximated variance propagation and then apply it to an exploration scheme. We evaluate the proposed exploration method on a set of StarCraft II micromanagement tasks as well as modified predator-prey tasks. Compared to state-of-the-art methods, the proposed method achieved performance improvements of approximately 10% on StarCraft II micromanagement and 50% on modified predator-prey tasks.
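
To make the idea concrete, the following is a minimal sketch of an influence-seeking action score of the kind the abstract describes: for each of an agent's actions, the variance of the joint action-value across the other agents' possible actions serves as an exploration bonus. All names and shapes are illustrative assumptions, and the brute-force enumeration of joint actions is exactly the cost the paper's estimation technique is said to avoid; this is not the authors' implementation.

import numpy as np

def influence_bonus(joint_q, agent_i):
    # joint_q holds Q_tot for every joint action, shape (A_1, ..., A_n).
    # Returns one influence score per action of agent i: the variance of
    # Q_tot across all joint actions of the other agents.
    q = np.moveaxis(joint_q, agent_i, 0)   # agent i's action axis first
    q = q.reshape(q.shape[0], -1)          # flatten the other agents' axes
    return q.var(axis=1)

def select_action(joint_q, agent_i, beta=0.1):
    # Greedy utility per own action (max over the others) plus the bonus:
    # actions whose value depends strongly on teammates get explored more.
    q = np.moveaxis(joint_q, agent_i, 0).reshape(joint_q.shape[agent_i], -1)
    return int(np.argmax(q.max(axis=1) + beta * influence_bonus(joint_q, agent_i)))

# Example: 3 agents with 4 actions each, a random joint action-value tensor.
rng = np.random.default_rng(0)
joint_q = rng.normal(size=(4, 4, 4))
print(select_action(joint_q, agent_i=1))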

Bibliographic Details
Main Authors: Byunghyun Yoo, Devarani Devi Ningombam, Sungwon Yi, Hyun Woo Kim, Euisok Chung, Ran Han, Hwa Jeon Song
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Subjects: Multi-agent systems, reinforcement learning, deep learning
Online Access: https://ieeexplore.ieee.org/document/9764683/
author Byunghyun Yoo
Devarani Devi Ningombam
Sungwon Yi
Hyun Woo Kim
Euisok Chung
Ran Han
Hwa Jeon Song
collection DOAJ
description Although recent years have witnessed notable success in cooperative settings in multi-agent reinforcement learning (MARL), efficient exploration remains challenging, primarily due to the complex dynamics of inter-agent interactions that constitute high-dimensional action spaces. For efficient exploration, it is necessary to quantify influences that represent interactions among agents and use them to obtain more information about the complexity of multi-agent systems. In this paper, we propose a novel influence-seeking exploration (ISE) scheme, which encourages agents to preferentially explore action spaces significantly influenced by others and thus helps speed up learning. To measure the influence of other agents on action selection, we use the variance of joint action-values over the different action sets of the other agents, obtained by an estimation technique to lessen computation overhead. To this end, we first present an analytical approach inspired by the concept of approximated variance propagation and then apply it to an exploration scheme. We evaluate the proposed exploration method on a set of StarCraft II micromanagement tasks as well as modified predator-prey tasks. Compared to state-of-the-art methods, the proposed method achieved performance improvements of approximately 10% on StarCraft II micromanagement and 50% on modified predator-prey tasks.
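
The description credits an approximated variance-propagation technique for keeping the influence measurement cheap. Below is a minimal sketch of first-order moment propagation through a single linear layer, assuming independent inputs; the layer shape, names, and numbers are hypothetical and do not come from the paper.

import numpy as np

def propagate_variance(W, b, mean_in, var_in):
    # For y = W x + b with independent inputs x_i:
    #   E[y] = W @ E[x] + b,   Var[y_j] = sum_i W[j, i]**2 * Var[x_i].
    return W @ mean_in + b, (W ** 2) @ var_in

# Example: uncertainty injected by varying the other agents' actions enters
# as input variance and reaches a hypothetical joint action-value head
# without enumerating every joint action.
rng = np.random.default_rng(0)
W = rng.normal(size=(1, 8))    # hypothetical Q-head weights
b = np.zeros(1)
mean_in = rng.normal(size=8)   # mean input features
var_in = np.full(8, 0.25)      # variance from varying other agents' actions
mean_q, var_q = propagate_variance(W, b, mean_in, var_in)
print(mean_q, var_q)           # var_q serves as the influence estimate
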
format Article
id doaj.art-2c52b19cd0694122b162961de7740579
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2022-01-01
publisher IEEE
series IEEE Access
spelling IEEE Access, vol. 10, pp. 47741-47753, published 2022-01-01. DOI: 10.1109/ACCESS.2022.3171053. IEEE document 9764683.
Author affiliations: Byunghyun Yoo (ORCID: 0000-0003-0857-5565), Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea; Devarani Devi Ningombam, Department of Computer Science and Engineering, GITAM University, Visakhapatnam, India; Sungwon Yi, Hyun Woo Kim, Euisok Chung, and Ran Han, ETRI, Daejeon, South Korea; Hwa Jeon Song (ORCID: 0000-0002-8216-4812), ETRI, Daejeon, South Korea.
title A Novel and Efficient Influence-Seeking Exploration in Deep Multiagent Reinforcement Learning
topic Multi-agent systems
reinforcement learning
deep learning
url https://ieeexplore.ieee.org/document/9764683/