A Novel Method for Improving the Training Efficiency of Deep Multi-Agent Reinforcement Learning


Bibliographic Details
Main Authors: Yaozong Pan, Haiyang Jiang, Haitao Yang, Jian Zhang
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8845580/
Description
Summary: Deep reinforcement learning (RL) holds considerable promise for addressing a variety of multi-agent problems in dynamic and complex environments. In multi-agent scenarios, most tasks require multiple agents to cooperate, and the number of agents has a negative impact on the training efficiency of reinforcement learning. To this end, we propose a novel method that uses the framework of centralized training with distributed execution and employs parameter sharing among homogeneous agents to replace part of the network-parameter computation during policy evolution. An asynchronous parameter sharing mechanism and a soft sharing mechanism are used to balance the exploration of individual agents against the consistency of the homogeneous agents' policies. We experimentally validated our approach in different types of multi-agent scenarios. The empirical results show that our method significantly improves training efficiency in collaborative, competitive, and mixed tasks without degrading performance.
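The abstract does not spell out the sharing mechanisms, but one plausible reading of "soft" and "asynchronous" parameter sharing among homogeneous agents is a periodic blend of each agent's parameters toward the group mean. The sketch below illustrates that idea only; the function name `soft_share`, the rate `tau`, and the interval are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def soft_share(params, tau=0.1):
    """Nudge each homogeneous agent's parameter vector toward the group mean.

    params: list of 1-D parameter vectors, one per agent.
    tau: sharing rate. tau=1 would make all agents identical (hard sharing);
         a small tau preserves per-agent exploration while keeping the
         homogeneous agents' policies roughly consistent.
    """
    mean = np.mean(params, axis=0)
    return [(1.0 - tau) * p + tau * mean for p in params]

# Three homogeneous agents whose parameters have drifted apart.
agents = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 2.0])]

# "Asynchronous" here means the soft update is applied only every few
# training steps rather than after every gradient update.
share_interval = 5
for step in range(1, 16):
    # ... each agent would take its own independent gradient step here ...
    if step % share_interval == 0:
        agents = soft_share(agents, tau=0.5)
```

With `tau=0.5`, each application halves the spread between agents while leaving the group mean unchanged, so repeated sharing pulls the homogeneous agents' policies together without forcing them to be identical between updates.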
ISSN:2169-3536