A semi-independent policies training method with shared representation for heterogeneous multi-agents reinforcement learning
Humans do not learn everything from scratch; instead, they connect and associate incoming information with previously exchanged experience and known knowledge. Such an idea can be extended to cooperative multi-agent reinforcement learning, where it has achieved success on homogeneous agents by means of parameter s...
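The abstract's idea of a shared representation with semi-independent policies can be illustrated with a minimal sketch: every agent encodes its observation through one shared weight matrix, while each agent type keeps its own policy head over a different action space. All names, dimensions, and agent types below are hypothetical illustrations, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, HID_DIM = 4, 8
# Heterogeneous agents: different action-space sizes (illustrative names).
ACTION_DIMS = {"scout": 3, "carrier": 5}

# Shared representation: one encoder weight matrix used by every agent.
W_shared = rng.normal(size=(OBS_DIM, HID_DIM))

# Semi-independent policies: a separate output head per agent type.
W_heads = {a: rng.normal(size=(HID_DIM, n)) for a, n in ACTION_DIMS.items()}

def policy(agent, obs):
    """Return action probabilities for one agent."""
    h = np.tanh(obs @ W_shared)      # shared encoding
    logits = h @ W_heads[agent]      # agent-specific head
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

obs = rng.normal(size=OBS_DIM)
probs = {a: policy(a, obs) for a in ACTION_DIMS}
```

During training, gradients from all agents would update `W_shared` jointly, while each `W_heads[agent]` is updated only by its own agent, which is one plausible reading of "semi-independent policies with shared representation".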
| Main Authors: | Biao Zhao, Weiqiang Jin, Zhang Chen, Yucheng Guo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2023-06-01 |
| Series: | Frontiers in Neuroscience |
| Online Access: | https://www.frontiersin.org/articles/10.3389/fnins.2023.1201370/full |
Similar Items
- A Novel Method for Improving the Training Efficiency of Deep Multi-Agent Reinforcement Learning
  by: Yaozong Pan, et al.
  Published: (2019-01-01)
- Knowledge Reuse of Multi-Agent Reinforcement Learning in Cooperative Tasks
  by: Daming Shi, et al.
  Published: (2022-03-01)
- Multi-agent reinforcement learning for edge information sharing in vehicular networks
  by: Ruyan Wang, et al.
  Published: (2022-06-01)
- Unpredictable Sharing Economy: Barbaric Growth, Deceptive Representation and Divergent Regulation
  by: Xie Xinshui, et al.
  Published: (2017-11-01)
- Weighted Opinion Sharing Model for Cutting Link and Changing Information among Agents as Dynamic Environment
  by: Fumito Uwano, et al.
  Published: (2018-07-01)