Multi-Agent Reinforcement Learning via Adaptive Kalman Temporal Difference and Successor Representation
The development of distributed Multi-Agent Reinforcement Learning (MARL) algorithms has attracted increasing interest in recent years. Generally speaking, conventional Model-Based (MB) or Model-Free (MF) RL algorithms are not directly applicable to MARL problems because they rely on a fixed reward model for learning the underlying value function. While Deep Neural Network (DNN)-based solutions perform well, they remain prone to overfitting, high sensitivity to parameter selection, and sample inefficiency. This paper introduces an adaptive Kalman Filter (KF)-based framework as an efficient alternative that addresses these problems by capitalizing on unique characteristics of the KF, such as uncertainty modeling and online second-order learning. More specifically, the paper proposes the Multi-Agent Adaptive Kalman Temporal Difference (MAK-TD) framework and its Successor Representation-based variant, referred to as MAK-SR. The proposed MAK-TD/SR frameworks account for the continuous nature of the action space associated with high-dimensional multi-agent environments and exploit Kalman Temporal Difference (KTD) to address parameter uncertainty. The frameworks are evaluated through several experiments implemented on the OpenAI Gym MARL benchmarks, using different numbers of agents in cooperative, competitive, and mixed (cooperative-competitive) scenarios. The experimental results illustrate the superior performance of the proposed MAK-TD/SR frameworks compared to their state-of-the-art counterparts.
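The abstract's central mechanism is treating the value-function parameters as the hidden state of a Kalman filter, so that each TD observation both corrects the parameter estimate and updates its uncertainty. The sketch below is not the paper's MAK-TD algorithm (which adds Multiple Model Adaptive Estimation of the noise statistics and multi-agent coordination); it is a generic linear KTD update, where the feature map `phi`, the SARSA-style observation model, and the noise variances are illustrative assumptions.

```python
# Minimal sketch of a linear Kalman Temporal Difference (KTD) update,
# assuming Q(s, a) ~ phi(s, a)^T theta with a linear feature map.
# Class name, observation model, and noise settings are assumptions,
# not the exact MAK-TD formulation from the paper.
import numpy as np

class LinearKTD:
    def __init__(self, n_features, gamma=0.99, process_var=1e-4, obs_var=1.0):
        self.theta = np.zeros(n_features)          # parameter mean
        self.P = np.eye(n_features)                # parameter covariance (uncertainty)
        self.gamma = gamma
        self.Q = process_var * np.eye(n_features)  # process-noise covariance
        self.R = obs_var                           # observation-noise variance

    def update(self, phi_t, phi_next, reward, done):
        # Observation model: r ~ (phi_t - gamma * phi_next)^T theta,
        # i.e., the reward is a noisy linear measurement of the parameters.
        h = phi_t - (0.0 if done else self.gamma) * phi_next
        # Prediction step: parameters evolve as a random walk.
        P_pred = self.P + self.Q
        # Innovation (the TD error) and its variance.
        innovation = reward - h @ self.theta
        S = h @ P_pred @ h + self.R
        # Correction step: the Kalman gain scales the update by uncertainty.
        K = P_pred @ h / S
        self.theta = self.theta + K * innovation
        self.P = P_pred - np.outer(K, h) @ P_pred
        return innovation
```

The covariance `P` is what the abstract refers to as uncertainty modeling: it shrinks along well-observed feature directions and can be used to drive exploration, which is the practical payoff of second-order learning over a scalar learning rate.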
Main Authors: | Mohammad Salimibeni, Arash Mohammadi, Parvin Malekzadeh, Konstantinos N. Plataniotis |
---|---|
Affiliations: | Concordia Institute for Information System Engineering, Concordia University, Montreal, QC H3G 1M8, Canada; Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-02-01 |
Series: | Sensors |
ISSN: | 1424-8220 |
DOI: | 10.3390/s22041393 |
Subjects: | Kalman Temporal Difference; Multiple Model Adaptive Estimation; Multi-Agent Reinforcement Learning; Successor Representation |
Online Access: | https://www.mdpi.com/1424-8220/22/4/1393 |
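The abstract also names a Successor Representation (SR) variant, MAK-SR. The SR factors the value function into predicted state occupancies and a reward model, which is what lets the method cope with changing rewards. As a point of reference, here is a minimal tabular SR TD update; the paper learns the SR via Kalman-filter updates rather than the fixed learning rate assumed below.

```python
# Minimal tabular Successor Representation (SR) TD update, shown to
# illustrate the SR idea behind MAK-SR. The fixed learning rate and
# state-reward model are simplifying assumptions; the paper replaces
# this plain TD rule with a KF-based update.
import numpy as np

n_states, gamma, alpha = 5, 0.95, 0.1
M = np.eye(n_states)    # successor matrix: expected discounted state occupancy
w = np.zeros(n_states)  # reward weights, so that V = M @ w

def sr_td_update(s, r, s_next, done):
    # One-hot indicator of the current state.
    one_hot = np.eye(n_states)[s]
    # TD target for the SR row of state s.
    target = one_hot + (0.0 if done else gamma) * M[s_next]
    M[s] += alpha * (target - M[s])   # update occupancy predictions
    w[s] += alpha * (r - w[s])        # update the reward model
    return M @ w                      # current value estimates
```

Because the reward model `w` is learned separately from the occupancy predictions `M`, a change in the reward function only invalidates `w`, not the learned dynamics, which is the property the abstract contrasts against fixed-reward MB/MF methods.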