Refined Continuous Control of DDPG Actors via Parametrised Activation

Continuous action spaces pose a serious challenge for reinforcement learning agents. While several off-policy reinforcement learning algorithms provide a universal solution to continuous control problems, the real challenge lies in the fact that different actuators feature different response functions due to wear and tear (in mechanical systems) and fatigue (in biomechanical systems). In this paper, we propose enhancing actor-critic reinforcement learning agents by parameterising the final layer in the actor network. This layer produces the actions and accommodates the behavioural discrepancy of different actuators under different load conditions during interaction with the environment. To achieve this, the actor is trained to learn the tuning parameter controlling the activation layer (e.g., Tanh and Sigmoid). The learned parameters are then used to create tailored activation functions for each actuator. We ran experiments on three OpenAI Gym environments, i.e., Pendulum-v0, LunarLanderContinuous-v2, and BipedalWalker-v2. Results showed an average increase in total episode reward of 23.15% and 33.80% in the LunarLanderContinuous-v2 and BipedalWalker-v2 environments, respectively. There was no apparent improvement in the Pendulum-v0 environment, but the proposed method produced a more stable actuation signal than the state-of-the-art method. The proposed method allows the reinforcement learning actor to produce more robust actions that accommodate the discrepancy in the actuators' response functions. This is particularly useful for real-life scenarios where actuators exhibit different response functions depending on the load and the interaction with the environment. It also simplifies the transfer learning problem: instead of retraining the entire policy every time an actuator is replaced, only the parameterised activation layers need fine-tuning. Finally, the proposed method would allow better accommodation of biological actuators (e.g., muscles) in biomechanical systems.

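The abstract describes the method only at a high level. Below is a minimal PyTorch sketch of the idea, assuming the tuning parameter is a learnable per-actuator slope on the final Tanh layer; the class names, network sizes, and this exact parametrisation are illustrative assumptions rather than the paper's implementation (which may, for instance, make the parameters state-dependent or use a different functional form).

    import torch
    import torch.nn as nn

    class ParametrisedTanh(nn.Module):
        """Output activation with one learnable slope per actuator.

        Each action dimension i is squashed by tanh(beta_i * x_i), so
        the activation shape can adapt to each actuator's response.
        Hypothetical sketch; not the paper's exact parametrisation.
        """

        def __init__(self, action_dim: int):
            super().__init__()
            # beta initialised to 1 recovers the standard Tanh layer.
            self.beta = nn.Parameter(torch.ones(action_dim))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.tanh(self.beta * x)

    class Actor(nn.Module):
        """DDPG-style actor whose final activation is parametrised."""

        def __init__(self, obs_dim: int, action_dim: int, hidden: int = 256):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim),
            )
            self.activation = ParametrisedTanh(action_dim)

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.activation(self.body(obs))

Under these assumptions, the transfer-learning point in the abstract corresponds to freezing the parameters of self.body and fine-tuning only self.activation.beta when an actuator is replaced, rather than retraining the entire policy.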

Bibliographic Details
Main Authors: Mohammed Hossny (School of Engineering and IT, University of New South Wales, Canberra, ACT 2612, Australia); Julie Iskander (Walter and Eliza Hall Institute of Medical Research, Melbourne, VIC 3052, Australia); Mohamed Attia (Medical Research Institute, Alexandria University, Alexandria 21568, Egypt); Khaled Saleh (Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia); Ahmed Abobakr (Faculty of Computers and Artificial Intelligence, Cairo University, Cairo 12613, Egypt)
Format: Article
Language: English
Published: MDPI AG, 2021-09-01
Series: AI, Vol. 2, Iss. 4, pp. 464–476
ISSN: 2673-2688
DOI: 10.3390/ai2040029
Subjects: continuous control; deep reinforcement learning; actor-critic; DDPG
Online Access: https://www.mdpi.com/2673-2688/2/4/29