Deep Adversarial Reinforcement Learning Method to Generate Control Policies Robust Against Worst-Case Value Predictions
Over the last decade, methods for autonomous control by artificial intelligence have been developed extensively on the basis of deep reinforcement learning (DRL) technologies. However, despite these advances, robustness to noise in observation data remains an issue for autonomous control policies impleme...
| Main Authors: | Kohei Ohashi, Kosuke Nakanishi, Yuji Yasui, Shin Ishii |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2023-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10250423/ |
Similar Items
- Deep Adversarial Reinforcement Learning With Noise Compensation by Autoencoder
  by: Kohei Ohashi, et al.
  Published: (2021-01-01)
- Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW
  by: William Villegas-Ch, et al.
  Published: (2024-01-01)
- Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
  by: Xingjian Li, et al.
  Published: (2022-01-01)
- Scaleable input gradient regularization for adversarial robustness
  by: Chris Finlay, et al.
  Published: (2021-03-01)
- Robust data-driven adversarial false data injection attack detection method with deep Q-network in power systems
  by: Ran, Xiaohong, et al.
  Published: (2024)