Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
Deep neural network-based systems are now state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to ch...
| Main Authors: | Everett, Michael; Lutjens, Bjorn; How, Jonathan P |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Institute of Electrical and Electronics Engineers (IEEE), 2021 |
| Online Access: | https://hdl.handle.net/1721.1/134088 |
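The abstract above is cut off before it describes the paper's defense, so the following is only a loose, illustrative sketch of certified robustness to bounded state perturbations in deep RL, not necessarily the authors' method. It uses interval bound propagation through a toy ReLU Q-network to obtain guaranteed lower bounds on each action's value over an l-infinity ball around the observation, then selects the action with the best lower bound. The network sizes, weights, `eps`, and all function names are assumptions made for illustration.

```python
# Illustrative sketch: certified action selection under bounded state perturbation.
# Interval bound propagation (IBP) through a small ReLU Q-network yields guaranteed
# lower bounds on each action's Q-value for any observation within an l-infinity
# ball of radius eps; the agent then picks the action with the best lower bound.
# All weights, sizes, and names here are placeholders, not taken from the record.
import numpy as np


def ibp_bounds(lower, upper, weights, biases):
    """Propagate elementwise interval bounds through a ReLU MLP."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lower = np.maximum(new_lower, 0.0)
            new_upper = np.maximum(new_upper, 0.0)
        lower, upper = new_lower, new_upper
    return lower, upper


def robust_action(obs, eps, weights, biases):
    """Return the action maximizing the certified lower bound on Q(s, a)."""
    q_lower, _ = ibp_bounds(obs - eps, obs + eps, weights, biases)
    return int(np.argmax(q_lower))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy Q-network: 4-dim observation -> 16 hidden units -> 3 actions.
    weights = [rng.standard_normal((16, 4)), rng.standard_normal((3, 16))]
    biases = [np.zeros(16), np.zeros(3)]
    obs = rng.standard_normal(4)
    print("robust action:", robust_action(obs, eps=0.1, weights=weights, biases=biases))
```

IBP gives looser bounds than linear-relaxation certifiers, but it keeps the worst-case action-selection idea easy to see in a few lines.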
Similar Items
- Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
  by: Everett, Michael F, et al.
  Published: (2021)
- Safe Reinforcement Learning With Model Uncertainty Estimates
  by: Lutjens, Bjorn, et al.
  Published: (2020)
- Adversarial robustness of deep reinforcement learning
  by: Qu, Xinghua
  Published: (2022)
- Trustworthiness and certified robustness for deep learning
  by: Xia, Song
  Published: (2022)
- Active perception in adversarial scenarios using maximum entropy deep reinforcement learning
  by: Shen, Macheng, et al.
  Published: (2021)