Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning


Bibliographic Details
Main Authors: Everett, Michael, Lutjens, Bjorn, How, Jonathan P
Format: Article
Language:English
Published: Institute of Electrical and Electronics Engineers (IEEE) 2021
Online Access:https://hdl.handle.net/1721.1/134088
_version_ 1826205621328805888
author Everett, Michael
Lutjens, Bjorn
How, Jonathan P
author_facet Everett, Michael
Lutjens, Bjorn
How, Jonathan P
author_sort Everett, Michael
collection MIT
description Deep neural network-based systems are now state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was recently shown to cause an autonomous vehicle to swerve into another lane. In light of these dangers, numerous algorithms have been developed as defensive mechanisms against these adversarial inputs, some of which provide formal robustness guarantees or certificates. This work leverages research on certified adversarial robustness to develop an online certifiably robust defense for deep reinforcement learning algorithms. The proposed defense computes guaranteed lower bounds on state-action values during execution to identify and choose a robust action under a worst-case deviation in input space due to possible adversaries or noise. Moreover, the resulting policy comes with a certificate of solution quality, even though the true state and optimal action are unknown to the certifier due to the perturbations. The approach is demonstrated on a deep Q-network (DQN) policy and is shown to increase robustness to noise and adversaries in pedestrian collision avoidance scenarios, a classic control task, and Atari Pong. This article extends our prior work with new performance guarantees, extensions to other reinforcement learning algorithms, expanded results aggregated across more scenarios, an extension into scenarios with adversarial behavior, comparisons with a more computationally expensive method, and visualizations that provide intuition about the robustness algorithm.
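The defense described above selects the action whose certified worst-case value is highest, rather than the action with the highest nominal value. As a minimal illustration (not the article's implementation), the sketch below uses a linear Q-function, for which the exact lower bound over an L-infinity ball of radius eps around the observation has a closed form; the article instead obtains such lower bounds for deep networks via certified-bound methods. The function name `robust_action` and the linear setting are assumptions for illustration only.

```python
import numpy as np

def robust_action(W, b, s_obs, eps):
    """Robust action selection for a linear Q-function Q(s) = W @ s + b.

    Over the set ||s' - s_obs||_inf <= eps, the exact per-action lower
    bound is Q_a(s_obs) - eps * ||W_a||_1 (row-wise L1 norm), since the
    worst-case perturbation moves each input coordinate by eps against
    the sign of the corresponding weight. The robust policy then picks
    the action maximizing this lower bound.
    """
    q_nominal = W @ s_obs + b                      # Q-values at the observed state
    q_lower = q_nominal - eps * np.abs(W).sum(axis=1)  # certified worst-case values
    return int(np.argmax(q_lower)), q_lower
```

For example, with W = [[10, -10], [1, 1]], b = 0, and observation s = [0.5, 0.3], action 0 has the higher nominal value (2.0 vs. 0.8), but under eps = 0.1 its lower bound drops to 0.0 while action 1's is 0.6, so the robust policy prefers the less sensitive action 1.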
first_indexed 2024-09-23T13:15:59Z
format Article
id mit-1721.1/134088
institution Massachusetts Institute of Technology
language English
last_indexed 2024-09-23T13:15:59Z
publishDate 2021
publisher Institute of Electrical and Electronics Engineers (IEEE)
record_format dspace
spelling mit-1721.1/1340882021-10-28T03:54:38Z Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning Everett, Michael Lutjens, Bjorn How, Jonathan P Deep neural network-based systems are now state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was recently shown to cause an autonomous vehicle to swerve into another lane. In light of these dangers, numerous algorithms have been developed as defensive mechanisms against these adversarial inputs, some of which provide formal robustness guarantees or certificates. This work leverages research on certified adversarial robustness to develop an online certifiably robust defense for deep reinforcement learning algorithms. The proposed defense computes guaranteed lower bounds on state-action values during execution to identify and choose a robust action under a worst-case deviation in input space due to possible adversaries or noise. Moreover, the resulting policy comes with a certificate of solution quality, even though the true state and optimal action are unknown to the certifier due to the perturbations. The approach is demonstrated on a deep Q-network (DQN) policy and is shown to increase robustness to noise and adversaries in pedestrian collision avoidance scenarios, a classic control task, and Atari Pong. This article extends our prior work with new performance guarantees, extensions to other reinforcement learning algorithms, expanded results aggregated across more scenarios, an extension into scenarios with adversarial behavior, comparisons with a more computationally expensive method, and visualizations that provide intuition about the robustness algorithm.
2021-10-27T19:58:02Z 2021-10-27T19:58:02Z 2021 2021-04-30T16:21:24Z Article http://purl.org/eprint/type/ConferencePaper https://hdl.handle.net/1721.1/134088 en 10.1109/TNNLS.2021.3056046 IEEE Transactions on Neural Networks and Learning Systems Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/ application/pdf Institute of Electrical and Electronics Engineers (IEEE) arXiv
spellingShingle Everett, Michael
Lutjens, Bjorn
How, Jonathan P
Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
title Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
title_full Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
title_fullStr Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
title_full_unstemmed Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
title_short Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
title_sort certifiable robustness to adversarial state uncertainty in deep reinforcement learning
url https://hdl.handle.net/1721.1/134088
work_keys_str_mv AT everettmichael certifiablerobustnesstoadversarialstateuncertaintyindeepreinforcementlearning
AT lutjensbjorn certifiablerobustnesstoadversarialstateuncertaintyindeepreinforcementlearning
AT howjonathanp certifiablerobustnesstoadversarialstateuncertaintyindeepreinforcementlearning