Verifying reinforcement learning up to infinity

Formally verifying that reinforcement learning systems act safely is increasingly important, but existing methods only verify over finite time. This is of limited use for dynamical systems that run indefinitely. We introduce the first method for verifying the time-unbounded safety of neural networks controlling dynamical systems. We develop a novel abstract interpretation method which, by constructing adaptable template-based polyhedra using MILP and interval arithmetic, yields sound (safe and invariant) overapproximations of the reach set. This provides stronger safety guarantees than previous time-bounded methods and shows whether the agent has generalised beyond the length of its training episodes. Our method supports ReLU activation functions and systems with linear, piecewise linear and non-linear dynamics defined with polynomial and transcendental functions. We demonstrate its efficacy on a range of benchmark control problems.
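
The verification loop the abstract describes can be pictured as follows: propagate a set of states through the closed loop (ReLU controller composed with the dynamics) and widen it until it becomes invariant, that is, until it contains its own successor; an invariant set lying inside the safe region witnesses safety for all time, not just for a bounded horizon. The sketch below illustrates this idea only, under heavy simplifications that are not taken from the paper: a hypothetical one-dimensional system x' = 0.5x + 0.1u, a hand-coded single-neuron ReLU controller, axis-aligned interval boxes in place of the paper's adaptable template polyhedra, and plain interval arithmetic with a small bloating step in place of the MILP-based bound computation.

# Minimal sketch: time-unbounded safety via an inductive interval invariant.
# All names, constants and dynamics here are illustrative, not from the paper.

def relu_interval(lo, hi):
    # Interval image of ReLU (monotone, so apply it to both endpoints).
    return max(0.0, lo), max(0.0, hi)

def controller_interval(lo, hi):
    # Toy single-neuron ReLU controller u = -0.5 * relu(x).
    # The output weight is negative, so the bounds swap.
    rlo, rhi = relu_interval(lo, hi)
    return -0.5 * rhi, -0.5 * rlo

def step_interval(lo, hi):
    # One-step successor for the toy closed loop x' = 0.5*x + 0.1*u.
    # Both coefficients are positive, so lower bounds pair with lower bounds.
    ulo, uhi = controller_interval(lo, hi)
    return 0.5 * lo + 0.1 * ulo, 0.5 * hi + 0.1 * uhi

def verify_unbounded(lo, hi, safe_lo, safe_hi, bloat=0.01, max_iters=1000):
    # Widen the reach-set overapproximation until it is inductive: if
    # step(B) is contained in B and B lies inside the safe set, then every
    # trajectory starting in the initial box stays safe forever.
    for _ in range(max_iters):
        if lo < safe_lo or hi > safe_hi:
            return False, (lo, hi)          # left the safe set: inconclusive
        nlo, nhi = step_interval(lo, hi)
        if nlo >= lo and nhi <= hi:
            return True, (lo, hi)           # inductive invariant found
        # Join with the successor and bloat slightly to force convergence.
        if nlo < lo:
            lo = nlo - bloat
        if nhi > hi:
            hi = nhi + bloat
    return False, (lo, hi)                  # iteration budget exhausted

if __name__ == "__main__":
    proved, inv = verify_unbounded(lo=0.9, hi=1.0, safe_lo=-2.0, safe_hi=2.0)
    print("time-unbounded safety proved:", proved, "invariant:", inv)

Template polyhedra generalise these boxes by tracking bounds along many directions at once, and MILP yields exact per-direction bounds of a ReLU network over a polyhedron; intervals are the cheapest special case and correspondingly looser.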

Bibliographic Details
Main Authors: Bacci, E.; Giacobbe, M.; Parker, D.
Format: Conference item
Language: English
Published: International Joint Conferences on Artificial Intelligence Organization, 2021