Safety verification for deep neural networks with provable guarantees
Computing systems are becoming ever more complex, increasingly often incorporating deep learning components. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning. This paper describes progress with developing automated verification techniques for deep neural networks to ensure safety and robustness of their decisions with respect to input perturbations. This includes novel algorithms based on feature-guided search, games, global optimisation and Bayesian methods.
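To make the verified property concrete: the robustness question asked here is whether a network's decision is invariant under all bounded perturbations of a given input. The sketch below is purely illustrative and is not the paper's feature-guided, game-based, or Bayesian algorithm; the toy network weights, the input, and the radius are invented for the example. It only searches for a counterexample by random sampling, so unlike the paper's techniques it can falsify robustness but never prove it.

```python
# Minimal sketch (not the paper's algorithm): a pointwise robustness check
# for a toy feed-forward network. We sample perturbations delta with
# ||delta||_inf <= eps around an input x and report a failure if any
# perturbation changes the predicted class. All weights and inputs here
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: 4 inputs -> 8 hidden units (ReLU) -> 3 classes.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def predict(x):
    h = np.maximum(W1 @ x + b1, 0.0)    # ReLU hidden layer
    return int(np.argmax(W2 @ h + b2))  # predicted class label

def is_pointwise_robust(x, eps, n_samples=10_000):
    """Randomised falsification: return False if any sampled
    perturbation within the L-infinity eps-ball flips the label."""
    label = predict(x)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != label:
            return False  # counterexample found: not robust
    return True  # no counterexample found (evidence, not a proof)

x0 = rng.normal(size=4)
print("robust at eps=0.05:", is_pointwise_robust(x0, 0.05))
```

A provable guarantee of the kind the paper targets would instead require exhaustive coverage of the perturbation ball, for example by discretising it and bounding the network's behaviour between grid points, rather than sampling.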
Main Author: | Kwiatkowska, M |
---|---|
Format: | Conference item |
Published: | Leibniz International Proceedings in Informatics, LIPIcs, 2019 |
author | Kwiatkowska, M |
---|---|
collection | OXFORD |
description | Computing systems are becoming ever more complex, increasingly often incorporating deep learning components. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning. This paper describes progress with developing automated verification techniques for deep neural networks to ensure safety and robustness of their decisions with respect to input perturbations. This includes novel algorithms based on feature-guided search, games, global optimisation and Bayesian methods. |
format | Conference item |
id | oxford-uuid:5866ee47-a875-4c93-bd89-1a9352bfe10f |
institution | University of Oxford |
publishDate | 2019 |
publisher | Leibniz International Proceedings in Informatics, LIPIcs |
record_format | dspace |
title | Safety verification for deep neural networks with provable guarantees |