Safety verification for deep neural networks with provable guarantees
Computing systems are becoming ever more complex, increasingly often incorporating deep learning components. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning. This paper describes...
| Main Author: | Kwiatkowska, M |
|---|---|
| Format: | Conference item |
| Published: | Leibniz International Proceedings in Informatics, LIPIcs, 2019 |
Similar Items
- A game-based approximate verification of deep neural networks with provable guarantees
  by: Wu, M, et al. Published: (2019)
- Reachability analysis of deep neural networks with provable guarantees
  by: Ruan, W, et al. Published: (2018)
- Safety and robustness for deep learning with provable guarantees (keynote)
  by: Kwiatkowska, M. Published: (2019)
- Safety and robustness for deep learning with provable guarantees (invited paper - keynote)
  by: Kwiatkowska, M. Published: (2020)
- Global robustness evaluation of deep neural networks with provable guarantees for the Hamming distance
  by: Ruan, W, et al. Published: (2019)