Safety verification for deep neural networks with provable guarantees

Computing systems are becoming ever more complex, increasingly often incorporating deep learning components. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning. This paper describes...
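The instability mentioned in the abstract refers to the fact that an imperceptibly small change to an input can flip a network's prediction. As an illustration only (not taken from the paper), the following minimal sketch constructs such a perturbation with the fast gradient sign method; the toy model, input, and label are placeholder assumptions.

```python
# Minimal sketch (not from the paper): instability of a neural network
# under an adversarial perturbation, via the fast gradient sign method.
# The model, input point, and label below are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier: 2 inputs -> 2 classes (stand-in for a trained network).
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.tensor([[0.5, -1.2]], requires_grad=True)  # a single input point
y = torch.tensor([0])                                 # its assumed true label

# Gradient of the loss with respect to the input.
loss = loss_fn(model(x), y)
loss.backward()

# Fast gradient sign method: a small step in the direction that increases the loss.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
# For a trained network, the prediction often changes even for very small
# epsilon; this is the adversarial instability the abstract refers to.
```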


Bibliographic Details
Main Author: Kwiatkowska, M
Format: Conference item
Published: Leibniz International Proceedings in Informatics, LIPIcs 2019