Probabilistic safety for Bayesian neural networks
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, T ⊆ ℝ^m, we study the probability, w.r.t. the BNN posterior, that all the points in T are mapped to the same region S in the output space. In particular, this c...
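The quantity described in the abstract can be illustrated with a simple Monte Carlo estimate: sample weights from the posterior and record the fraction of samples under which every input in T lands inside the safe output set S. The toy linear "BNN", Gaussian posterior, and interval sets below are illustrative assumptions, not the paper's method, which the truncated abstract does not detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "BNN": f_w(x) = w * x, with a Gaussian posterior over w.
posterior_mean, posterior_std = 1.0, 0.1

# Compact input set T = [0.9, 1.1] (sampled on a grid);
# safe output region S = [0.7, 1.3].
T = np.linspace(0.9, 1.1, 50)
S_low, S_high = 0.7, 1.3

def is_safe(w):
    """True iff this weight sample maps ALL of T into S."""
    out = w * T
    return np.all((out >= S_low) & (out <= S_high))

# Probabilistic safety ~= fraction of posterior samples
# for which the whole input set stays inside S.
samples = rng.normal(posterior_mean, posterior_std, size=10_000)
p_safe = np.mean([is_safe(w) for w in samples])
print(p_safe)  # an estimate in [0, 1]
```

Sampling gives only a statistical estimate; the paper's contribution concerns certified bounds on this probability, which sampling alone does not provide.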
| Main authors: | Wicker, M; Laurenti, L; Patane, A; Kwiatkowska, M |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | Journal of Machine Learning Research, 2020 |
Similar items

- Probabilistic reach-avoid for Bayesian neural networks
  Author: Wicker, M, et al.
  Published: (2024)
- Adversarial robustness certification for Bayesian neural networks
  Author: Wicker, M, et al.
  Published: (2024)
- Statistical guarantees for the robustness of Bayesian neural networks
  Author: Cardelli, L, et al.
  Published: (2019)
- Certification of iterative predictions in Bayesian neural networks
  Author: Wicker, M, et al.
  Published: (2021)
- Robustness of Bayesian neural networks to gradient-based attacks
  Author: Carbone, G, et al.
  Published: (2020)