Probabilistic safety for Bayesian neural networks
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, T ⊆ ℝ^m, we study the probability w.r.t. the BNN posterior that all the points in T are mapped to the same region S in the output space. In particular, this c...
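The probabilistic safety notion in the abstract can be illustrated with a naive Monte Carlo sketch: sample networks from the posterior and count the fraction for which every point of T lands in S. This is an assumption-laden toy (a hypothetical one-parameter linear "BNN" with a Gaussian posterior, and a grid discretization of T that does not certify the full continuous set), not the certification method of the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "BNN": a scalar linear model y = W*x + b whose
# posterior over (W, b) is Gaussian. Illustrative stand-in only.
def sample_weights(n_samples):
    W = rng.normal(1.0, 0.1, size=(n_samples, 1))   # weight posterior samples
    b = rng.normal(0.0, 0.05, size=(n_samples, 1))  # bias posterior samples
    return W, b

def f(x, W, b):
    # Broadcasts to shape (n_samples, n_points)
    return W * x + b

# Compact input set T: a fine grid over [0.9, 1.1]
# (a grid only approximates the continuous set T).
T = np.linspace(0.9, 1.1, 101)

# Safe output region S = [0.5, 1.5]
def in_S(y):
    return (y >= 0.5) & (y <= 1.5)

# Monte Carlo estimate of probabilistic safety: the fraction of
# posterior samples for which ALL points of T are mapped into S.
W, b = sample_weights(10_000)
outputs = f(T[None, :], W, b)
safe_per_sample = in_S(outputs).all(axis=1)
p_safe = safe_per_sample.mean()
print(f"estimated probabilistic safety: {p_safe:.3f}")
```

With this posterior the safety constraint is violated only in the tails of the weight distribution, so the estimate is close to 1; a certification approach would instead bound this probability for all of T rather than a sampled grid.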
Main Authors: Wicker, M, Laurenti, L, Patane, A, Kwiatkowska, M
Format: Conference item
Language: English
Published: Journal of Machine Learning Research, 2020
Similar Items
- Probabilistic reach-avoid for Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2024)
- Adversarial robustness certification for Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2024)
- Statistical guarantees for the robustness of Bayesian neural networks
  by: Cardelli, L, et al.
  Published: (2019)
- Certification of iterative predictions in Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2021)
- Robustness of Bayesian neural networks to gradient-based attacks
  by: Carbone, G, et al.
  Published: (2020)