Probabilistic safety for Bayesian neural networks
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, T ⊆ R^m, we study the probability w.r.t. the BNN posterior that all the points in T are mapped to the same region S in the output space. In particular, this c...
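As a rough formalisation of the property described in the abstract, the probability of interest can be sketched as

    P_safe(T, S) = Prob_{w ~ p(w | D)} [ f^w(x) ∈ S for all x ∈ T ],

where p(w | D) denotes the BNN posterior over weights and f^w is the deterministic network obtained by fixing the weights to w; the names P_safe, p(w | D) and f^w are notational conventions assumed here, not taken from the record.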
Main authors: Wicker, M; Laurenti, L; Patane, A; Kwiatkowska, M
Material type: Conference item
Language: English
Published: Journal of Machine Learning Research, 2020
Similar works
- Probabilistic reach-avoid for Bayesian neural networks
  Author: Wicker, M, et al.
  Published: (2024)
- Adversarial robustness certification for Bayesian neural networks
  Author: Wicker, M, et al.
  Published: (2024)
- Statistical guarantees for the robustness of Bayesian neural networks
  Author: Cardelli, L, et al.
  Published: (2019)
- Certification of iterative predictions in Bayesian neural networks
  Author: Wicker, M, et al.
  Published: (2021)
- Robustness of Bayesian neural networks to gradient-based attacks
  Author: Carbone, G, et al.
  Published: (2020)