Probabilistic safety for Bayesian neural networks
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, T ⊆ R^m, we study the probability w.r.t. the BNN posterior that all the points in T are mapped to the same region S in the output space. In particular, this c...
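The quantity described in the abstract, the posterior probability that every point of T is mapped into S, can be illustrated with a naive Monte Carlo estimator. This is only a hedged sketch with a made-up toy network and synthetic "posterior" weight samples; it is not the certification method of the paper, which computes guaranteed bounds rather than sampling-based estimates. All names (`sample_weights`, `f`, `safety_probability`) and the grid discretization of T are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "posterior": weight samples for a one-hidden-layer network
# f_w(x) = tanh(x @ W1) @ W2. In a real BNN the samples would come from
# Bayesian inference; here we just draw weights around a mean.
def sample_weights(n):
    return [(rng.normal(1.0, 0.1, (2, 4)), rng.normal(1.0, 0.1, (4, 1)))
            for _ in range(n)]

def f(x, w):
    W1, W2 = w
    return np.tanh(x @ W1) @ W2

# T: a grid over a small input box in R^2 (a compact input set);
# S: the output interval [lo, hi].
T = np.stack(
    np.meshgrid(np.linspace(-0.1, 0.1, 5), np.linspace(-0.1, 0.1, 5)), -1
).reshape(-1, 2)

def safety_probability(weight_samples, T, lo, hi):
    # Fraction of posterior samples w for which f_w maps *every*
    # point of T into S -- a Monte Carlo estimate of the safety probability.
    safe = [np.all((f(T, w) >= lo) & (f(T, w) <= hi)) for w in weight_samples]
    return float(np.mean(safe))

p_safe = safety_probability(sample_weights(1000), T, lo=-2.0, hi=2.0)
```

Note that grid sampling of T only checks finitely many points; the paper's setting requires the property to hold for all points of the compact set, which is why certified bounding techniques, rather than sampling, are needed for guarantees.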
Main Authors: Wicker, M; Laurenti, L; Patane, A; Kwiatkowska, M
Format: Conference item
Language: English
Published: Journal of Machine Learning Research, 2020
Similar Items
- Probabilistic reach-avoid for Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2024)
- Adversarial robustness certification for Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2024)
- Statistical guarantees for the robustness of Bayesian neural networks
  by: Cardelli, L, et al.
  Published: (2019)
- Certification of iterative predictions in Bayesian neural networks
  by: Wicker, M, et al.
  Published: (2021)
- Robustness of Bayesian neural networks to gradient-based attacks
  by: Carbone, G, et al.
  Published: (2020)