Probabilistic safety for Bayesian neural networks
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, T ⊆ ℝ^m, we study the probability w.r.t. the BNN posterior that all the points in T are mapped to the same region S in the output space. In particular, this c...
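The abstract defines probabilistic safety as the posterior probability that every input in T is mapped into the output region S. As a rough illustration only, the sketch below estimates this quantity by naive Monte Carlo over posterior weight samples; the names `posterior_samples`, `network`, `in_S`, and the finite discretisation of T are assumptions made for this example, not the method developed in the paper.

```python
import numpy as np

# Hypothetical sketch: a crude Monte Carlo estimate of probabilistic safety,
#   P_safe(T, S) = P_{w ~ posterior}[ f^w(x) in S for all x in T ],
# where f^w is the network with weights w. This is only an empirical estimate,
# not a certified bound.

def probabilistic_safety_mc(posterior_samples, network, T_points, in_S):
    """Fraction of posterior weight samples under which every point of the
    (discretised) input set T is mapped into the output region S."""
    safe = 0
    for w in posterior_samples:                  # w ~ posterior over BNN weights
        outputs = network(T_points, w)           # forward pass over all points in T
        if all(in_S(y) for y in outputs):        # safety must hold on all of T
            safe += 1
    return safe / len(posterior_samples)

# Toy usage with stand-in objects (all hypothetical):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    posterior_samples = rng.normal(size=(100, 2))           # toy weight samples
    network = lambda X, w: X @ w                             # toy linear "BNN"
    T_points = 1.0 + rng.uniform(-0.1, 0.1, size=(50, 2))    # discretised input set T
    in_S = lambda y: y > 0                                    # output region S = (0, inf)
    print(probabilistic_safety_mc(posterior_samples, network, T_points, in_S))
```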
Main authors:
Format: Conference item
Language: English
Published: Journal of Machine Learning Research, 2020