Robustness of Bayesian neural networks to gradient-based attacks
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-dat...
| Main Authors | |
|---|---|
| Format | Conference item |
| Language | English |
| Published | Neural Information Processing Systems Foundation, Inc., 2020 |