On the adversarial robustness of Bayesian machine learning models
Bayesian machine learning (ML) models have long been advocated as an important tool for safe artificial intelligence. Yet, little is known about their vulnerability to adversarial attacks. Such attacks aim to cause undesired model behaviour (e.g. misclassification) by crafting small perturbati...
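As a minimal illustration of the kind of attack the abstract refers to, the sketch below applies a fast-gradient-sign-style perturbation to a toy logistic-regression classifier. The classifier, its weights, and the epsilon value are illustrative assumptions and are not taken from the thesis itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.3):
    """Craft a small, sign-based perturbation of input x that increases the
    cross-entropy loss for the true label y (0 or 1) on a linear classifier."""
    p = sigmoid(w @ x + b)                 # predicted probability of class 1
    grad_x = (p - y) * w                   # gradient of cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)   # small perturbation in the loss-increasing direction

# Toy example: a fixed linear classifier and a correctly classified point.
w = np.array([1.5, -2.0])
b = 0.1
x = np.array([0.3, -0.1])
y = 1

x_adv = fgsm_perturb(x, y, w, b)
print("clean prediction:      ", sigmoid(w @ x + b))      # > 0.5, correct class
print("adversarial prediction:", sigmoid(w @ x_adv + b))  # < 0.5, misclassified
```

With these illustrative values, the perturbed input flips the prediction from the correct class to the wrong one, which is the undesired behaviour the abstract describes.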
Main Author: | |
---|---|
Other Authors: | |
Format: | Thesis |
Language: | English |
Published: | 2021 |
Subjects: | |