Adversarial robustness guarantees for classification with Gaussian Processes
We investigate the adversarial robustness of Gaussian process classification (GPC) models. Specifically, given a compact subset of the input space T ⊆ ℝ^d enclosing a test point x∗ and a GPC trained on a dataset, we aim to compute the minimum and the maximum classification probability for the GPC over all...
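The quantity described in the abstract can be illustrated, as a rough sketch, with a naive empirical approximation: sample a grid over a box-shaped region T around x∗ and record the smallest and largest predicted class probabilities. The snippet below uses scikit-learn's GaussianProcessClassifier on toy data; all data, parameters, and the grid resolution are illustrative assumptions, and this is only an empirical estimate, not the certified bounds the paper computes.

```python
# Empirical (non-certified) min/max class probability over a box T around x*.
# This is an illustrative sketch, not the paper's bounding method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy 2-D binary classification data (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

gpc = GaussianProcessClassifier(kernel=RBF(length_scale=0.5)).fit(X, y)

# Region T: an axis-aligned box of radius eps around the test point x*
x_star = np.array([0.1, -0.2])
eps = 0.05
axes = [np.linspace(x_star[i] - eps, x_star[i] + eps, 25) for i in range(2)]
grid = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, 2)

# Empirical range of the class-1 probability over the sampled points of T
probs = gpc.predict_proba(grid)[:, 1]
print(f"empirical min over T: {probs.min():.4f}")
print(f"empirical max over T: {probs.max():.4f}")
```

Because it only evaluates finitely many grid points, this sketch can miss the true extrema inside T; the paper's contribution is precisely to bound those extrema with guarantees.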
Main Authors: Blaas, A, Patane, A, Laurenti, L, Cardelli, L, Kwiatkowska, M, Roberts, S
Format: Conference item
Language: English
Published: Proceedings of Machine Learning Research, 2020
Similar Items
- Adversarial robustness guarantees for Gaussian processes
  by: Patane, A, et al.
  Published: (2022)
- Robustness guarantees for Bayesian inference with Gaussian processes
  by: Cardelli, L, et al.
  Published: (2019)
- Safety guarantees for iterative predictions with Gaussian Processes
  by: Polymenakos, K, et al.
  Published: (2021)
- Statistical guarantees for the robustness of Bayesian neural networks
  by: Cardelli, L, et al.
  Published: (2019)
- On the adversarial robustness of Gaussian processes
  by: Patanè, A
  Published: (2020)