Certifiers make neural networks vulnerable to availability attacks
To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies when AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions. They guarantee, for some predictions, that a certain class of manipulations...
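The abstract's certify-then-fallback idea can be illustrated with a minimal sketch. Here `model`, `certify`, and `fallback` are hypothetical placeholders, not the paper's actual implementation: `certify` stands in for any certifier that returns True only when it can prove the prediction stable under its threat model.

```python
import torch


def predict_with_fallback(model, certify, x, fallback):
    """Return the model's prediction only if it can be certified.

    `certify(model, x, pred)` is a placeholder for any certifier that
    proves no perturbation within its threat model (e.g., an L2 ball
    around x) can change the prediction `pred`.
    """
    with torch.no_grad():
        logits = model(x)
    pred = logits.argmax(dim=-1)
    if certify(model, x, pred):
        return pred        # certified: the prediction can be trusted
    return fallback(x)     # uncertified: defer, e.g., to human review
```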
Main Authors: Lorenz, T; Kwiatkowska, M; Fritz, M
Material Type: Conference item
Language: English
Published: Association for Computing Machinery, 2023
Similar Items
- FullCert: deterministic end-to-end certification for training and inference of neural networks
  Author: Lorenz, T, et al.
  Published: (2024)
- Certified Robustness to Text Adversarial Attacks by Randomized [MASK]
  Author: Jiehang Zeng, et al.
  Published: (2023-06-01)
- Attack Vulnerability of Network Controllability.
  Author: Zhe-Ming Lu, et al.
  Published: (2016-01-01)
- Bayesian inference with certifiable adversarial robustness
  Author: Wicker, M, et al.
  Published: (2021)
- Vulnerability analysis on noise-injection based hardware attack on deep neural networks
  Author: Liu, Wenye, et al.
  Published: (2020)