Certifiers make neural networks vulnerable to availability attacks
To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies for cases where AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions: they guarantee, for some predictions, that a certain class of manipulations…
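The guarantee the abstract refers to can be made concrete with a small sketch. Below is a minimal illustration of interval bound propagation (IBP), one standard certification technique; the two-layer network, its random weights, and the `certify` helper are hypothetical examples for illustration, not the specific certifier studied in the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def affine_bounds(lo, hi, W, b):
    # Propagate the box [lo, hi] through W @ x + b in center/radius form.
    center = (hi + lo) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def certify(x, eps, W1, b1, W2, b2):
    # Bound all logits over the L-infinity ball of radius eps around x.
    lo, hi = affine_bounds(x - eps, x + eps, W1, b1)
    lo, hi = relu(lo), relu(hi)  # ReLU is monotone, so the box stays sound
    lo, hi = affine_bounds(lo, hi, W2, b2)
    pred = int(np.argmax(W2 @ relu(W1 @ x + b1) + b2))
    # Certified iff the predicted logit's lower bound beats every other
    # logit's upper bound; otherwise the certifier abstains (returns None).
    return pred if lo[pred] > np.delete(hi, pred).max() else None

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x = rng.normal(size=4)
print(certify(x, eps=0.01, W1=W1, b1=b1, W2=W2, b2=b2))
```

The abstain path (returning `None` here, triggering the fallback strategy) appears to be the availability surface the title alludes to: an adversary who can force the certifier to abstain can deny service.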
| Main Authors: | Lorenz, T; Kwiatkowska, M; Fritz, M |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | Association for Computing Machinery, 2023 |
Similar Items

- FullCert: deterministic end-to-end certification for training and inference of neural networks
  By: Lorenz, T, et al.
  Published: (2024)
- Certified Robustness to Text Adversarial Attacks by Randomized [MASK]
  By: Jiehang Zeng, et al.
  Published: (2023-06-01)
- Attack Vulnerability of Network Controllability.
  By: Zhe-Ming Lu, et al.
  Published: (2016-01-01)
- Bayesian inference with certifiable adversarial robustness
  By: Wicker, M, et al.
  Published: (2021)
- Vulnerability analysis on noise-injection based hardware attack on deep neural networks
  By: Liu, Wenye, et al.
  Published: (2020)