Certifiers make neural networks vulnerable to availability attacks
To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies when AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions. They guarantee for some predictions that a certain class of manipulations...
| Main Authors: | Lorenz, T; Kwiatkowska, M; Fritz, M |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | Association for Computing Machinery, 2023 |
Similar Items
- FullCert: deterministic end-to-end certification for training and inference of neural networks
  by: Lorenz, T, et al.
  Published: (2024)
- Certified grasping
  by: Aceituno-Cabezas, Bernardo, et al.
  Published: (2024)
- Leveraging imperfect restoration for data availability attack
  by: Huang, Yi, et al.
  Published: (2024)
- 'Make flu shots available through Socso scheme'
  by: Yusof, Teh Athira
  Published: (2024)
- Backdoor attacks in neural networks
  by: Liew, Sher Yun
  Published: (2024)