Certifiers make neural networks vulnerable to availability attacks

To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies when AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions. They guarantee, for some predictions, that a certain class of manipulations...
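To make the fallback pattern described above concrete, here is a minimal sketch in Python. All names in it (`predict_with_fallback`, `model`, `certify`, `fallback`) are hypothetical stand-ins, not the paper's API: the idea is simply that a prediction is acted on only when a certifier can vouch for it, and otherwise a fallback strategy takes over.

    from typing import Any, Callable

    def predict_with_fallback(
        model: Callable[[Any], int],        # hypothetical classifier
        certify: Callable[[Callable[[Any], int], Any], bool],  # hypothetical certifier
        fallback: Callable[[Any], int],     # hypothetical fallback strategy
        x: Any,
    ) -> int:
        """Return the model's prediction only if a certifier guarantees its
        robustness; otherwise defer to a fallback (e.g. human review)."""
        prediction = model(x)
        if certify(model, x):   # certifier vouches for this prediction
            return prediction
        return fallback(x)      # prediction cannot be trusted: fall back

Under this reading, an attacker who can systematically drive `certify` to return False forces the system into its fallback path, which is the availability concern the title refers to.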


Bibliographic details

Main Authors: Lorenz, T, Kwiatkowska, M, Fritz, M
Format: Conference item
Language: English
Published: Association for Computing Machinery, 2023