Certifiers make neural networks vulnerable to availability attacks

To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies when AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions. They guarantee for some predictions that a certain class of manipulati...
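The certifiers described above guarantee, for a given input, that no manipulation within some bounded perturbation set can change the model's prediction. As a minimal illustrative sketch (not the paper's construction), the following shows how such a certificate can be checked exactly for a linear classifier under an L-infinity perturbation ball; the function name and toy weights are assumptions for this example.

```python
import numpy as np

# Toy sketch: exact robustness certification for a linear classifier
# f(x) = W @ x + b under perturbations ||delta||_inf <= eps.
# A prediction is "certified" if no in-ball perturbation can flip the argmax.

def certify_linear(W, b, x, eps):
    logits = W @ x + b
    pred = int(np.argmax(logits))
    # For a linear model, the worst-case drop of the margin
    # (logit[pred] - logit[j]) over ||delta||_inf <= eps is
    # exactly eps * ||W[pred] - W[j]||_1 (Hoelder's inequality).
    for j in range(W.shape[0]):
        if j == pred:
            continue
        margin = logits[pred] - logits[j]
        worst_case = margin - eps * np.abs(W[pred] - W[j]).sum()
        if worst_case <= 0:
            return pred, False  # some in-ball manipulation may change the class
    return pred, True  # prediction certified robust within the ball

W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([2.0, 0.5])
print(certify_linear(W, b, x, eps=0.1))  # -> (0, True): certified
print(certify_linear(W, b, x, eps=2.0))  # -> (0, False): not certified
```

When certification fails, a deployed system would typically trigger a fallback (e.g., abstaining or deferring to a human), which is precisely the mechanism an availability attacker can target.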


Bibliographic details

Main authors: Lorenz, T., Kwiatkowska, M., Fritz, M.
Format: Conference item
Language: English
Published: Association for Computing Machinery, 2023
