SegPGD: an effective and efficient adversarial attack for evaluating and boosting segmentation robustness

Deep neural network-based image classifiers are vulnerable to adversarial perturbations: they can be easily fooled by adding small, artificial, imperceptible perturbations to input images. As one of the most effective defense strategies, adversarial training was proposed t...
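The abstract refers to PGD-style perturbations and adversarial training. As a rough illustration only (not the paper's SegPGD method), the sketch below shows a plain L-infinity PGD attack in PyTorch; the function name `pgd_attack`, the `model`, and the hyper-parameters are illustrative assumptions.

```python
# A minimal PGD sketch, assuming a PyTorch model whose output works with
# cross-entropy loss (per-pixel logits for segmentation also fit this shape).
# This is generic PGD, not the paper's SegPGD loss weighting.
import torch
import torch.nn.functional as F


def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft adversarial examples inside an L-infinity ball of radius eps."""
    adv = images.clone().detach()
    # Random start inside the epsilon ball (illustrative choice).
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)
    adv = torch.clamp(adv, 0, 1)

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the epsilon ball and valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.clamp(adv, images - eps, images + eps)
        adv = torch.clamp(adv, 0, 1)
    return adv
```

In adversarial training, such perturbed inputs would be generated on the fly and mixed into the training batches, as the abstract describes.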

Detailed description

Bibliographic details
Main authors: Gu, J, Zhao, H, Tresp, V, Torr, PHS
Format: Conference item
Language: English
Published: Springer 2022