SegPGD: an effective and efficient adversarial attack for evaluating and boosting segmentation robustness

Deep neural network-based image classifiers are vulnerable to adversarial perturbations: they can easily be fooled by adding small, artificial, imperceptible perturbations to input images. As one of the most effective defense strategies, adversarial training was proposed t...
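For illustration, the kind of perturbation the abstract refers to can be sketched as a standard untargeted PGD-style attack on a classifier. This is a minimal sketch only, assuming PyTorch; the model and tensors are hypothetical placeholders, and it is not the paper's SegPGD method.

    # Minimal PGD sketch (assumed PyTorch API; illustrative only, not SegPGD).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        """Return an adversarial example inside an L-infinity ball of radius eps."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Small gradient-ascent step, then project back into the eps-ball around x.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
        return x_adv

In adversarial training, examples produced this way would be injected into the training batches so the model learns to resist them.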

Detailed description

Bibliographic details
Main authors: Gu, J, Zhao, H, Tresp, V, Torr, PHS
Format: Conference item
Language: English
Published: Springer 2022