SegPGD: an effective and efficient adversarial attack for evaluating and boosting segmentation robustness
Deep neural network-based image classifiers are vulnerable to adversarial perturbations: they can be easily fooled by adding small, imperceptible artificial perturbations to input images. As one of the most effective defense strategies, adversarial training was proposed to ...
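The abstract refers to adversarial perturbations crafted by gradient-based attacks, on which the paper's SegPGD method builds. As an illustration only, here is a minimal sketch of a standard PGD attack against a PyTorch image classifier; it is not the paper's SegPGD algorithm (which modifies the loss for segmentation), and the model, epsilon, and step-size values are assumptions for the example.

```python
# Illustrative sketch of a standard PGD attack (not the paper's SegPGD variant).
# Assumes a PyTorch classifier `model` mapping images in [0, 1] to class logits.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Return adversarial examples within an L-infinity ball of radius eps."""
    model.eval()
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        # Ascend the loss, then project back into the eps-ball and valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```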
Main Authors: | , , , |
---|---|
Format: | Conference item |
Language: | English |
Published: | Springer, 2022 |