On the robustness of semantic segmentation models to adversarial attacks

Bibliographic Details
Main Authors: Arnab, A; Miksik, O; Torr, PHS
Format: Journal article
Language: English
Published: Institute of Electrical and Electronics Engineers, 2019
Identifier: oxford-uuid:b8cf8a4d-4005-49ca-b7a0-22cc03eb5469
Collection: OXFORD
Institution: University of Oxford

Description: Deep Neural Networks (DNNs) have demonstrated exceptional performance on most recognition tasks, such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention, but it has not been extensively studied on multiple, large-scale datasets or on structured prediction tasks such as semantic segmentation, which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what is, to our knowledge, the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing (and, more generally, input transformations) naturally implement recently proposed adversarial defences. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show how to benchmark robustness effectively and identify which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.
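Evaluations of the kind the description refers to typically use gradient-based attacks such as FGSM. The record does not name the attacks used, so the following is a minimal, illustrative sketch only, assuming a PyTorch segmentation model that maps images [B, 3, H, W] to per-pixel logits [B, C, H, W]; the model and data interfaces are placeholders, not the authors' setup.

```python
# Illustrative sketch: untargeted FGSM attack on a segmentation model.
# Assumptions (not from the record): a PyTorch model producing per-pixel
# logits [B, C, H, W] and integer ground-truth labels [B, H, W].
import torch
import torch.nn.functional as F

def fgsm_segmentation_attack(model, images, labels, epsilon=8 / 255):
    """Return adversarial images within an L-infinity ball of radius epsilon."""
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)                  # [B, C, H, W] per-pixel logits
    # Mean cross-entropy over all pixels; increasing it degrades the
    # predicted segmentation across the whole image at once.
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    # Single signed-gradient step, then clamp back to the valid pixel range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Robustness can then be benchmarked by comparing a model's mean IoU on clean versus adversarial inputs across architectures, which is the style of comparison the description outlines.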
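The description also notes that multiscale processing, and input transformations more generally, can naturally implement proposed defences. A hedged sketch of that idea under the same assumed PyTorch interface: averaging per-pixel class probabilities over several rescaled copies of the input, so a perturbation tuned at one resolution must also survive resampling. This illustrates the mechanism only; it is not the paper's implementation.

```python
# Illustrative sketch: multiscale inference as an input transformation.
# The scales and interface are assumptions, not the authors' configuration.
import torch
import torch.nn.functional as F

def multiscale_predict(model, images, scales=(0.5, 0.75, 1.0)):
    """Average per-pixel class probabilities over several input scales."""
    _, _, h, w = images.shape
    probs = 0.0
    for s in scales:
        scaled = F.interpolate(images, scale_factor=s, mode="bilinear",
                               align_corners=False)
        # Resize logits back to the original resolution before averaging.
        logits = F.interpolate(model(scaled), size=(h, w), mode="bilinear",
                               align_corners=False)
        probs = probs + logits.softmax(dim=1)
    return probs / len(scales)
```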