Robustness of 3D deep learning in an adversarial setting

Bibliographic Details
Main Authors: Wicker, M; Kwiatkowska, M
Format: Conference item
Language: English
Published: IEEE, 2020
Institution: University of Oxford
Collection: OXFORD
Record ID: oxford-uuid:08adb6c6-dbc1-4544-8b1b-1bcc1ddadcf8

Full description

Understanding the spatial arrangement and nature of real-world objects is of paramount importance to many complex engineering tasks, including autonomous navigation. Deep learning has revolutionized state-of-the-art performance for tasks in 3D environments; however, relatively little is known about the robustness of these approaches in an adversarial setting. The lack of comprehensive analysis makes it difficult to justify deployment of 3D deep learning models in real-world, safety-critical applications. In this work, we develop an algorithm for analysis of pointwise robustness of neural networks that operate on 3D data. We show that current approaches presented for understanding the resilience of state-of-the-art models vastly overestimate their robustness. We then use our algorithm to evaluate an array of state-of-the-art models in order to demonstrate their vulnerability to occlusion attacks. We show that, in the worst case, these networks can be reduced to 0% classification accuracy after the occlusion of at most 6.5% of the occupied input space.
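
To make the occlusion attack concrete, below is a minimal sketch in PyTorch, assuming a point-cloud classifier (e.g., a PointNet-style model taking a (B, N, 3) tensor). The function name and the greedy leave-one-out saliency heuristic are illustrative assumptions, not the authors' exact algorithm, which is guided by their pointwise robustness analysis.

```python
import torch

def greedy_occlusion_attack(model, points, label, budget=0.065):
    """Hypothetical greedy occlusion attack (illustrative; not the
    paper's exact algorithm). Removes, one point at a time, the point
    whose deletion most lowers the true-class confidence, until the
    model misclassifies or the occlusion budget (fraction of the
    occupied input, here 6.5%) is exhausted.
    """
    model.eval()
    pts = points.clone()                      # (N, 3) point cloud
    max_removals = int(budget * pts.shape[0])
    with torch.no_grad():
        for _ in range(max_removals):
            if model(pts.unsqueeze(0)).argmax(dim=1).item() != label:
                return pts, True              # attack already succeeded
            # Leave-one-out saliency: true-class confidence after
            # removing each point (O(N) forward passes per step).
            scores = []
            for i in range(pts.shape[0]):
                reduced = torch.cat([pts[:i], pts[i + 1:]])
                probs = model(reduced.unsqueeze(0)).softmax(dim=1)
                scores.append(probs[0, label].item())
            # Drop the point whose removal hurts the true class most.
            i_best = min(range(len(scores)), key=scores.__getitem__)
            pts = torch.cat([pts[:i_best], pts[i_best + 1:]])
        succeeded = model(pts.unsqueeze(0)).argmax(dim=1).item() != label
    return pts, succeeded
```

The leave-one-out scoring is expensive (O(N) forward passes per removed point), so a practical implementation would batch the candidate clouds or use a cheaper saliency estimate; the sketch only illustrates why a small occlusion budget can suffice to flip a prediction.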