Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter
The adoption of deep learning-based solutions now pervades nearly every area of everyday life, showing improved performance with respect to classical systems. Since many applications deal with sensitive data and procedures, there is a strong demand to know how reliable such technologies actually are. This work analyzes the robustness characteristics of a specific kind of deep neural network, the neural ordinary differential equation (N-ODE) network. These networks are notable for their effectiveness and for a peculiar property: a tolerance parameter, tunable at test time, that trades accuracy against efficiency. Adjusting this tolerance parameter also grants robustness against adversarial attacks; notably, decoupling its value between training and test time can strongly reduce the attack success rate. On this basis, we show how the tolerance can be adjusted during the prediction phase to improve the robustness of N-ODEs to adversarial attacks. In particular, we demonstrate how to exploit this property to construct an effective detection strategy and increase the chances of identifying adversarial examples in a non-zero-knowledge attack scenario. Our experimental evaluation on two standard image classification benchmarks showed that the proposed detection technique rejects a large fraction of adversarial examples while retaining most pristine samples.
Main Authors: | Fabio Carrara, Roberto Caldelli, Fabrizio Falchi, Giuseppe Amato |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-11-01 |
Series: | Information |
Subjects: | neural ordinary differential equation; adversarial defense; image classification |
Online Access: | https://www.mdpi.com/2078-2489/13/12/555 |
_version_ | 1827638466252374016 |
author | Fabio Carrara, Roberto Caldelli, Fabrizio Falchi, Giuseppe Amato |
author_sort | Fabio Carrara |
collection | DOAJ |
description | The adoption of deep learning-based solutions now pervades nearly every area of everyday life, showing improved performance with respect to classical systems. Since many applications deal with sensitive data and procedures, there is a strong demand to know how reliable such technologies actually are. This work analyzes the robustness characteristics of a specific kind of deep neural network, the neural ordinary differential equation (N-ODE) network. These networks are notable for their effectiveness and for a peculiar property: a tolerance parameter, tunable at test time, that trades accuracy against efficiency. Adjusting this tolerance parameter also grants robustness against adversarial attacks; notably, decoupling its value between training and test time can strongly reduce the attack success rate. On this basis, we show how the tolerance can be adjusted during the prediction phase to improve the robustness of N-ODEs to adversarial attacks. In particular, we demonstrate how to exploit this property to construct an effective detection strategy and increase the chances of identifying adversarial examples in a non-zero-knowledge attack scenario. Our experimental evaluation on two standard image classification benchmarks showed that the proposed detection technique rejects a large fraction of adversarial examples while retaining most pristine samples. |
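The mechanism the abstract describes — decoupling the ODE solver's tolerance between training and test time, and rejecting inputs whose prediction is unstable under that change — can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's implementation: the step-doubling Euler solver, the fixed linear "ODE block", and all names and threshold values (`adaptive_odeint`, `detect_adversarial`, `train_tol`, `test_tol`, `thresh`) are hypothetical stand-ins.

```python
# Hypothetical sketch of tolerance-decoupled N-ODE prediction; the solver,
# dynamics, and thresholds are illustrative, not the paper's implementation.
import numpy as np

def adaptive_odeint(f, z0, t1=1.0, tol=1e-3):
    """Integrate dz/dt = f(z) from t=0 to t1 with step-doubling error control."""
    z, t, h = z0.astype(float), 0.0, 0.1
    while t < t1:
        h = min(h, t1 - t)
        full = z + h * f(z)                   # one Euler step of size h
        half = z + (h / 2) * f(z)
        two_half = half + (h / 2) * f(half)   # two Euler steps of size h/2
        err = np.max(np.abs(two_half - full)) # local error estimate
        if err <= tol:
            z, t = two_half, t + h            # accept and grow the step
            h *= 1.5
        else:
            h *= 0.5                          # reject and shrink the step
    return z

def predict(x, tol):
    """Toy 'ODE block': fixed linear dynamics followed by an argmax readout."""
    W = np.array([[0.5, -1.0], [1.0, 0.3]])   # stand-in for trained weights
    z1 = adaptive_odeint(lambda z: W @ z, x, tol=tol)
    return int(np.argmax(z1)), z1

def detect_adversarial(x, train_tol=1e-3, test_tol=1e-1, thresh=0.5):
    """Flag inputs whose output is unstable when the tolerance is decoupled."""
    y_train, z_train = predict(x, train_tol)  # tolerance used at training time
    y_test, z_test = predict(x, test_tol)     # coarser test-time tolerance
    drift = np.linalg.norm(z_train - z_test)
    return bool(y_train != y_test or drift > thresh)
```

In the paper's setting the ODE block would be a trained network and the rejection threshold would be calibrated on clean validation data; here a fixed linear map and an arbitrary threshold stand in, so the detector's decisions are only illustrative.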
first_indexed | 2024-03-09T16:17:45Z |
format | Article |
id | doaj.art-6cb8841ae23e4f93adf9fa2a655f739c |
institution | Directory Open Access Journal |
issn | 2078-2489 |
language | English |
last_indexed | 2024-03-09T16:17:45Z |
publishDate | 2022-11-01 |
publisher | MDPI AG |
record_format | Article |
series | Information |
spelling | doaj.art-6cb8841ae23e4f93adf9fa2a655f739c (eng, 2023-11-24T15:37:03Z). MDPI AG, Information, ISSN 2078-2489, 2022-11-01, vol. 13, no. 12, article 555, DOI 10.3390/info13120555. Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter. Fabio Carrara (Istituto di Scienza e Tecnologie dell'Informazione, 56124 Pisa, Italy); Roberto Caldelli (Media Integration and Communication Center, National Inter-University Consortium for Telecommunications (CNIT), 50134 Florence, Italy); Fabrizio Falchi (Istituto di Scienza e Tecnologie dell'Informazione, 56124 Pisa, Italy); Giuseppe Amato (Istituto di Scienza e Tecnologie dell'Informazione, 56124 Pisa, Italy). https://www.mdpi.com/2078-2489/13/12/555. Keywords: neural ordinary differential equation; adversarial defense; image classification |
title | Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter |
topic | neural ordinary differential equation; adversarial defense; image classification |
url | https://www.mdpi.com/2078-2489/13/12/555 |