Low-Pass Image Filtering to Achieve Adversarial Robustness
In this paper, we continue our research cycle on the properties of convolutional neural network-based image recognition systems and on ways to improve their noise immunity and robustness. Adversarial attacks are currently a popular research area related to artificial neural networks. Adversarial perturbations of an image are barely perceptible to the human eye, yet they drastically reduce a neural network's accuracy. Image perception by a machine depends strongly on the propagation of high-frequency distortions through the network; a human, by contrast, efficiently ignores high-frequency distortions, perceiving the shape of objects as a whole. We propose a technique that reduces the influence of high-frequency noise on CNNs. We show that low-pass image filtering can improve image recognition accuracy in the presence of high-frequency distortions, in particular those caused by adversarial attacks. The technique is resource-efficient and easy to implement, and it brings the logic of an artificial neural network closer to that of a human, for whom high-frequency distortions are not decisive in object recognition.
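The core idea of the abstract can be sketched in a few lines: apply a low-pass filter to an image before it reaches the CNN, so that a high-frequency adversarial perturbation is attenuated. The following is a minimal illustrative sketch, not the authors' actual pipeline; the 3×3 box-blur kernel and the toy 4×4 "image" are assumptions chosen for brevity (the paper itself studies filtering ahead of full CNN classifiers).

```python
def box_blur(image, k=3):
    """Low-pass filter: replace each pixel with the mean of its
    k x k neighborhood (edges clamped). `image` is a list of rows."""
    h, w = len(image), len(image[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

# A flat image plus a single-pixel, high-frequency perturbation of the
# kind adversarial attacks typically introduce:
clean = [[1.0] * 4 for _ in range(4)]
attacked = [row[:] for row in clean]
attacked[1][2] += 0.5  # localized high-frequency spike

smoothed = box_blur(attacked)

# The blur spreads the spike's energy over the neighborhood, shrinking
# the worst-case per-pixel deviation from the clean image:
peak_before = max(abs(attacked[y][x] - clean[y][x])
                  for y in range(4) for x in range(4))
peak_after = max(abs(smoothed[y][x] - clean[y][x])
                 for y in range(4) for x in range(4))
print(peak_before, peak_after)  # prints: 0.5 0.125
```

In a real defense, the same filtering step would sit between the (possibly attacked) input and the classifier; a Gaussian kernel is the more common choice than a box filter, at the cost of some loss of fine detail that the network might otherwise use.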
Main Authors: Vadim Ziyadinov, Maxim Tereshonok
Format: Article
Language: English
Published: MDPI AG, 2023-11-01
Series: Sensors
Subjects: adversarial attacks; artificial neural networks; robustness; image filtering; convolutional neural networks; image recognition
Online Access: https://www.mdpi.com/1424-8220/23/22/9032
_version_ | 1827638855998636032 |
author | Vadim Ziyadinov; Maxim Tereshonok |
author_facet | Vadim Ziyadinov; Maxim Tereshonok |
author_sort | Vadim Ziyadinov |
collection | DOAJ |
description | In this paper, we continue our research cycle on the properties of convolutional neural network-based image recognition systems and on ways to improve their noise immunity and robustness. Adversarial attacks are currently a popular research area related to artificial neural networks. Adversarial perturbations of an image are barely perceptible to the human eye, yet they drastically reduce a neural network's accuracy. Image perception by a machine depends strongly on the propagation of high-frequency distortions through the network; a human, by contrast, efficiently ignores high-frequency distortions, perceiving the shape of objects as a whole. We propose a technique that reduces the influence of high-frequency noise on CNNs. We show that low-pass image filtering can improve image recognition accuracy in the presence of high-frequency distortions, in particular those caused by adversarial attacks. The technique is resource-efficient and easy to implement, and it brings the logic of an artificial neural network closer to that of a human, for whom high-frequency distortions are not decisive in object recognition. |
first_indexed | 2024-03-09T16:28:41Z |
format | Article |
id | doaj.art-ae0445d7f7fa45bc96ca4ac4177f1ede |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-09T16:28:41Z |
publishDate | 2023-11-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-ae0445d7f7fa45bc96ca4ac4177f1ede |
doi | 10.3390/s23229032 |
citation | Sensors, vol. 23, no. 22, art. 9032, 2023-11-01, MDPI AG, ISSN 1424-8220 |
affiliation | Vadim Ziyadinov; Maxim Tereshonok: Science and Research Department, Moscow Technical University of Communications and Informatics, 111024 Moscow, Russia |
title | Low-Pass Image Filtering to Achieve Adversarial Robustness |
topic | adversarial attacks; artificial neural networks; robustness; image filtering; convolutional neural networks; image recognition |
url | https://www.mdpi.com/1424-8220/23/22/9032 |