Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems
Adversarial machine learning (AML) is a class of data manipulation techniques that cause alterations in the behavior of artificial intelligence (AI) systems while going unnoticed by humans. These alterations can cause serious vulnerabilities in mission-critical AI-enabled applications. This work introduces an AI architecture augmented with adversarial examples and defense algorithms to safeguard, secure, and make AI systems more reliable.
Main Authors: | Theodora Anastasiou, Sophia Karagiorgou, Petros Petrou, Dimitrios Papamartzivanos, Thanassis Giannetsos, Georgia Tsirigotaki, Jelle Keizer |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-09-01 |
Series: | Sensors |
Subjects: | adversarial machine learning; adversarial training; AI security |
Online Access: | https://www.mdpi.com/1424-8220/22/18/6905 |
_version_ | 1797482540825575424 |
author | Theodora Anastasiou; Sophia Karagiorgou; Petros Petrou; Dimitrios Papamartzivanos; Thanassis Giannetsos; Georgia Tsirigotaki; Jelle Keizer |
author_facet | Theodora Anastasiou; Sophia Karagiorgou; Petros Petrou; Dimitrios Papamartzivanos; Thanassis Giannetsos; Georgia Tsirigotaki; Jelle Keizer |
author_sort | Theodora Anastasiou |
collection | DOAJ |
description | Adversarial machine learning (AML) is a class of data manipulation techniques that cause alterations in the behavior of artificial intelligence (AI) systems while going unnoticed by humans. These alterations can cause serious vulnerabilities in mission-critical AI-enabled applications. This work introduces an AI architecture augmented with adversarial examples and defense algorithms to safeguard, secure, and make AI systems more reliable. This is achieved by robustifying deep neural network (DNN) classifiers, focusing explicitly on convolutional neural networks (CNNs) used in non-trivial manufacturing environments prone to noise, vibrations, and errors when capturing and transferring data. The proposed architecture enables the imitation of the interplay between an attacker and a defender based on the deployment and cross-evaluation of adversarial and defense strategies. The AI architecture enables (i) the creation and usage of <i>adversarial examples</i> in the training process, which robustifies the accuracy of CNNs, (ii) the evaluation of <i>defense algorithms</i> to recover the classifiers’ accuracy, and (iii) the provision of a <i>multiclass discriminator</i> to distinguish and report on non-attacked and attacked data. The experiments show promising results for a hybrid solution that combines the defense algorithms and the multiclass discriminator to revitalize the attacked base models and robustify the DNN classifiers. The proposed architecture is validated in a real manufacturing environment using datasets stemming from the actual production lines. |
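The adversarial examples mentioned in the abstract are typically crafted with gradient-based perturbations such as the Fast Gradient Sign Method (FGSM), which nudges each input feature by a small amount ε in the direction that increases the classifier's loss. The sketch below illustrates the general technique against a toy logistic-regression "classifier"; it is not the paper's actual CNN pipeline, and the function name, toy weights, and ε value are hypothetical.

```python
import numpy as np

def fgsm_example(x, w, b, y, eps):
    """Craft an adversarial example via FGSM:
    x_adv = clip(x + eps * sign(dL/dx)),
    where L is the binary cross-entropy loss of a
    logistic-regression model with weights w, bias b."""
    z = np.dot(w, x) + b                 # logit
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid probability
    grad_x = (p - y) * w                 # d(BCE loss)/dx
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy three-"pixel" input in [0, 1]; true label y = 1.
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.6, 0.2, 0.4])
x_adv = fgsm_example(x, w, b=0.0, y=1.0, eps=0.1)
```

Each component of `x_adv` differs from `x` by at most ε, so the perturbation stays visually imperceptible while moving the input toward the decision boundary; adversarial training, as described in the abstract, mixes such examples into the training set.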
first_indexed | 2024-03-09T22:33:52Z |
format | Article |
id | doaj.art-6ef93e96a5804c1d9161521e5b33824f |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-09T22:33:52Z |
publishDate | 2022-09-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-6ef93e96a5804c1d9161521e5b33824f; 2023-11-23T18:51:14Z; eng; MDPI AG; Sensors; 1424-8220; 2022-09-01; vol. 22, no. 18, art. 6905; doi:10.3390/s22186905; Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems; Theodora Anastasiou, Sophia Karagiorgou, Petros Petrou, Dimitrios Papamartzivanos, Thanassis Giannetsos (UBITECH Ltd., Thessalias 8 and Etolias 10, GR-15231 Chalandri, Greece); Georgia Tsirigotaki (Hellenic Army Information Technology Support Center, 227-231, Mesogeion Ave., GR-15451 Holargos, Greece); Jelle Keizer (Philips, Oliemolenstraat 5, 9203 ZN Drachten, The Netherlands); abstract identical to the description field; https://www.mdpi.com/1424-8220/22/18/6905; adversarial machine learning; adversarial training; AI security |
title | Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems |
topic | adversarial machine learning adversarial training AI security |
url | https://www.mdpi.com/1424-8220/22/18/6905 |