Adversarial attacks can deceive AI systems, leading to misclassification or incorrect decisions
This comprehensive analysis examines adversarial attacks in artificial intelligence (AI), providing a detailed overview of the methods used to compromise machine learning models. It explores attack techniques ranging from the simple Fast Gradient Sign Method (FGSM) to the more intricate Carlini and Wagner (C&W) attack, emphasising the breadth of adversarial approaches and their intended goals. The discussion distinguishes between targeted and non-targeted attacks, highlighting the adaptability and versatility of these malicious efforts, and delves into black-box attacks, which can compromise models even when the attacker has only limited knowledge of them. Real-life examples illustrate the tangible consequences and potential dangers of adversarial attacks in fields such as self-driving cars, multimedia, and voice assistants, and underline the difficulty of ensuring the legitimacy and dependability of AI-powered technologies and applications. The article stresses the importance of ongoing research and innovation in addressing the growing challenges posed by advanced methods such as deepfakes and disguised voice commands, and in preserving the security of AI systems. The study offers insights into how different adversarial strategies and defence mechanisms interact within AI, and the results emphasise the urgent need for stronger, more secure AI models to counter the increasing number of adversarial threats in today's AI landscape. These findings can guide future research and innovation towards more resilient AI technologies that better withstand adversarial vulnerabilities and challenges.
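The description names the Fast Gradient Sign Method (FGSM) as the simplest of the attacks surveyed. As a rough illustration only (the article itself provides no code; the model, epsilon value, and variable names below are placeholder assumptions), a minimal non-targeted FGSM step in PyTorch looks like this:

```python
# Minimal sketch of a non-targeted FGSM perturbation, assuming a PyTorch
# image classifier with inputs scaled to [0, 1]. All names here are
# illustrative; nothing below is taken from the article.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    """Return x perturbed by eps * sign(grad_x loss) -- the core FGSM step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Non-targeted: step *up* the loss gradient to push the input away from
    # its true label. A targeted variant would instead step *down* the
    # gradient of a loss computed against an attacker-chosen label.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

if __name__ == "__main__":
    # Stand-in classifier and input purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # placeholder 28x28 grayscale "image"
    y = torch.tensor([3])          # placeholder true label
    x_adv = fgsm_attack(model, x, y, eps=0.1)
    print("clean prediction:", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Black-box variants, also discussed in the description, cannot compute this gradient directly and instead estimate it through repeated queries or transfer perturbations crafted on a substitute model.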
Main Authors: | Radanliev, P; Santos, O |
---|---|
Format: | Internet publication |
Language: | English |
Published: | 2023 |
Collection: | OXFORD |
Record ID: | oxford-uuid:88d38b80-fddb-46c7-b98c-63eb27678985 |
Institution: | University of Oxford |