Adversarial attacks can deceive AI systems, leading to misclassification or incorrect decisions
This analysis examines adversarial attacks in artificial intelligence (AI), providing an overview of the methods used to compromise machine learning models. It explores attack techniques, ranging from the simple Fast Gradient Sign Metho...
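The Fast Gradient Sign Method named in the abstract perturbs an input by a small step in the direction of the sign of the loss gradient. A minimal sketch, using a hypothetical toy logistic-regression classifier (the weights, input, and epsilon below are illustrative assumptions, not from the publication):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step Fast Gradient Sign Method (FGSM) against a
    logistic-regression classifier: shift x by eps in the direction
    of the sign of the cross-entropy loss gradient w.r.t. x."""
    z = np.dot(w, x) + b              # logit of the clean input
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad_x = (p - y) * w              # d(cross-entropy)/dx for label y
    return x + eps * np.sign(grad_x)  # adversarial example

# Hypothetical toy classifier and input (illustrative values only).
w = np.array([2.0, -3.0])
b = 0.5
x = np.array([1.0, 1.0])              # clean input with true label 1
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.5)
```

Even this single gradient step lowers the model's confidence in the true label, which is the misclassification effect the abstract describes.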
Main authors: Radanliev, P; Santos, O
Format: Internet publication
Language: English
Published: 2023
Similar Items
- MA‐CAT: Misclassification‐Aware Contrastive Adversarial Training
  By: Hongxin Zhi, et al.
  Published: (2024-05-01)
- Frida Kahlo: Appearances Can Be Deceiving
  By: Berit Potter
  Published: (2021-06-01)
- Adversarial Defense on Harmony: Reverse Attack for Robust AI Models Against Adversarial Attacks
  By: Yebon Kim, et al.
  Published: (2024-01-01)
- Deceived /
  By: Barrett, Maria
  Published: (1994)
- The deceiver /
  By: Forsyth, Frederick, 1938-
  Published: (1992)