Detecting and Isolating Adversarial Attacks Using Characteristics of the Surrogate Model Framework
The paper introduces a novel framework for detecting adversarial attacks on machine learning models that classify tabular data. It provides a robust method for monitoring and continuously auditing such models in order to detect malicious data alterations. The...
| Main Authors: | Piotr Biczyk, Łukasz Wawrowski |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-08-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/13/17/9698 |
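The record above describes the framework only at a high level: a surrogate model is used alongside a deployed classifier so that maliciously altered inputs can be detected during monitoring. As a minimal sketch of the general idea (not the paper's exact algorithm; the models, features, and thresholds below are hypothetical), one can flag tabular records on which the deployed model and an independently trained surrogate disagree, since adversarial perturbations crafted against one model often fail to transfer to the other:

```python
# Illustrative sketch only: detect possibly adversarial tabular rows by
# comparing a deployed model's prediction against a surrogate's prediction.
# Both "models" here are hypothetical hand-written rules standing in for
# trained classifiers.

def main_model(row):
    # Hypothetical deployed classifier scoring a tabular row (dict of features).
    return 1 if 0.6 * row["income"] + 0.4 * row["tenure"] > 50 else 0

def surrogate_model(row):
    # Hypothetical surrogate that mimics main_model on clean data but has a
    # simpler decision boundary, so crafted perturbations transfer poorly.
    return 1 if row["income"] > 55 else 0

def flag_suspicious(rows):
    """Return indices of rows where the two models disagree."""
    return [i for i, r in enumerate(rows) if main_model(r) != surrogate_model(r)]

batch = [
    {"income": 80, "tenure": 40},   # clean: both models predict 1
    {"income": 20, "tenure": 10},   # clean: both models predict 0
    {"income": 30, "tenure": 120},  # "tenure" inflated to flip only main_model
]
print(flag_suspicious(batch))  # -> [2]
```

Flagged rows would then be routed to isolation or manual review rather than acted on, which matches the monitoring-and-auditing purpose stated in the abstract.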
Similar Items
- Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology
  by: Agnieszka M. Zbrzezny, et al.
  Published: (2023-05-01)
- Expectation management in AI: A framework for understanding stakeholder trust and acceptance of artificial intelligence systems
  by: Marjorie Kinney, et al.
  Published: (2024-04-01)
- Exploiting device-level non-idealities for adversarial attacks on ReRAM-based neural networks
  by: Tyler McLemore, et al.
  Published: (2023-07-01)
- Editorial: Explainable, Trustworthy, and Responsible AI for the Financial Service Industry
  by: Branka Hadji Misheva, et al.
  Published: (2022-05-01)
- Artificial Intelligence for Predictive Maintenance Applications: Key Components, Trustworthiness, and Future Trends
  by: Aysegul Ucar, et al.
  Published: (2024-01-01)