Anomalous Example Detection in Deep Learning: A Survey
Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples, which result in incorrect outputs. To make DL more robust, several post-hoc (or runtime) anomaly detection techniques that detect (and discard) these anomalous samples have been proposed in recent years. This survey tries...
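To illustrate what a post-hoc (runtime) detector of the kind this survey covers looks like, below is a minimal sketch of maximum-softmax-probability thresholding, a common baseline in this literature. The threshold value and the example logits are illustrative assumptions, not details taken from the survey itself; in practice the threshold is calibrated on held-out in-distribution data.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def is_anomalous(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Post-hoc anomaly flag: a low maximum softmax probability
    suggests the input may be out-of-distribution or adversarial,
    so the flagged sample can be discarded rather than acted upon.
    The 0.5 threshold is a hypothetical value for illustration."""
    confidence = softmax(logits).max(axis=-1)
    return confidence < threshold

# Usage: a confident prediction passes, a diffuse one is flagged.
logits = np.array([[8.0, 0.5, 0.2],    # confident  -> keep
                   [0.9, 1.0, 1.1]])   # low margin -> discard
print(is_anomalous(logits))  # [False  True]
```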
| Main Authors: | Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2020-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9144212/ |
Similar Items
- Deep neural rejection against adversarial examples
  by: Angelo Sotgiu, et al.
  Published: (2020-04-01)
- A Universal Detection Method for Adversarial Examples and Fake Images
  by: Jiewei Lai, et al.
  Published: (2022-04-01)
- A Framework for Robust Deep Learning Models Against Adversarial Attacks Based on a Protection Layer Approach
  by: Mohammed Nasser Al-Andoli, et al.
  Published: (2024-01-01)
- ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks Against Adversarial Examples
  by: Seok-Hwan Choi, et al.
  Published: (2022-01-01)
- Editorial: Safe and Trustworthy Machine Learning
  by: Bhavya Kailkhura, et al.
  Published: (2021-08-01)