A Framework for Robust Deep Learning Models Against Adversarial Attacks Based on a Protection Layer Approach
Deep learning (DL) has demonstrated remarkable achievements in various fields. Nevertheless, DL models encounter significant challenges in detecting and defending against adversarial examples (AEs). These AEs are meticulously crafted by adversaries, introducing imperceptible perturbations to clean data...
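As a point of reference for the perturbations described in the abstract, the sketch below shows a generic Fast Gradient Sign Method (FGSM) attack in PyTorch; it is a minimal illustration of how an AE can be crafted, not the protection-layer defence proposed in the article, and the `model`, `loss_fn`, `x`, `y`, and `epsilon` names are assumptions made for the example.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    # Craft an adversarial example with FGSM:
    # x_adv = x + epsilon * sign(grad_x L(model(x), y))
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)            # loss on the clean input
    loss.backward()                        # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()    # small, imperceptible step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in the valid range
```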
| Main Authors: | Mohammed Nasser Al-Andoli, Shing Chiang Tan, Kok Swee Sim, Pey Yun Goh, Chee Peng Lim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10400453/ |
Similar Items
- Maxwell’s Demon in MLP-Mixer: towards transferable adversarial attacks
  by: Haoran Lyu, et al.
  Published: (2024-03-01)
- Adversarial attacks and defenses in deep learning
  by: Ximeng LIU, et al.
  Published: (2020-10-01)
- Adversarial Attacks to Manipulate Target Localization of Object Detector
  by: Kai Xu, et al.
  Published: (2024-01-01)
- On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification
  by: Sanglee Park, et al.
  Published: (2020-11-01)
- A Hybrid Adversarial Attack for Different Application Scenarios
  by: Xiaohu Du, et al.
  Published: (2020-05-01)