Instance-Agnostic and Practical Clean Label Backdoor Attack Method for Deep Learning Based Face Recognition Models
Backdoor attacks, which induce a trained model to behave as intended by an adversary for specific inputs, have recently emerged as a serious security threat in deep learning-based classification models. In particular, because a backdoor attack is executed solely by incorporating a small quantity of...
| Main Authors: | Tae-Hoon Kim, Seok-Hwan Choi, Yoon-Ho Choi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2023-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10360130/ |
Similar Items
- Feature Importance-Based Backdoor Attack in NSL-KDD
  by: Jinhyeok Jang, et al.
  Published: (2023-12-01)
- Backdoor Pony: Evaluating backdoor attacks and defenses in different domains
  by: Arthur Mercier, et al.
  Published: (2023-05-01)
- Backdoor Attack on Deep Learning Models: A Survey
  by: YING Zonghao, WU Bin
  Published: (2023-03-01)
- A Backdoor Attack Against LSTM-Based Text Classification Systems
  by: Jiazhu Dai, et al.
  Published: (2019-01-01)
- Survey on Backdoor Attacks and Countermeasures in Deep Neural Network
  by: QIAN Hanwei, SUN Weisong
  Published: (2023-05-01)