Textual Backdoor Defense via Poisoned Sample Recognition
Deep learning models are vulnerable to backdoor attacks. Existing research shows that textual backdoor attacks based on data poisoning can achieve success rates as high as 100%. To enhance natural language processing models' defense against backdoor attacks, we propose a textual backdoor defense m...
Main Authors: Kun Shao, Yu Zhang, Junan Yang, Hui Liu
Format: Article
Language: English
Published: MDPI AG, 2021-10-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/11/21/9938
Similar Items
- A Textual Backdoor Defense Method Based on Deep Feature Classification
  by: Kun Shao, et al.
  Published: (2023-01-01)
- Backdoor Pony: Evaluating backdoor attacks and defenses in different domains
  by: Arthur Mercier, et al.
  Published: (2023-05-01)
- A Comprehensive Survey on Backdoor Attacks and Their Defenses in Face Recognition Systems
  by: Quentin Le Roux, et al.
  Published: (2024-01-01)
- Defending Against Backdoor Attacks by Quarantine Training
  by: Chengxu Yu, et al.
  Published: (2024-01-01)
- Survey on Backdoor Attacks and Countermeasures in Deep Neural Network
  by: QIAN Hanwei, SUN Weisong
  Published: (2023-05-01)