Textual Backdoor Defense via Poisoned Sample Recognition

Deep learning models are vulnerable to backdoor attacks. Existing textual backdoor attacks based on data poisoning achieve success rates as high as 100%. To enhance natural language processing models' defense against backdoor attacks, we propose a textual backdoor defense m...


Bibliographic Details
Main Authors: Kun Shao, Yu Zhang, Junan Yang, Hui Liu
Format: Article
Language: English
Published: MDPI AG 2021-10-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/11/21/9938

Similar Items