Universal Adversarial Attack via Conditional Sampling for Text Classification

Although deep neural networks (DNNs) have achieved impressive performance in various domains, they have been shown to be vulnerable to adversarial examples: inputs maliciously crafted by adding human-imperceptible perturbations to an original sample in order to cause an incorrect output...


Bibliographic Details
Main Authors: Yu Zhang, Kun Shao, Junan Yang, Hui Liu
Format: Article
Language: English
Published: MDPI AG 2021-10-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/11/20/9539