Dual-Targeted Textfooler Attack on Text Classification Systems
Deep neural networks perform well on classification tasks such as image, audio, and text classification. However, such networks are vulnerable to adversarial examples. An adversarial example is a sample created by adding small adversarial noise to an original data samp...
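As a rough illustration of the word-substitution idea behind TextFooler-style attacks, the sketch below greedily swaps words for synonyms until a classifier's predicted label flips. The classifier, synonym table, and example sentence are hypothetical toy stand-ins, not the models or data used in the article.

```python
def classify(words):
    """Toy classifier: 'positive' iff positive cue words outnumber negative ones."""
    pos = {"good", "great", "fine"}
    neg = {"bad", "poor", "awful"}
    score = sum(w in pos for w in words) - sum(w in neg for w in words)
    return "positive" if score > 0 else "negative"

# Hypothetical synonym table; real attacks use embedding-based nearest neighbors.
SYNONYMS = {
    "good": ["decent"],
    "great": ["notable"],
}

def attack(sentence):
    """Greedily substitute synonyms until the predicted label changes."""
    words = sentence.split()
    original = classify(words)
    for i, w in enumerate(list(words)):
        for s in SYNONYMS.get(w, []):
            candidate = words[:i] + [s] + words[i + 1:]
            if classify(candidate) != original:
                return " ".join(candidate)  # label flipped: adversarial text found
            words = candidate  # keep the swap and keep searching
    return " ".join(words)

adv = attack("the movie was good")
```

Here swapping "good" for "decent" removes the only positive cue, so the toy label flips from "positive" to "negative" while the sentence stays semantically close to the original.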
Main Author: | Hyun Kwon |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Online Access: | https://ieeexplore.ieee.org/document/9580824/ |
Similar Items
- AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack
  by: Hyun Kwon, et al.
  Published: (2024-01-01)
- Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
  by: Hyun Kwon, et al.
  Published: (2021-03-01)
- Text Adversarial Examples Generation and Defense Based on Reinforcement Learning
  by: Yue Li, et al.
  Published: (2021-01-01)
- Class Discriminative Universal Adversarial Attack for Text Classification
  by: HAO Zhi-rong, CHEN Long, HUANG Jia-cheng
  Published: (2022-08-01)
- A Hybrid Adversarial Attack for Different Application Scenarios
  by: Xiaohu Du, et al.
  Published: (2020-05-01)