Dual-Targeted Textfooler Attack on Text Classification Systems
Deep neural networks perform well on classification tasks such as image, audio, and text classification. However, such networks are vulnerable to adversarial examples. An adversarial example is a sample created by adding small adversarial noise to an original data sample...
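To make the idea in the abstract concrete, the following is a minimal sketch of a word-substitution (TextFooler-style) attack on a toy text classifier. The bag-of-words classifier, the hand-written synonym table, and the greedy search below are illustrative assumptions for this sketch, not the paper's implementation; the names `classify`, `SYNONYMS`, and `attack` are hypothetical.

```python
# Minimal sketch of a TextFooler-style word-substitution attack.
# Everything here (classifier, synonym table, greedy search) is an
# illustrative assumption, not the paper's method.

def classify(text: str) -> str:
    """Toy sentiment classifier: counts positive vs. negative cue words."""
    positive = {"good", "great", "excellent", "enjoyable"}
    negative = {"bad", "poor", "terrible", "boring"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return "positive" if score >= 0 else "negative"

# Hypothetical synonym table; real attacks typically draw candidates from
# counter-fitted word embeddings or a thesaurus.
SYNONYMS = {
    "terrible": ["dreadful", "awful"],
    "boring": ["dull", "tedious"],
    "bad": ["subpar", "unpleasant"],
}

def attack(text: str):
    """Greedily swap words for synonyms until the predicted label flips."""
    original_label = classify(text)
    words = text.split()
    for i in range(len(words)):
        for synonym in SYNONYMS.get(words[i].lower(), []):
            trial = words[:i] + [synonym] + words[i + 1:]
            if classify(" ".join(trial)) != original_label:
                return " ".join(trial)  # adversarial example found
            words = trial  # keep the swap and keep searching
    return None  # label never flipped under this synonym budget

print(classify("the plot was terrible and boring"))  # -> negative
print(attack("the plot was terrible and boring"))    # -> "the plot was awful and dull"
```

The greedy search keeps each synonym swap even when it does not flip the label, accumulating small perturbations until the prediction changes while the sentence stays semantically close to the original.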
Main Author: | Hyun Kwon |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Online Access: | https://ieeexplore.ieee.org/document/9580824/ |
Similar Items
- Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example
  by: Hyun Kwon, et al.
  Published: (2019-01-01)
- AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack
  by: Hyun Kwon, et al.
  Published: (2024-01-01)
- Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
  by: Hyun Kwon, et al.
  Published: (2021-03-01)
- Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes
  by: Hyun Kwon, et al.
  Published: (2019-01-01)
- Text Adversarial Examples Generation and Defense Based on Reinforcement Learning
  by: Yue Li, et al.
  Published: (2021-01-01)