Adversarial attacks and defenses in natural language processing

Deep neural networks (DNNs) are increasingly successful in many fields. However, DNNs have been shown to be strikingly susceptible to adversarial examples. For instance, models pre-trained on very large corpora can still be easily fooled by word-substitution attacks that use only synonyms. This ph...
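To make the abstract's point concrete, here is a toy illustration of a synonym-substitution attack. The classifier, synonym table, and sentences are all hypothetical, invented for this sketch; this is not the thesis's method or data. The idea shown is only that replacing a word with a meaning-preserving synonym can flip a brittle model's prediction.

```python
# Toy illustration of a synonym-substitution attack.
# The keyword classifier and synonym pair below are assumptions for the demo,
# not the models or attacks studied in the thesis.

POSITIVE_WORDS = {"good", "great", "excellent"}

def toy_classifier(sentence: str) -> str:
    """Label a sentence 'positive' if it contains a known positive keyword."""
    tokens = sentence.lower().split()
    return "positive" if any(t in POSITIVE_WORDS for t in tokens) else "negative"

# Assumed synonym pair: "good" -> "decent" (same meaning to a human reader).
SYNONYMS = {"good": "decent"}

def substitute(sentence: str) -> str:
    """Replace each word with a synonym where one is available."""
    return " ".join(SYNONYMS.get(t, t) for t in sentence.lower().split())

original = "the movie was good"
adversarial = substitute(original)

print(toy_classifier(original))     # positive
print(toy_classifier(adversarial))  # negative: same meaning, flipped label
```

A real attack would search over synonyms from a lexical resource and query an actual trained model, but the failure mode is the same: the label changes while the meaning does not.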

Bibliographic Details
Main Author: Dong, Xinshuai
Other Authors: Luu Anh Tuan
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/159029