Adversarial attacks and defenses in natural language processing

Deep neural networks (DNNs) are becoming increasingly successful in many fields. However, DNNs have been shown to be strikingly susceptible to adversarial examples. For instance, models pre-trained on very large corpora can still be easily fooled by word-substitution attacks that use only synonyms. This ph...
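
As a rough illustration of the kind of word-substitution attack the abstract refers to, the sketch below greedily swaps words for WordNet synonyms, keeping the swap that most reduces the classifier's confidence in the true label. It is a minimal sketch under stated assumptions, not the thesis's method: predict_proba (a function mapping a token list to a dict of label probabilities), the swap budget, and the greedy search strategy are all illustrative choices.

    # Hedged sketch of a greedy synonym-substitution attack.
    # Assumes: nltk is installed and the WordNet corpus has been
    # downloaded via nltk.download("wordnet"); predict_proba is a
    # hypothetical classifier interface, not part of the thesis.
    from nltk.corpus import wordnet as wn

    def wordnet_synonyms(word):
        """Collect single-word WordNet synonyms as a crude candidate set."""
        synonyms = set()
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                candidate = lemma.name().replace("_", " ")
                if candidate.lower() != word.lower() and " " not in candidate:
                    synonyms.add(candidate)
        return synonyms

    def greedy_substitution_attack(tokens, true_label, predict_proba, max_swaps=3):
        """Greedily replace words with synonyms to lower the true-label probability.

        predict_proba(tokens) -> {label: probability} (assumed interface).
        Stops once the predicted label flips or the swap budget runs out.
        """
        tokens = list(tokens)
        for _ in range(max_swaps):
            base = predict_proba(tokens)[true_label]
            best_drop, best_edit = 0.0, None
            for i, word in enumerate(tokens):
                for synonym in wordnet_synonyms(word):
                    trial = tokens[:i] + [synonym] + tokens[i + 1:]
                    drop = base - predict_proba(trial)[true_label]
                    if drop > best_drop:
                        best_drop, best_edit = drop, (i, synonym)
            if best_edit is None:
                break  # no synonym lowers the true-label probability
            i, synonym = best_edit
            tokens[i] = synonym
            probs = predict_proba(tokens)
            if max(probs, key=probs.get) != true_label:
                break  # prediction flipped: adversarial example found
        return tokens

Because each candidate swap calls the classifier, this brute-force search is expensive; published attacks typically rank word importance first and constrain candidates with embedding or language-model checks to keep the substitutions fluent.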

Bibliographic details
Main author: Dong, Xinshuai
Other authors: Luu Anh Tuan
Format: Thesis (Master by Research)
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online access: https://hdl.handle.net/10356/159029