Adversarial attacks and defenses in natural language processing
Deep neural networks (DNNs) are becoming increasingly successful in many fields. However, DNNs have been shown to be strikingly susceptible to adversarial examples. For instance, models pre-trained on very large corpora can still be easily fooled by word-substitution attacks using only synonyms. This ph...
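The word-substitution attack mentioned in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the lexicon-counting "classifier", the hand-written synonym table, and the greedy search are toy stand-ins, not the thesis's method; real attacks query an actual model and draw candidates from resources such as WordNet or counter-fitted embeddings.

```python
# Toy sketch of a synonym word-substitution attack (hypothetical model
# and synonym table; illustrative only, not the thesis's algorithm).

# Toy sentiment "model": scores a sentence by counting lexicon hits.
POSITIVE = {"good", "great", "fine"}
NEGATIVE = {"bad", "poor", "awful"}

def classify(words):
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative"

# Hand-written synonym table (assumption: real attacks use WordNet or
# embedding neighbours instead).
SYNONYMS = {"good": ["fine", "solid"], "movie": ["film"]}

def substitution_attack(words):
    """Greedily try synonym swaps until the predicted label flips."""
    original = classify(words)
    words = list(words)
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w, []):
            trial = words[:i] + [syn] + words[i + 1:]
            if classify(trial) != original:
                return trial  # adversarial example: meaning kept, label flipped
    return None  # no single-word substitution flips this input
```

Here `substitution_attack(["a", "good", "movie"])` returns `["a", "solid", "movie"]`: "solid" preserves the sentiment for a human reader, but the toy model has never seen it and flips its prediction, which is the brittleness the abstract describes.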
Main Author:
Other Authors:
Format: Thesis (Master by Research)
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/159029