Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment

Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously cra...

Bibliographic Details
Main Authors: Jin, Di, Jin, Zhijing, Zhou, Joey Tianyi, Szolovits, Peter
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Association for the Advancement of Artificial Intelligence (AAAI) 2022
Online Access: https://hdl.handle.net/1721.1/143905