Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted…
Main Authors:
Other Authors:
Format: Article
Language: English
Published: Association for the Advancement of Artificial Intelligence (AAAI), 2022
Online Access: https://hdl.handle.net/1721.1/143905