Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
Machine learning algorithms are often vulnerable to adversarial examples, which carry imperceptible alterations from their original counterparts yet can fool state-of-the-art models. It is helpful to evaluate, or even improve, the robustness of these models by exposing the maliciously crafted...
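As an illustration of the kind of attack the abstract describes (not necessarily this paper's exact method), below is a minimal Python sketch of a greedy synonym-substitution attack on a black-box text classifier. The names `classify` and `get_synonyms` are hypothetical placeholders: `classify` stands in for a model returning label probabilities, and `get_synonyms` for a synonym source such as nearest neighbours in a word-embedding space.

```python
# Minimal sketch of a greedy synonym-substitution attack on a text
# classifier. `classify` and `get_synonyms` are hypothetical placeholders,
# not part of any real library.
from typing import Callable, Dict, List


def attack(text: str,
           classify: Callable[[str], Dict[str, float]],
           get_synonyms: Callable[[str], List[str]]) -> str:
    words = text.split()
    orig_probs = classify(text)
    orig_label = max(orig_probs, key=orig_probs.get)

    # Rank each word's importance as the drop in the original label's
    # probability when that word is deleted.
    def importance(i: int) -> float:
        reduced = " ".join(words[:i] + words[i + 1:])
        return orig_probs[orig_label] - classify(reduced)[orig_label]

    order = sorted(range(len(words)), key=importance, reverse=True)

    # Greedily substitute synonyms for the most important words until the
    # predicted label flips.
    for i in order:
        best_word, best_drop = words[i], 0.0
        for candidate in get_synonyms(words[i]):
            trial = words.copy()
            trial[i] = candidate
            probs = classify(" ".join(trial))
            if max(probs, key=probs.get) != orig_label:
                words[i] = candidate
                return " ".join(words)  # attack succeeded
            drop = orig_probs[orig_label] - probs[orig_label]
            if drop > best_drop:
                best_word, best_drop = candidate, drop
        words[i] = best_word  # keep the most damaging substitution so far
    return " ".join(words)  # label never flipped; return perturbed text
```

Ranking words by the deletion-induced probability drop keeps the number of model queries linear in sentence length before the substitution search begins, which is why greedy schemes like this are practical against black-box models.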
Main Authors: Jin, Di; Jin, Zhijing; Zhou, Joey Tianyi; Szolovits, Peter
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Association for the Advancement of Artificial Intelligence (AAAI), 2022
Online Access: https://hdl.handle.net/1721.1/143905
Similar Items
- Hooks in the Headline: Learning to Generate Headlines with Controlled Styles
  by: Jin, Di, et al.
  Published: (2022)
- On Actuality Entailments, Causation, and Telicity in Balkar
  by: Privoznov, Dmitry
  Published: (2023)
- Collaborative Learning in Tertiary Education Classrooms: What Does It Entail?
  by: Awang-Hashim, Rosna, et al.
  Published: (2023)
- Hierarchical neural networks for sequential sentence classification in medical scientific abstracts
  by: Jin, Di, et al.
  Published: (2021)
- Stubborn: A Strong Baseline for the Indoor Object Navigation Task
  by: Luo, Haokuan
  Published: (2022)