Measuring and Improving Consistency in Pretrained Language Models
Abstract: Consistency of a model—that is, the invariance of its behavior under meaning-preserving alternations in its input—is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect...
| Main Authors | Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg |
|---|---|
| Format | Article |
| Language | English |
| Published | The MIT Press, 2021-01-01 |
| Series | Transactions of the Association for Computational Linguistics |
| Online Access | https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00410/107384/Measuring-and-Improving-Consistency-in-Pretrained |
Similar Items

- Erratum: Measuring and Improving Consistency in Pretrained Language Models
  by: Yanai Elazar, et al.
  Published: (2022-01-01)
- Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals
  by: Yanai Elazar, et al.
  Published: (2021-01-01)
- Geographic Adaptation of Pretrained Language Models
  by: Valentin Hofmann, et al.
  Published: (2024-04-01)
- Explaining pretrained language models' understanding of linguistic structures using construction grammar
  by: Leonie Weissweiler, et al.
  Published: (2023-10-01)
- Geographic adaptation of pretrained language models
  by: Hofmann, V., et al.
  Published: (2024)