Measuring and Improving Consistency in Pretrained Language Models
Abstract

Consistency of a model—that is, the invariance of its behavior under meaning-preserving alternations in its input—is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect...
| Main authors: | |
|---|---|
| Material type: | Article |
| Language: | English |
| Published: | The MIT Press, 2021-01-01 |
| Series: | Transactions of the Association for Computational Linguistics |
| Links: | https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00410/107384/Measuring-and-Improving-Consistency-in-Pretrained |