Measuring and Improving Consistency in Pretrained Language Models

Abstract: Consistency of a model—that is, the invariance of its behavior under meaning-preserving alternations in its input—is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect...

Bibliographic Details
Main Authors: Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg
Format: Document
Language: English
Published: The MIT Press, 2021-01-01
Series: Transactions of the Association for Computational Linguistics
Online Access: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00410/107384/Measuring-and-Improving-Consistency-in-Pretrained