What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models
Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about information used by language models for generating predictions in context.
| Main Author: | Allyson Ettinger |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | The MIT Press, 2020-07-01 |
| Series: | Transactions of the Association for Computational Linguistics |
| Online Access: | https://www.mitpressjournals.org/doi/abs/10.1162/tacl_a_00298 |