Let’s Play Mono-Poly: BERT Can Reveal Words’ Polysemy Level and Partitionability into Senses

Pre-trained language models (LMs) encode rich information about linguistic structure, but their knowledge about lexical polysemy remains unclear. We propose a novel experimental setup for analyzing this knowledge in LMs specifically trained for different languages (English, French, Sp...

Bibliographic Details
Main Authors: Aina Garí Soler, Marianna Apidianaki
Format: Article
Language: English
Published: The MIT Press, 2021-01-01
Series: Transactions of the Association for Computational Linguistics
Online Access: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00400/106797/Let-s-Play-Mono-Poly-BERT-Can-Reveal-Words