Superbizarre is not superb: derivational morphology improves BERT’s interpretation of complex words

How does the input segmentation of pretrained language models (PLMs) affect their interpretations of complex words? We present the first study investigating this question, taking BERT as the example PLM and focusing on its semantic representations of English derivatives. We show that PLMs can be int...


Bibliographic details
Main authors: Hofmann, V, Pierrehumbert, JB, Schuetze, H
Format: Conference item
Language: English
Published: Association for Computational Linguistics, 2021