Superbizarre is not superb: derivational morphology improves BERT’s interpretation of complex words

How does the input segmentation of pretrained language models (PLMs) affect their interpretations of complex words? We present the first study investigating this question, taking BERT as the example PLM and focusing on its semantic representations of English derivatives. We show that PLMs can be interpreted as serial dual-route models, i.e., the meanings of complex words are either stored or else need to be computed from the subwords, which implies that maximally meaningful input tokens should allow for the best generalization on new words. This hypothesis is confirmed by a series of semantic probing tasks, on which DelBERT (Derivation leveraging BERT), a model with derivational input segmentation, substantially outperforms BERT with WordPiece segmentation. Our results suggest that the generalization capabilities of PLMs could be further improved if a morphologically informed vocabulary of input tokens were used.
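The contrast between WordPiece and derivational segmentation can be illustrated with a short sketch. The snippet below is a hypothetical illustration, assuming the Hugging Face transformers library and the standard bert-base-uncased vocabulary; it is not the authors' DelBERT code.

    # A minimal sketch, assuming the Hugging Face `transformers` library and
    # the `bert-base-uncased` vocabulary; illustrates the segmentation
    # contrast discussed in the abstract, not the authors' DelBERT code.
    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

    # WordPiece chooses subwords by corpus frequency, so a derivative such as
    # "superbizarre" need not be split at its morpheme boundary; the exact
    # pieces depend on the vocabulary.
    print(tokenizer.tokenize("superbizarre"))

    # A derivational segmentation of the kind the paper studies would instead
    # use the prefix and the base as maximally meaningful input tokens:
    print(["super", "bizarre"])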

Bibliographic Details
Main Authors: Hofmann, V, Pierrehumbert, JB, Schuetze, H
Format: Conference item
Language: English
Published: Association for Computational Linguistics 2021