SpanBERT: Improving Pre-training by Representing and Predicting Spans

We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span,...
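To make the two ideas in the abstract concrete, below is a minimal, hedged sketch of (1) contiguous span masking and (2) a span boundary objective (SBO) head that predicts each masked token from the encodings of the two boundary tokens plus the target's position within the span. All hyperparameters (geometric span-length sampling, 15% masking budget, maximum span length 10) and all function/class names and layer sizes are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of SpanBERT-style span masking and a span boundary objective (SBO).
# Hyperparameters and names are assumptions for illustration only.
import random
import torch
import torch.nn as nn


def sample_masked_spans(seq_len, mask_budget=0.15, p=0.2, max_span=10):
    """Pick contiguous spans until roughly mask_budget of the tokens are masked."""
    num_to_mask = max(1, int(seq_len * mask_budget))
    masked = set()
    while len(masked) < num_to_mask:
        # Span length ~ Geometric(p), clipped to max_span (assumed setup).
        length = 1
        while random.random() > p and length < max_span:
            length += 1
        length = min(length, seq_len)
        start = random.randrange(0, seq_len - length + 1)
        masked.update(range(start, start + length))
    return sorted(masked)


class SpanBoundaryObjective(nn.Module):
    """Predicts each masked token from the two span-boundary encoder states
    plus a position embedding of the token's offset inside the span."""

    def __init__(self, hidden, vocab_size, max_span=10):
        super().__init__()
        self.pos_emb = nn.Embedding(max_span, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.GELU(), nn.LayerNorm(hidden),
            nn.Linear(hidden, hidden), nn.GELU(), nn.LayerNorm(hidden),
        )
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, left_boundary, right_boundary, offsets):
        # left_boundary, right_boundary: (num_targets, hidden) encoder states of
        # the tokens just outside the masked span; offsets: (num_targets,) ints.
        h = torch.cat([left_boundary, right_boundary, self.pos_emb(offsets)], dim=-1)
        return self.out(self.mlp(h))  # (num_targets, vocab_size) logits


if __name__ == "__main__":
    hidden, vocab = 32, 100
    print("masked positions:", sample_masked_spans(seq_len=50))
    head = SpanBoundaryObjective(hidden, vocab)
    logits = head(torch.randn(4, hidden), torch.randn(4, hidden),
                  torch.tensor([0, 1, 2, 3]))
    print("SBO logits shape:", tuple(logits.shape))
```

In this reading, the masked-language-model loss over individual tokens is complemented by the SBO loss, so the boundary representations are trained to summarize the whole span rather than relying on the representations of the tokens inside it.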


Bibliographic Details
Main Authors: Joshi, Mandar; Chen, Danqi; Liu, Yinhan; Weld, Daniel S.; Zettlemoyer, Luke; Levy, Omer
Format: Article
Language: English
Published: The MIT Press, 2020-07-01
Series: Transactions of the Association for Computational Linguistics
Online Access: https://www.mitpressjournals.org/doi/abs/10.1162/tacl_a_00300