SentiBERT: Pre-training Language Model Combining Sentiment Information

Pre-training language models on large-scale unsupervised corpora has been attracting the attention of researchers in the field of natural language processing. Existing models mainly extract the semantic and structural features of text in the pre-training stage. Aiming at sentiment tasks and complex...

Bibliographic Details
Main Authors: YANG Chen, SONG Xiaoning, SONG Wei
Format: Article
Language: Chinese
Published: Journal of Computer Engineering and Applications Beijing Co., Ltd., Science Press 2020-09-01
Series:Jisuanji kexue yu tansuo
Online Access: http://fcst.ceaj.org/CN/abstract/abstract2361.shtml