MoCoUTRL: a momentum contrastive framework for unsupervised text representation learning

This paper presents MoCoUTRL, a Momentum Contrastive Framework for Unsupervised Text Representation Learning. The model improves two aspects of recently popular contrastive learning algorithms in natural language processing (NLP). Firstly, MoCoUTRL employs multi-granularity semantic contrastive lea...
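The record above only exposes a truncated abstract, so the paper's exact training procedure is not shown here. As a rough orientation, the sketch below illustrates the general momentum-contrastive (MoCo-style) mechanism that frameworks of this kind build on: a query encoder trained by backpropagation, a key encoder updated as an exponential moving average of the query encoder, a queue of past keys used as negatives, and an InfoNCE loss. It is a minimal illustration, not MoCoUTRL's implementation; the encoder arguments, queue size, momentum, and temperature values are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MomentumContrastText(nn.Module):
    """Generic MoCo-style contrastive setup for sentence embeddings.

    Illustrative only: `encoder_q` / `encoder_k` stand in for any text
    encoder mapping a batch of inputs to fixed-size vectors of size `dim`.
    """

    def __init__(self, encoder_q, encoder_k, dim=256, queue_size=4096,
                 momentum=0.999, temperature=0.07):
        super().__init__()
        self.encoder_q = encoder_q          # query encoder (trained by backprop)
        self.encoder_k = encoder_k          # key encoder (momentum-updated copy)
        self.m = momentum
        self.t = temperature

        # The key encoder starts as a copy of the query encoder and
        # receives no gradient updates.
        self.encoder_k.load_state_dict(self.encoder_q.state_dict())
        for p in self.encoder_k.parameters():
            p.requires_grad = False

        # Fixed-size queue of past key embeddings, used as negatives.
        self.register_buffer("queue", F.normalize(torch.randn(dim, queue_size), dim=0))
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        # theta_k <- m * theta_k + (1 - m) * theta_q
        for pq, pk in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            pk.data.mul_(self.m).add_(pq.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def _enqueue(self, keys):
        bs = keys.shape[0]
        ptr = int(self.queue_ptr)
        self.queue[:, ptr:ptr + bs] = keys.T        # assumes queue_size % bs == 0
        self.queue_ptr[0] = (ptr + bs) % self.queue.shape[1]

    def forward(self, x_q, x_k):
        # x_q, x_k: two views (e.g. augmentations) of the same batch of texts.
        q = F.normalize(self.encoder_q(x_q), dim=1)
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(x_k), dim=1)

        # InfoNCE logits: one positive per query plus queued negatives.
        l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)
        l_neg = torch.einsum("nc,ck->nk", q, self.queue.clone().detach())
        logits = torch.cat([l_pos, l_neg], dim=1) / self.t
        labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)

        self._enqueue(k)
        return F.cross_entropy(logits, labels)
```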


Bibliographic Details
Main Authors: Ao Zou, Wenning Hao, Dawei Jin, Gang Chen, Feiyan Sun
Format: Article
Language: English
Published: Taylor & Francis Group, 2023-12-01
Series: Connection Science
Online Access: http://dx.doi.org/10.1080/09540091.2023.2221406