Data-efficient domain adaptation for pretrained language models

Recent advances in Natural Language Processing (NLP) are built on a range of large-scale pretrained language models (PLMs), which are based on deep transformer neural networks. These PLMs simultaneously learn contextualized word representations and a language-modeling objective by training the entire model on m…
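The abstract breaks off mid-sentence, but the full-model training it describes is the standard fine-tuning recipe that domain-adaptation work on PLMs starts from. Below is a minimal sketch of adapting a pretrained transformer to a small target-domain dataset, assuming the Hugging Face transformers library and PyTorch; the checkpoint name, example texts, and labels are illustrative placeholders, not the method or data of the thesis itself.

```python
# Minimal sketch: adapt a pretrained transformer to a new domain by
# fine-tuning the entire model on a small labeled in-domain dataset.
# Assumes Hugging Face `transformers` and PyTorch are installed.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # placeholder: any transformer PLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical target-domain examples; real adaptation uses in-domain text.
texts = ["The patient reported mild chest pain.", "Dosage was increased to 50 mg."]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):  # few epochs, since labeled target-domain data is scarce
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()  # gradients flow through the entire PLM
    optimizer.step()
```

Because every parameter of the PLM is updated, this baseline is costly and data-hungry; data-efficient domain adaptation, the subject of the thesis, concerns doing better when such labeled in-domain data is limited.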


Bibliographic Details
Main Author: Guo, Xu
Other Authors: Yu Han
Format: Thesis (Doctor of Philosophy)
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/167965

Similar Items