Data-efficient domain adaptation for pretrained language models
Recent advances in Natural Language Processing (NLP) are built on a range of large-scale pretrained language models (PLMs), which are based on deep transformer neural networks. These PLMs simultaneously learn contextualized word representations and language modeling by training the entire model on m...
Main Author: Guo, Xu
Other Authors: Yu Han
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/167965
Similar Items
- Extracting event knowledge from pretrained language models
  by: Ong, Claudia Beth
  Published: (2023)
- Language model domain adaptation for automatic speech recognition systems
  by: Khassanov, Yerbolat
  Published: (2020)
- Code problem similarity detection using code clones and pretrained models
  by: Yeo, Geremie Yun Siang
  Published: (2023)
- Language Modeling for limited-data domains
  by: Hsu, Bo-June (Bo-June Paul)
  Published: (2010)
- Geographic adaptation of pretrained language models
  by: Hofmann, V, et al.
  Published: (2024)