A Systematic Review of Transformer-Based Pre-Trained Language Models through Self-Supervised Learning
Transfer learning is a technique used in deep learning applications to transfer knowledge learned in one domain to a different target domain. The approach mainly addresses the problem of limited training data, which causes model overfitting and degrades model performance. The study was carried out on pub...
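As a rough illustration of the transfer-learning setup the abstract describes, the sketch below fine-tunes a pre-trained transformer on a small labeled dataset. It is not taken from the article: the Hugging Face `transformers` and `datasets` libraries, the `bert-base-uncased` checkpoint, and the IMDB corpus are all assumptions chosen for the example.

```python
# Minimal transfer-learning sketch (illustrative only, not from the article):
# reuse a pre-trained transformer's weights and fine-tune on a small
# labeled set, the low-data scenario the abstract describes.
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)
from datasets import load_dataset

# Load a self-supervised pre-trained checkpoint and add a fresh
# 2-class classification head on top of it.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# A deliberately small slice of IMDB stands in for a scarce target-domain
# dataset; tokenize it into fixed-length inputs.
dataset = load_dataset("imdb", split="train[:1000]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True,
                         padding="max_length", max_length=128),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()  # adapts the pre-trained representations to the target task
```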
Main Authors: Evans Kotei, Ramkumar Thirunavukarasu
Format: Article
Language: English
Published: MDPI AG, 2023-03-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/14/3/187
Similar Items
- ESG2PreEM: Automated ESG grade assessment framework using pre-trained ensemble models
  by: Haein Lee, et al.
  Published: (2024-02-01)
- On solving textual ambiguities and semantic vagueness in MRC based question answering using generative pre-trained transformers
  by: Muzamil Ahmed, et al.
  Published: (2023-07-01)
- Research Progress on Vision–Language Multimodal Pretraining Model Technology
  by: Huansha Wang, et al.
  Published: (2022-10-01)
- Sequence-to-sequence pretraining for a less-resourced Slovenian language
  by: Matej Ulčar, et al.
  Published: (2023-03-01)
- Pre-trained transformer-based language models for Sundanese
  by: Wilson Wongso, et al.
  Published: (2022-04-01)