From Word Embeddings to Pre-Trained Language Models: A State-of-the-Art Walkthrough
With the recent advances in deep learning, different approaches to improving pre-trained language models (PLMs) have been proposed. PLMs have advanced state-of-the-art (SOTA) performance on various natural language processing (NLP) tasks such as machine translation, text classification, and question answering...
| Main Author: | Mourad Mars |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2022-09-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/12/17/8805 |
Similar Items
- Pre-Training MLM Using Bert for the Albanian Language
  by: Kryeziu Labehat, et al.
  Published: (2023-06-01)
- Comparison of pre-trained language models in terms of carbon emissions, time and accuracy in multi-label text classification using AutoML
  by: Pinar Savci, et al.
  Published: (2023-05-01)
- A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges
  by: Mohaimenul Azam Khan Raiaan, et al.
  Published: (2024-01-01)
- The Potential of Natural Language Technology in Transforming Educational Processes
  by: Adrian LUPASC
  Published: (2023-12-01)
- Survey: Transformer based video-language pre-training
  by: Ludan Ruan, et al.
  Published: (2022-01-01)