Lexicon-based fine-tuning of multilingual language models for low-resource language sentiment analysis

Abstract: Pre-trained multilingual language models (PMLMs) such as mBERT and XLM-R have shown good cross-lingual transferability. However, they are not specifically trained to capture cross-lingual signals concerning sentiment words. This poses a disadvantage for low-resource languages (LRLs) that ar...


Bibliographic Details
Main Authors: Vinura Dhananjaya, Surangika Ranathunga, Sanath Jayasena
Format: Article
Language: English
Published: Wiley 2024-10-01
Series: CAAI Transactions on Intelligence Technology
Online Access: https://doi.org/10.1049/cit2.12333