Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects

Transformer models have emerged as frontrunners in the field of natural language processing, primarily due to their adept use of self-attention mechanisms to grasp the semantic linkages between words in sequences. Despite their strengths, these models often face challenges in single-task learning scenarios...
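As a rough illustration of the "reverse positional encoding" named in the title, the sketch below applies standard sinusoidal position encodings with positions counted from the right end of the sequence, so the rightmost token of a right-to-left (Arabic) sentence receives position 0. This is an assumption-based reading of the term, not the authors' exact formulation; the tokenization and dimensions are hypothetical.

```python
# Minimal sketch: sinusoidal positional encodings assigned right-to-left.
# Assumption-based illustration, not the paper's exact method.
import numpy as np

def sinusoidal_table(max_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal position table of shape (max_len, d_model)."""
    pos = np.arange(max_len)[:, None]        # (max_len, 1)
    i = np.arange(d_model)[None, :]          # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    table = np.zeros((max_len, d_model))
    table[:, 0::2] = np.sin(angle[:, 0::2])  # even dimensions use sine
    table[:, 1::2] = np.cos(angle[:, 1::2])  # odd dimensions use cosine
    return table

def reverse_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Token at index t gets position seq_len - 1 - t (counted from the right)."""
    table = sinusoidal_table(seq_len, d_model)
    reversed_positions = np.arange(seq_len)[::-1]  # [seq_len-1, ..., 1, 0]
    return table[reversed_positions]               # (seq_len, d_model)

# Usage: add to token embeddings of an Arabic (right-to-left) sentence.
d_model = 512
tokens = ["كيف", "حالك", "اليوم"]                       # hypothetical tokenization
embeddings = np.random.randn(len(tokens), d_model)      # stand-in for learned embeddings
encoded = embeddings + reverse_positional_encoding(len(tokens), d_model)
```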


Bibliographic Details
Main Authors: Laith H. Baniata, Sangwoo Kang
Format: Article
Language: English
Published: MDPI AG 2024-03-01
Series: Mathematics
Online Access: https://www.mdpi.com/2227-7390/12/6/865