Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects

Bibliographic Details
Main Authors: Laith H. Baniata, Sangwoo Kang
Format: Article
Language: English
Published: MDPI AG 2024-03-01
Series: Mathematics
Subjects: switching self-attention; reverse positional encoding (RPE) method; text classification (TC); right-to-left text; five-polarity; ITL
Online Access: https://www.mdpi.com/2227-7390/12/6/865
_version_ 1797240169771827200
author Laith H. Baniata
Sangwoo Kang
author_facet Laith H. Baniata
Sangwoo Kang
author_sort Laith H. Baniata
collection DOAJ
description Transformer models lead the field of natural language processing, largely because their self-attention mechanisms capture the semantic relationships between words in a sequence. They nonetheless struggle in single-task learning settings, where it is difficult to achieve strong performance and learn robust latent feature representations; the problem grows on smaller datasets and is especially acute for under-resourced languages such as Arabic. To address these challenges, this study introduces a new method for Arabic text classification built on the newly developed Reverse Positional Encoding (RPE) technique. The method adopts an inductive-transfer learning (ITL) framework combined with a switching self-attention shared encoder, which increases the model’s adaptability and improves the accuracy of its sentence representations. Integrating Mixture of Experts (MoE) and RPE further enables the model to process longer sequences effectively, benefiting both the five-point and the simpler ternary Arabic text classification tasks. Evaluations on three benchmark datasets show strong performance, with accuracy rates of 87.20% on the HARD dataset, 72.17% on the BRAD dataset, and 86.89% on the LABR dataset.
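The description names the Reverse Positional Encoding (RPE) technique for right-to-left text but does not give its formula. The sketch below is a rough illustration only: it assumes RPE means computing standard sinusoidal encodings over position indices counted from the end of the sequence (the rightmost token gets index 0), matching the right-to-left reading order emphasized in the title. The function name and NumPy implementation are illustrative assumptions, not the authors' published code.

import numpy as np

def reverse_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal encoding with positions numbered right-to-left (assumed form of RPE)."""
    # Reversed positions: the last (rightmost) token gets index 0,
    # the first (leftmost) token gets index seq_len - 1.
    positions = np.arange(seq_len - 1, -1, -1, dtype=np.float64)[:, None]  # (seq_len, 1)
    dims = np.arange(d_model, dtype=np.float64)[None, :]                   # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions use cosine
    return pe  # added to token embeddings before the shared self-attention encoder

# Example: encode a 6-token sentence with model dimension 8.
print(reverse_positional_encoding(6, 8).shape)  # (6, 8)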
first_indexed 2024-04-24T18:03:10Z
format Article
id doaj.art-e7bc1bc83f154c18b5f1b06af7d6727d
institution Directory Open Access Journal
issn 2227-7390
language English
last_indexed 2024-04-24T18:03:10Z
publishDate 2024-03-01
publisher MDPI AG
record_format Article
series Mathematics
spelling doaj.art-e7bc1bc83f154c18b5f1b06af7d6727d 2024-03-27T13:53:09Z
eng | MDPI AG | Mathematics | ISSN 2227-7390 | 2024-03-01 | vol. 12, no. 6, art. 865 | doi: 10.3390/math12060865
Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects
Laith H. Baniata, School of Computing, Gachon University, Seongnam 13120, Republic of Korea
Sangwoo Kang, School of Computing, Gachon University, Seongnam 13120, Republic of Korea
https://www.mdpi.com/2227-7390/12/6/865
switching self-attention | reverse positional encoding (RPE) method | text classification (TC) | right-to-left text | five-polarity | ITL
spellingShingle Laith H. Baniata
Sangwoo Kang
Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects
Mathematics
switching self-attention
reverse positional encoding (RPE) method
text classification (TC)
right-to-left text
five-polarity
ITL
title Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects
title_full Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects
title_fullStr Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects
title_full_unstemmed Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects
title_short Switching Self-Attention Text Classification Model with Innovative Reverse Positional Encoding for Right-to-Left Languages: A Focus on Arabic Dialects
title_sort switching self attention text classification model with innovative reverse positional encoding for right to left languages a focus on arabic dialects
topic switching self-attention
reverse positional encoding (RPE) method
text classification (TC)
right-to-left text
five-polarity
ITL
url https://www.mdpi.com/2227-7390/12/6/865
work_keys_str_mv AT laithhbaniata switchingselfattentiontextclassificationmodelwithinnovativereversepositionalencodingforrighttoleftlanguagesafocusonarabicdialects
AT sangwookang switchingselfattentiontextclassificationmodelwithinnovativereversepositionalencodingforrighttoleftlanguagesafocusonarabicdialects