A bilingual benchmark for evaluating large language models

This work introduces a new benchmark for the bilingual evaluation of large language models (LLMs) in English and Arabic. While LLMs have transformed various fields, their evaluation in Arabic remains limited. This work addresses this gap by proposing a novel evaluation method for LLMs in both Arabic and English, allowing for a direct comparison between the performance of the two languages. We build a new evaluation dataset based on the General Aptitude Test (GAT), a standardized test widely used for university admissions in the Arab world, that we utilize to measure the linguistic capabilities of LLMs. We conduct several experiments to examine the linguistic capabilities of ChatGPT and quantify how much better it is at English than Arabic. We also examine the effect of changing task descriptions from Arabic to English and vice versa. In addition, we find that fastText can surpass ChatGPT in finding Arabic word analogies. We conclude by showing that GPT-4's Arabic linguistic capabilities are much better than ChatGPT's Arabic capabilities and are close to ChatGPT's English capabilities.
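The evaluation described in the abstract, posing parallel English/Arabic multiple-choice questions to a model and comparing per-language accuracy, can be sketched as follows. This is a minimal illustration only: the question item and the `ask_model` stub are hypothetical, standing in for the paper's actual GAT-derived dataset and ChatGPT/GPT-4 API calls.

```python
def ask_model(question, choices, lang):
    # Stub standing in for an LLM API call (e.g., ChatGPT or GPT-4).
    # A real implementation would send the question and choices as a prompt
    # and parse the model's chosen option. Here it always picks choice 0.
    return 0

# Hypothetical bilingual item: parallel English/Arabic wording, shared gold answer.
items = [
    {
        "en": "Choose the word closest in meaning to 'rapid'.",
        "ar": "اختر الكلمة الأقرب في المعنى إلى كلمة «سريع».",
        "choices_en": ["quick", "slow", "heavy", "quiet"],
        "choices_ar": ["عاجل", "بطيء", "ثقيل", "هادئ"],
        "answer": 0,
    },
]

def accuracy(eval_items, lang):
    """Fraction of items answered correctly when asked in the given language."""
    correct = 0
    for item in eval_items:
        pred = ask_model(item[lang], item["choices_" + lang], lang)
        correct += int(pred == item["answer"])
    return correct / len(eval_items)

# Comparing the two scores on the same items quantifies the English/Arabic gap.
print("EN accuracy:", accuracy(items, "en"))
print("AR accuracy:", accuracy(items, "ar"))
```

Because the items are parallel translations of the same questions, any accuracy difference between the two calls isolates the language effect rather than a difference in question content.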

Bibliographic Details
Main Author: Mohamed Alkaoud (Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia)
Format: Article
Language: English
Published: PeerJ Inc., 2024-02-01
Series: PeerJ Computer Science
Subjects: Natural language processing; Large language models; Multilingual NLP; LLM evaluation; Arabic NLP; ChatGPT
ISSN: 2376-5992
DOI: 10.7717/peerj-cs.1893
Online Access: https://peerj.com/articles/cs-1893.pdf
work_keys_str_mv AT mohamedalkaoud abilingualbenchmarkforevaluatinglargelanguagemodels
AT mohamedalkaoud bilingualbenchmarkforevaluatinglargelanguagemodels