Evaluation of Large Language Model Performance and Reliability for Citations and References in Scholarly Writing: Cross-Disciplinary Study

Bibliographic Details
Main Authors: Joseph Mugaanyi (ORCID: 0000-0003-1688-5475), Liuying Cai (ORCID: 0009-0005-2648-1839), Sumei Cheng (ORCID: 0009-0000-3638-4171), Caide Lu (ORCID: 0000-0001-9588-2218), Jing Huang (ORCID: 0000-0003-3245-3605)
Format: Article
Language: English
Published: JMIR Publications, 2024-04-01
Series: Journal of Medical Internet Research, volume 26, article e52935
ISSN: 1438-8871
DOI: 10.2196/52935
Online Access: https://www.jmir.org/2024/1/e52935
Description

Background: Large language models (LLMs) have gained prominence since the release of ChatGPT in late 2022.

Objective: The aim of this study was to assess the accuracy of citations and references generated by ChatGPT (GPT-3.5) in two distinct academic domains: the natural sciences and the humanities.

Methods: Two researchers independently prompted ChatGPT to write an introduction section for a manuscript and include citations; they then evaluated the accuracy of the citations and their Digital Object Identifiers (DOIs). Results were compared between the two disciplines.

Results: Ten topics were covered: 5 in the natural sciences and 5 in the humanities. A total of 102 citations were generated, 55 in the natural sciences and 47 in the humanities. Of these, 40 natural sciences citations (72.7%) and 36 humanities citations (76.6%) were confirmed to exist (P=.42). DOI presence differed significantly between the natural sciences (39/55, 70.9%) and the humanities (18/47, 38.3%), as did DOI accuracy (18/55, 32.7% vs 4/47, 8.5%). DOI hallucination was more prevalent in the humanities (42/47, 89.4%). The Levenshtein distance between generated and correct DOIs was significantly higher in the humanities than in the natural sciences, reflecting the lower DOI accuracy.

Conclusions: ChatGPT's performance in generating citations and references varies across disciplines. Differences in DOI standards and disciplinary nuances contribute to these variations. Researchers should weigh the strengths and limitations of artificial intelligence writing tools with respect to citation accuracy. Domain-specific models may enhance accuracy.
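The two checks described in the Methods — whether a generated DOI exists at all, and how far an incorrect DOI deviates from the true identifier (the Levenshtein distance) — can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' code: the use of the public Crossref REST API as the existence check is an assumption, and the example DOI strings are hypothetical placeholders.

import urllib.request
import urllib.error

def doi_exists(doi: str) -> bool:
    # Look the DOI up in the Crossref registry; the public REST API
    # returns HTTP 200 for registered DOIs and 404 for unknown ones,
    # so a 404 flags a likely hallucinated identifier.
    url = "https://api.crossref.org/works/" + doi
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

def levenshtein(a: str, b: str) -> int:
    # Minimum number of single-character insertions, deletions, and
    # substitutions needed to turn string a into string b, computed
    # with the classic dynamic-programming recurrence (rolling row).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical example: a fabricated DOI one character off from a real one.
generated = "10.2196/52936"
correct = "10.2196/52935"
print(doi_exists(correct))               # True for a registered DOI
print(levenshtein(generated, correct))   # 1 (one substituted character)

A distance of 0 corresponds to an exact DOI match, so the higher mean distances reported for the humanities indicate generated DOIs that diverge further from the true identifiers.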