A Comparison of ChatGPT and Fine-Tuned Open Pre-Trained Transformers (OPT) Against Widely Used Sentiment Analysis Tools: Sentiment Analysis of COVID-19 Survey Data


Bibliographic Details
Main Authors: Juan Antonio Lossio-Ventura, Rachel Weger, Angela Y Lee, Emily P Guinee, Joyce Chung, Lauren Atlas, Eleni Linos, Francisco Pereira
Format: Article
Language:English
Published: JMIR Publications 2024-01-01
Series:JMIR Mental Health
Online Access:https://mental.jmir.org/2024/1/e50150
collection DOAJ
description Background: Health care providers and health-related researchers face significant challenges when applying sentiment analysis tools to health-related free-text survey data. Most state-of-the-art applications were developed in domains such as social media, and their performance in the health care context remains relatively unknown. Moreover, existing studies indicate that these tools often lack accuracy and produce inconsistent results. Objective: This study aims to address the lack of comparative analysis of sentiment analysis tools applied to health-related free-text survey data in the context of COVID-19. The objective was to automatically predict sentence-level sentiment for 2 independent COVID-19 survey data sets from the National Institutes of Health and Stanford University. Methods: Gold standard labels were created for a subset of each data set using a panel of human raters. We compared 8 state-of-the-art sentiment analysis tools on both data sets to evaluate variability and disagreement across tools. In addition, we explored few-shot learning by fine-tuning Open Pre-Trained Transformers (OPT; a large language model [LLM] with publicly available weights) on a small annotated subset, and zero-shot learning using ChatGPT (an LLM without available weights). Results: The comparison of sentiment analysis tools revealed high variability and disagreement across the evaluated tools when applied to health-related survey data. OPT and ChatGPT demonstrated superior performance, outperforming all other sentiment analysis tools. Moreover, ChatGPT outperformed OPT, exhibiting higher accuracy by 6% and a higher F-measure by 4% to 7%. Conclusions: This study demonstrates the effectiveness of LLMs, particularly few-shot and zero-shot learning approaches, for sentiment analysis of health-related survey data. These results have implications for saving human labor and improving efficiency in sentiment analysis tasks, contributing to advancements in the field of automated sentiment analysis.
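The zero-shot setup described in the Methods — asking an LLM to label each survey sentence without any task-specific training — can be sketched as follows. This is not the authors' code; the prompt wording, label set, and helper names (`build_prompt`, `parse_label`) are illustrative assumptions about how such a pipeline is commonly wired up.

```python
# Hedged sketch of zero-shot sentence-sentiment prediction with an LLM.
# The model call itself is omitted; shown here are the two pieces that
# surround it: prompt construction and mapping free-text output to labels.

SENTIMENTS = ("positive", "negative", "neutral")

def build_prompt(sentence: str) -> str:
    """Build a zero-shot prompt asking the model to label one sentence."""
    return (
        "Classify the sentiment of the following survey response as "
        "positive, negative, or neutral.\n"
        f"Response: {sentence}\n"
        "Sentiment:"
    )

def parse_label(completion: str) -> str:
    """Map the model's free-text completion onto one fixed label."""
    text = completion.strip().lower()
    for label in SENTIMENTS:
        if label in text:
            return label
    return "neutral"  # fall back when the model answers off-format
```

In the few-shot variant the same label space is used, but the small human-annotated subset serves as supervised fine-tuning data for an open-weights model such as OPT rather than as prompt text.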
id doaj.art-6623c9352c2b4ed4b01f6a660b595529
institution Directory Open Access Journal
issn 2368-7959
doi 10.2196/50150
author_orcids Juan Antonio Lossio-Ventura: https://orcid.org/0000-0003-0996-2356
Rachel Weger: https://orcid.org/0000-0003-0897-9658
Angela Y Lee: https://orcid.org/0000-0002-9527-5730
Emily P Guinee: https://orcid.org/0009-0002-5938-7003
Joyce Chung: https://orcid.org/0000-0001-8255-7440
Lauren Atlas: https://orcid.org/0000-0001-5693-4169
Eleni Linos: https://orcid.org/0000-0002-5856-6301
Francisco Pereira: https://orcid.org/0000-0003-2773-3426