Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study

Bibliographic Details
Main Authors: Arya Rao, Michael Pang, John Kim, Meghana Kamineni, Winston Lie, Anoop K Prasad, Adam Landman, Keith Dreyer, Marc D Succi
Format: Article
Language: English
Published: JMIR Publications, 2023-08-01
Series: Journal of Medical Internet Research
ISSN: 1438-8871
DOI: 10.2196/48659
Collection: DOAJ (Directory of Open Access Journals)
Online Access: https://www.jmir.org/2023/1/e48659

Description
Background: Large language model (LLM)–based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks, as opposed to single-ask tasks, for which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as artificial physicians, has not yet been evaluated.
Objective: This study aimed to evaluate ChatGPT’s capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.
Methods: We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared its accuracy on differential diagnosis, diagnostic testing, final diagnosis, and management across patient age, gender, and case acuity. Accuracy was measured as the proportion of correct responses to the questions posed within the clinical vignettes, as assessed by human scorers. We further conducted linear regression to identify the factors contributing to ChatGPT’s performance on clinical tasks.
Results: ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis, with an accuracy of 76.9% (95% CI 67.8%-86.1%), and the lowest performance in generating an initial differential diagnosis, with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=–15.8%; P<.001) and clinical management (β=–7.4%; P=.02) question types.
Conclusions: ChatGPT achieves impressive accuracy in clinical decision-making, and its performance strengthens as more clinical information becomes available to it. In particular, ChatGPT is more accurate at final diagnosis than at initial differential diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT’s training data set.
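
The Methods and Results above describe a simple statistical pipeline: human scorers mark each response correct or incorrect, accuracy is reported as a proportion with a 95% CI, and a linear regression relates correctness to question type. Below is a minimal sketch of that pipeline, assuming a normal-approximation (Wald) interval and a dummy-coded regression with general medical knowledge questions as the reference level; the paper does not publish its scoring data or specify either choice, so the column names and example rows here are hypothetical.

# Minimal sketch of the analysis described above: accuracy as the proportion
# of correct responses with a 95% CI, plus a linear regression of per-question
# correctness on question type. The CI method (Wald/normal approximation), the
# model specification, and all column names and example rows are assumptions;
# the paper does not publish its scoring data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-question scores: 1 = human scorers marked the response correct.
scores = pd.DataFrame({
    "correct": [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0],
    "question_type": [
        "differential", "differential", "testing", "testing",
        "final_diagnosis", "final_diagnosis", "management", "management",
        "knowledge", "knowledge", "knowledge", "differential",
    ],
})

def accuracy_ci(correct, z=1.96):
    """Proportion correct with a Wald (normal-approximation) 95% CI."""
    p = correct.mean()
    se = np.sqrt(p * (1 - p) / len(correct))
    return p, p - z * se, p + z * se

p, lo, hi = accuracy_ci(scores["correct"])
print(f"overall accuracy {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")

# Accuracy broken down by question type, mirroring the abstract's per-task numbers.
for qtype, grp in scores.groupby("question_type"):
    p, lo, hi = accuracy_ci(grp["correct"])
    print(f"{qtype}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")

# Linear regression of correctness on question type. Treating general medical
# knowledge questions as the reference level makes each coefficient the
# percentage-point difference from that baseline, consistent in sign with the
# reported β values (e.g., β=–15.8% for differential diagnosis).
model = smf.ols(
    "correct ~ C(question_type, Treatment(reference='knowledge'))",
    data=scores,
).fit()
print(model.summary())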