Harnessing the open access version of ChatGPT for enhanced clinical opinions.

With the advent of Large Language Models (LLMs) like ChatGPT, the integration of Generative Artificial Intelligence (GAI) into clinical medicine is becoming increasingly feasible. This study aimed to evaluate the ability of the freely available ChatGPT-3.5 to generate complex differential diagnoses, comparing its output to case records of the Massachusetts General Hospital published in the New England Journal of Medicine (NEJM). Forty case records were presented to ChatGPT-3.5, prompting it to provide a differential diagnosis and then narrow it down to the most likely diagnosis. The results indicated that the final diagnosis was included in ChatGPT-3.5's original differential list in 42.5% of the cases. After narrowing, ChatGPT correctly determined the final diagnosis in 27.5% of the cases, demonstrating a decrease in accuracy compared to previous studies using common chief complaints. These findings emphasize the necessity for further investigation into the capabilities and limitations of LLMs in clinical scenarios while highlighting the potential role of GAI as an augmented clinical opinion. Anticipating the growth and enhancement of GAI tools like ChatGPT, physicians and other healthcare workers will likely find increasing support in generating differential diagnoses. However, continued exploration and regulation are essential to ensure the safe and effective integration of GAI into healthcare practice. Future studies may seek to compare newer versions of ChatGPT or investigate patient outcomes with physicians integrating this GAI technology. Understanding and expanding GAI's capabilities, particularly in differential diagnosis, may foster innovation and provide additional resources, especially in underserved areas in the medical field.
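
To make the evaluation concrete: each case was submitted to ChatGPT-3.5 with a request for a differential diagnosis and then a single most likely diagnosis, and responses were scored against the published final diagnosis (42.5% and 27.5% of 40 cases correspond to 17 and 11 cases, respectively). The sketch below shows one way such a loop could be scored programmatically. The cases structure, prompt wording, and substring-based scoring are illustrative assumptions, not the authors' actual protocol, and the openai Python SDK (v1.x) with an OPENAI_API_KEY in the environment is assumed.

# A minimal sketch, assuming the openai Python SDK (v1.x); this is not the
# study's actual prompts or scoring method, which are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in for the 40 NEJM/MGH case records used in the study.
cases = [
    {"presentation": "Case narrative text goes here.",
     "final_diagnosis": "example final diagnosis"},
]

in_differential = 0     # final diagnosis appears anywhere in the differential
top_choice_correct = 0  # final diagnosis matches the single most likely pick

for case in cases:
    prompt = (
        "Provide a differential diagnosis for the following case, "
        "then state the single most likely diagnosis.\n\n"
        + case["presentation"]
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    text = reply.choices[0].message.content.lower()
    target = case["final_diagnosis"].lower()

    # Naive substring scoring for illustration only.
    if target in text:
        in_differential += 1
    last_line = text.strip().splitlines()[-1] if text.strip() else ""
    if target in last_line:
        top_choice_correct += 1

n = len(cases)
print(f"Final diagnosis in differential: {in_differential / n:.1%}")
print(f"Final diagnosis as top choice:   {top_choice_correct / n:.1%}")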


Bibliographic Details
Main Authors: Zachary M Tenner, Michael C Cottone, Martin R Chavez
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2024-02-01
Series: PLOS Digital Health
ISSN: 2767-3170
Online Access: https://doi.org/10.1371/journal.pdig.0000355