Probing artificial intelligence in neurosurgical training: ChatGPT takes a neurosurgical residents written exam

Introduction: Artificial intelligence tools are being introduced into almost every field of human life, including medical science and medical education, amid both scepticism and enthusiasm.

Research question: To assess how a generative language tool (Generative Pre-trained Transformer 3.5, ChatGPT) performs at both generating questions for and answering a neurosurgical residents' written exam; specifically, how ChatGPT generates questions, how it answers human-generated questions, how residents answer AI-generated questions, and how the AI answers its own self-generated questions.

Materials and methods: The written exam included 50 questions: 46 generated by humans (senior staff members) and 4 generated by ChatGPT. Eleven participants took the exam (ChatGPT and 10 residents). Questions were both open-ended and multiple-choice. Eight questions were not submitted to ChatGPT because they contained images or schematic drawings to interpret.

Results: Formulating requests to ChatGPT required an iterative process to refine both the questions and the answers. ChatGPT scored among the lowest of all participants, ranking 9th of 11. There was no difference in the residents' response rate between human-generated and AI-generated questions that could be attributed to lower clarity of the questions. ChatGPT answered all of its self-generated questions correctly.

Discussion and conclusions: AI is a promising and powerful tool for medical education and for specific medical purposes, which remain to be further determined. To have the AI generate logical and sound questions, the request must be formulated as precisely as possible, framing the content, the type of question, and its correct answer.
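
The conclusion above, that a request must frame the content, the question type, and the expected correct answer, can be illustrated with a minimal sketch. The paper does not publish its prompts or any code; the snippet below assumes the OpenAI Python SDK and the gpt-3.5-turbo model as stand-ins for the ChatGPT 3.5 interface used in the study, and the topic placeholder and prompt wording are purely illustrative.

    # Minimal sketch (not from the paper): asking a GPT-3.5 model to generate one
    # exam question while explicitly framing content, question type, and answer.
    # Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Placeholder topic chosen by the examiner; not taken from the study.
    topic = "<examiner-chosen neurosurgical topic>"

    prompt = (
        f"Write one multiple-choice question for a neurosurgery residents' written exam.\n"
        f"Content: the question must cover {topic}.\n"
        "Type: exactly four answer options labelled A-D, with only one correct option.\n"
        "Answer: state which option is correct and give a one-sentence justification."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write clear, unambiguous board-style exam questions."},
            {"role": "user", "content": prompt},
        ],
    )

    print(response.choices[0].message.content)

As the abstract notes, in practice such a request typically needs several iterations before the question and its answer key are precise enough to use.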


Bibliographic Details
Main Authors: A. Bartoli, A.T. May, A. Al-Awadhi, K. Schaller
Format: Article
Language: English
Published: Elsevier, 2024-01-01
Series: Brain and Spine
ISSN: 2772-5294
Subjects: Artificial intelligence; ChatGPT; Neurosurgical education; Written exam; Residents
Online Access:http://www.sciencedirect.com/science/article/pii/S2772529423010032
Author Affiliations: A. Bartoli (corresponding author), A.T. May, A. Al-Awadhi, K. Schaller: Department of Clinical Neurosciences, Division of Neurosurgery, Geneva University Medical Center, Geneva, Switzerland; Faculty of Medicine, University of Geneva, Switzerland.