Probing artificial intelligence in neurosurgical training: ChatGPT takes a neurosurgical residents' written exam


Bibliographic Details
Main Authors: A. Bartoli, A.T. May, A. Al-Awadhi, K. Schaller
Format: Article
Language: English
Published: Elsevier 2024-01-01
Series: Brain and Spine
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2772529423010032
Description
Summary: Introduction: Artificial Intelligence tools are being introduced in almost every field of human life, including medical sciences and medical education, amid scepticism and enthusiasm. Research question: To assess how a generative language tool (Generative Pretrained Transformer 3.5, ChatGPT) performs at both generating questions for and answering a neurosurgical residents' written exam. Specifically, to assess how ChatGPT generates questions, how it answers human-generated questions, how residents answer AI-generated questions, and how the AI answers its own self-generated questions. Materials and methods: The written exam comprised 50 questions, of which 46 were generated by humans (senior staff members) and 4 by ChatGPT. 11 participants took the exam (ChatGPT and 10 residents). Questions were both open-ended and multiple-choice. 8 questions were not submitted to ChatGPT because they contained images or schematic drawings to interpret. Results: Formulating requests to ChatGPT required an iterative process to make both the questions and the answers precise. ChatGPT scored among the lowest ranks of all participants (9/11). There was no difference in residents' response rate between human-generated and AI-generated questions that could have been attributed to lesser clarity of the questions. ChatGPT answered all of its self-generated questions correctly. Discussion and conclusions: AI is a promising and powerful tool for medical education and for specific medical purposes, which need to be further determined. To have AI generate logical and sound questions, the request must be formulated as precisely as possible, framing the content, the type of question, and its correct answers.
ISSN:2772-5294