How do physics students evaluate artificial intelligence responses on comprehension questions? A study on the perceived scientific accuracy and linguistic quality of ChatGPT


Bibliographic Details
Main Authors: Merten Nikolay Dahlkemper, Simon Zacharias Lahme, Pascal Klein
Format: Article
Language: English
Published: American Physical Society, 2023-06-01
Series: Physical Review Physics Education Research
ISSN: 2469-9896
Online Access: http://doi.org/10.1103/PhysRevPhysEducRes.19.010142
Description

This study aimed to evaluate how students perceive the linguistic quality and scientific accuracy of ChatGPT responses to physics comprehension questions. A total of 102 first- and second-year physics students were confronted with three questions of increasing difficulty from introductory mechanics (rolling motion, waves, and fluid dynamics). Each question was presented with four different responses. All responses were attributed to ChatGPT, but in reality, one sample solution was written by the researchers. All ChatGPT responses obtained in this study were wrong, imprecise, incomplete, or misleading. We found little difference in the perceived linguistic quality between the ChatGPT responses and the sample solution. However, the students rated the overall scientific accuracy of the responses significantly differently, with the sample solution rated best for the questions of low and medium difficulty. The discrepancy between the sample solution and the ChatGPT responses increased with the students’ self-assessed knowledge of the question content. For the question of highest difficulty (fluid dynamics), which was unfamiliar to most students, a ChatGPT response was rated just as highly as the sample solution. Thus, this study provides data on students’ perception of ChatGPT responses and the factors influencing that perception. The results highlight the need for careful evaluation of ChatGPT responses by both instructors and students, particularly regarding scientific accuracy. Therefore, future research could explore the potential of similar “spot the bot” activities in physics education to foster students’ critical thinking skills.