Evaluating AI Courses: A Valid and Reliable Instrument for Assessing Artificial-Intelligence Learning through Comparative Self-Assessment
A growing number of courses seek to increase the basic artificial-intelligence skills (“AI literacy”) of their participants. At this time, there is no valid and reliable measurement tool that can be used to assess AI-learning gains. However, the existence of such a tool would be important to enable quality assurance and comparability. In this study, a validated AI-literacy-assessment instrument, the “scale for the assessment of non-experts’ AI literacy” (SNAIL), was adapted and used to evaluate an undergraduate AI course. We investigated whether the scale can be used to reliably evaluate AI courses and whether mediator variables, such as attitudes toward AI or participation in other AI courses, had an influence on learning gains. In addition to traditional mean comparisons (i.e., *t*-tests), the comparative self-assessment (CSA) gain was calculated, which allowed for a more meaningful assessment of the increase in AI literacy. We found preliminary evidence that the adapted SNAIL questionnaire enables a valid evaluation of AI-learning gains. In particular, distinctions among different subconstructs and the differentiation from related constructs, such as attitudes toward AI, seem to be possible with the help of the SNAIL questionnaire.
Main Authors: | Matthias Carl Laupichler, Alexandra Aster, Jan-Ole Perschewski, Johannes Schleiss |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-09-01 |
Series: | Education Sciences |
Subjects: | AI literacy; AI-literacy scale; artificial intelligence education; assessment; course evaluation; comparative self-assessment |
Online Access: | https://www.mdpi.com/2227-7102/13/10/978 |
ISSN: | 2227-7102 |
DOI: | 10.3390/educsci13100978 |
Volume/Issue: | Vol. 13, Issue 10, Article 978 |
Affiliations: | Matthias Carl Laupichler and Alexandra Aster: Institute of Medical Education, University Hospital Bonn, 53127 Bonn, Germany; Jan-Ole Perschewski and Johannes Schleiss: Artificial Intelligence Lab, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany |
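The comparative self-assessment (CSA) gain mentioned in the abstract can be illustrated with a short sketch. This is not the authors' code: it assumes the CSA-gain definition commonly used in the medical-education evaluation literature (Raupach et al.), in which items are rated on a Likert scale anchored at 1 as the optimal value, so that gain (%) = 100 × (mean_pre − mean_post) / (mean_pre − 1). The scale orientation and all ratings below are invented for illustration and may differ from the SNAIL study's setup.

```python
# Minimal sketch of a comparative self-assessment (CSA) gain calculation.
# Assumption: CSA gain as commonly defined in the evaluation literature
# (Raupach et al.), on a Likert scale where 1 is the optimal rating:
#   gain% = 100 * (mean_pre - mean_post) / (mean_pre - 1)
# The ratings below are invented illustration data, not study results.

from statistics import mean

def csa_gain(pre_ratings, post_ratings):
    """Return the CSA gain in percent for one questionnaire item.

    pre_ratings  -- retrospective pre-course self-ratings ("then-test")
    post_ratings -- post-course self-ratings on the same scale (1 = best)
    """
    pre, post = mean(pre_ratings), mean(post_ratings)
    if pre == 1:  # already at the optimum; no room for measurable gain
        return 0.0
    return 100 * (pre - post) / (pre - 1)

# Hypothetical ratings for one item on a 7-point scale (1 = best):
pre = [5, 6, 4, 5, 6]
post = [3, 4, 2, 3, 3]
print(f"CSA gain: {csa_gain(pre, post):.1f}%")  # -> CSA gain: 52.4%
```

Unlike a raw post-minus-pre difference, this normalization expresses the realized gain relative to the maximum gain each group could still achieve, which is why the abstract describes it as a more meaningful measure of the increase in AI literacy than a plain mean comparison.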