Assessing student errors in experimentation using artificial intelligence and large language models: A comparative study with human raters
Identifying logical errors in complex, incomplete, or even contradictory and heterogeneous data, such as students’ experimentation protocols, is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically id...
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2023-01-01 |
| Series: | Computers and Education: Artificial Intelligence |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2666920X23000565 |