Evaluating the Validity and Applicability of Automated Essay Scoring in Two Massive Open Online Courses
The use of massive open online courses (MOOCs) to expand students’ access to higher education has raised questions regarding the extent to which this course model can provide and assess authentic, higher level student learning. In response to this need, MOOC platforms have begun utilizing automated...
Main Authors: | Erin Dawna Reilly, Rose Eleanore Stafford, Kyle Marie Williams, Stephanie Brooks Corliss |
---|---|
Format: | Article |
Language: | English |
Published: | Athabasca University Press, 2014-11-01 |
Series: | International Review of Research in Open and Distributed Learning |
Subjects: | Massive open online courses; assessment; automated essay scoring systems |
Online Access: | http://www.irrodl.org/index.php/irrodl/article/view/1857/3067 |
_version_ | 1819034888278376448 |
---|---|
author | Erin Dawna Reilly; Rose Eleanore Stafford; Kyle Marie Williams; Stephanie Brooks Corliss |
author_sort | Erin Dawna Reilly |
collection | DOAJ |
description | The use of massive open online courses (MOOCs) to expand students’ access to higher education has raised questions regarding the extent to which this course model can provide and assess authentic, higher level student learning. In response to this need, MOOC platforms have begun utilizing automated essay scoring (AES) systems that allow students to engage in critical writing and free-response activities. However, there is a lack of research investigating the validity of such systems in MOOCs. This research examined the effectiveness of an AES tool to score writing assignments in two MOOCs. Results indicated that some significant differences existed between Instructor grading, AES-Holistic scores, and AES-Rubric Total scores within two MOOC courses. However, the AES system may still be useful depending on instructors’ assessment needs and intent. Findings from this research have implications for instructional technology administrators, educational designers, and instructors implementing AES learning activities in MOOC courses. |
first_indexed | 2024-12-21T07:40:53Z |
format | Article |
id | doaj.art-c5a84dedffaa4bb0866fbc204be4333c |
institution | Directory Open Access Journal |
issn | 1492-3831 |
language | English |
last_indexed | 2024-12-21T07:40:53Z |
publishDate | 2014-11-01 |
publisher | Athabasca University Press |
record_format | Article |
series | International Review of Research in Open and Distributed Learning |
spelling | doaj.art-c5a84dedffaa4bb0866fbc204be4333c | 2022-12-21T19:11:19Z | eng | Athabasca University Press | International Review of Research in Open and Distributed Learning | 1492-3831 | 2014-11-01 | 15(5) | Evaluating the Validity and Applicability of Automated Essay Scoring in Two Massive Open Online Courses | Erin Dawna Reilly; Rose Eleanore Stafford; Kyle Marie Williams; Stephanie Brooks Corliss (all University of Texas at Austin, United States) | http://www.irrodl.org/index.php/irrodl/article/view/1857/3067 | Massive open online courses; assessment; automated essay scoring systems |
title | Evaluating the Validity and Applicability of Automated Essay Scoring in Two Massive Open Online Courses |
topic | Massive open online courses; assessment; automated essay scoring systems |
url | http://www.irrodl.org/index.php/irrodl/article/view/1857/3067 |