Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects
| Main Authors: | Heng Luo; Anthony C. Robinson; Jae-Young Park |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Online Learning Consortium, 2014-06-01 |
| Series: | Online Learning |
| Subjects: | peer grading; MOOC; reliability; validity |
| Online Access: | https://olj.onlinelearningconsortium.org/index.php/olj/article/view/429 |
_version_ | 1797327809328185344 |
author | Heng Luo; Anthony C. Robinson; Jae-Young Park |
author_facet | Heng Luo; Anthony C. Robinson; Jae-Young Park |
author_sort | Heng Luo |
collection | DOAJ |
description | Peer grading affords a scalable and sustainable way of providing assessment and feedback to a massive student population, and has been used in massive open online courses (MOOCs) on the Coursera platform. However, there is currently little empirical evidence to support the credentials of peer grading as a learning assessment method in the MOOC context. To address this research need, this study examined 1825 peer grading assignments collected from a Coursera MOOC to investigate the reliability and validity of peer grading as well as its perceived effects on students’ MOOC learning experience. The empirical findings showed that the aggregate ratings of student graders provided peer grading scores that were fairly consistent and highly similar to the instructor’s grading scores. Student responses to a survey also showed that the peer grading activity was well received: the majority of MOOC students believed it was fair, useful, and beneficial, and would recommend including it in future MOOC offerings. Based on the empirical results, this study concludes with a set of principles for designing and implementing peer grading activities in the MOOC context. |
first_indexed | 2024-03-08T06:42:43Z |
format | Article |
id | doaj.art-e6ce444512a84d7e9773b7db8fc90644 |
institution | Directory Open Access Journal |
issn | 2472-5749 2472-5730 |
language | English |
last_indexed | 2024-03-08T06:42:43Z |
publishDate | 2014-06-01 |
publisher | Online Learning Consortium |
record_format | Article |
series | Online Learning |
spelling | doaj.art-e6ce444512a84d7e9773b7db8fc90644 (2024-02-03T08:25:33Z); eng; Online Learning Consortium; Online Learning; ISSN 2472-5749, 2472-5730; 2014-06-01; vol. 18, no. 2; doi:10.24059/olj.v18i2.429; Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects; Heng Luo (John A. Dutton E-Education Institute, The Pennsylvania State University); Anthony C. Robinson (Department of Geography, The Pennsylvania State University); Jae-Young Park (John A. Dutton E-Education Institute, The Pennsylvania State University); [abstract as in the description field above]; https://olj.onlinelearningconsortium.org/index.php/olj/article/view/429; peer grading; MOOC; reliability; validity |
spellingShingle | Heng Luo; Anthony C. Robinson; Jae-Young Park; Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects; Online Learning; peer grading; MOOC; reliability; validity |
title | Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects |
title_full | Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects |
title_fullStr | Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects |
title_full_unstemmed | Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects |
title_short | Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects |
title_sort | peer grading in a mooc reliability validity and perceived effects |
topic | peer grading; MOOC; reliability; validity |
url | https://olj.onlinelearningconsortium.org/index.php/olj/article/view/429 |
work_keys_str_mv | AT hengluo peergradinginamoocreliabilityvalidityandperceivedeffects AT anthonycrobinson peergradinginamoocreliabilityvalidityandperceivedeffects AT jaeyoungpark peergradinginamoocreliabilityvalidityandperceivedeffects |