Severity Differences across Proficiency Levels among Peer-assessors

Over the past few years, peer-assessment, as an alternative assessment procedure, has drawn the attention of many researchers. This study examined which language components peer-assessors attend to when rating their peers' essays and whether peer-assessors' proficiency levels make a difference in the severity or leniency they exercise. Fifty-eight student raters at Imam Khomeini International University in Qazvin rated five essays using an analytic rating scale. Data were collected with a paper-based Test of English as a Foreign Language (TOEFL) and five-paragraph essays and were analyzed with FACETS (version 3.68.1). The Facets analysis indicated that advanced peer-assessors showed more variability in severity than intermediate peer-assessors and that the majority of peer-assessors were, on average, more severe than lenient. It also revealed no statistically significant difference between the ratings of intermediate and advanced peer-assessors. Finally, task achievement was the most attended assessment criterion, whereas grammatical range and accuracy was the least attended, suggesting that peer-assessors do not attach equal weight to all assessment criteria. These findings may carry implications for the summative assessment of students' abilities.

Bibliographic Details
Main Authors: Shahla Rasouli (Department of English Language, Payame Nour University), Rajab Esfandiari (Imam Khomeini International University)
Format: Article
Language: English
Published: Imam Khomeini International University, Qazvin, 2022-03-01
Series: Journal of Modern Research in English Language Studies, Vol. 9, No. 2, pp. 173-196
ISSN: 2676-5357
DOI: 10.30479/jmrels.2022.16763.2014
Subjects: criterion; peer-assessment; proficiency level; severity
Online Access: https://jmrels.journals.ikiu.ac.ir/article_2635_5d447c3bc82c363a5385bf84def68db7.pdf
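
Note on the analysis: the severity and leniency estimates described in the abstract come from FACETS, which implements many-facet Rasch measurement. As a reading aid only, a sketch of the standard formulation of that model (following Linacre's conventional notation; the symbols below are not taken from the article itself):

\log \frac{P_{njik}}{P_{nji(k-1)}} = B_n - C_j - D_i - F_k

where P_{njik} is the probability that peer-assessor j awards essay writer n a score in category k on criterion i, B_n is the writer's proficiency, C_j the assessor's severity, D_i the difficulty of the assessment criterion, and F_k the threshold of category k relative to category k-1. Under this model, a larger C_j uniformly lowers the log-odds of awarding the higher of two adjacent score categories, which is the sense in which the abstract compares severity across intermediate and advanced peer-assessors.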