Examining consistency among different rubrics for assessing writing
Abstract The literature on using scoring rubrics in writing assessment underscores their significance as practical and useful means of assessing the quality of writing tasks. This study investigates the agreement among the rubrics endorsed and used for assessing the essay writing tasks of the...
Main Authors: | Enayat A. Shabani, Jaleh Panahi |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2020-09-01 |
Series: | Language Testing in Asia |
Subjects: | Scoring rubrics; Essay writing; Tests of English language proficiency; Writing assessment |
Online Access: | http://link.springer.com/article/10.1186/s40468-020-00111-4 |
author | Enayat A. Shabani; Jaleh Panahi |
collection | DOAJ |
description | Abstract The literature on using scoring rubrics in writing assessment underscores their significance as practical and useful means of assessing the quality of writing tasks. This study investigates the agreement among the rubrics endorsed and used for assessing the essay writing tasks of internationally recognized tests of English language proficiency. To carry out this study, two hundred essays (Task 2) from the academic IELTS test were randomly selected from about 800 essays held by an official IELTS center, a representative of IDP Australia, administered between 2015 and 2016. The test takers were 19 to 42 years of age; 120 were female and 80 were male. Three raters were provided with four sets of rubrics used for scoring the essay writing tasks of tests developed by Educational Testing Service (ETS) and Cambridge English Language Assessment (i.e., Independent TOEFL iBT, GRE, CPE, and CAE) and asked to score the essays, which had previously been scored officially by a certified IELTS examiner. Data analysis through correlation and factor analysis showed general agreement among raters and scores; however, some deviant scorings by two of the raters were detected. Follow-up interviews and a questionnaire survey revealed that the source of the score deviations could be related to the raters' interests and (un)familiarity with certain exams and their corresponding rubrics. Specifically, the results indicated that despite the significance that can be attached to rubrics in writing assessment, raters themselves can outweigh rubrics in their impact on scores. |
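The abstract reports agreement established through correlation and factor analysis of the raters' scores. As a minimal illustrative sketch only (the data, column names, and libraries below are assumptions for illustration, not the authors' actual materials or procedure), pairwise inter-rater correlations and a one-factor model could be computed along these lines in Python:

```python
# Illustrative sketch (not the authors' code): pairwise inter-rater correlations
# and a one-factor model over hypothetical essay scores.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical data: 200 essays scored by an official IELTS examiner and
# three raters using different rubrics (column names are made up).
true_quality = rng.normal(6.0, 1.0, size=200)
scores = pd.DataFrame({
    rater: np.clip(true_quality + rng.normal(0, 0.5, size=200), 1, 9)
    for rater in ["ielts_official", "rater1_toefl", "rater2_gre", "rater3_cpe"]
})

# Pairwise Pearson correlations: high values indicate agreement among raters.
print(scores.corr(method="pearson").round(2))

# One-factor model: if a single latent 'writing quality' factor explains the
# scores, all loadings should be large and of the same sign.
fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(scores)
print(pd.Series(fa.components_[0], index=scores.columns).round(2))
```

High pairwise correlations and uniformly large loadings on a single factor would correspond to the "general agreement among raters and scores" the study reports.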
format | Article |
id | doaj.art-cec1200186ed4b22a216fffa97ffb093 |
institution | Directory Open Access Journal |
issn | 2229-0443 |
language | English |
publishDate | 2020-09-01 |
publisher | SpringerOpen |
record_format | Article |
series | Language Testing in Asia |
affiliation | Department of Foreign Languages, TUMS International College, Tehran University of Medical Sciences (TUMS) (both authors) |
doi | 10.1186/s40468-020-00111-4 |
title | Examining consistency among different rubrics for assessing writing |
topic | Scoring rubrics; Essay writing; Tests of English language proficiency; Writing assessment |
url | http://link.springer.com/article/10.1186/s40468-020-00111-4 |