STARC: Structured Annotations for Reading Comprehension

We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions. Our framework introduces a principled structure for the answer choices and ties them to textual span annotations. The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English. We use this dataset to demonstrate that STARC can be leveraged for a key new application for the development of SAT-like reading comprehension materials: automatic annotation quality probing via span ablation experiments. We further show that it enables in-depth analyses and comparisons between machine and human reading comprehension behavior, including error distributions and guessing ability. Our experiments also reveal that the standard multiple choice dataset in NLP, RACE (Lai et al., 2017), is limited in its ability to measure reading comprehension: 47% of its questions can be guessed by machines without accessing the passage, and 18% are unanimously judged by humans as not having a unique correct answer. OneStopQA provides an alternative test set for reading comprehension which alleviates these shortcomings and has a substantially higher human ceiling performance.
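
The guessability finding above (47% of RACE questions answerable without the passage) rests on a simple probe: give a multiple-choice QA model the question and answer choices while withholding the passage, then measure accuracy. Below is a minimal sketch of such a probe, assuming a Hugging Face multiple-choice model; the checkpoint name, the empty-passage encoding, and the scoring are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of a passage-ablation guessing probe: score each
# answer choice against the question alone, with the passage withheld.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

# Placeholder checkpoint (an assumption); the paper's setting would use a
# model fine-tuned for multiple-choice reading comprehension (e.g. on RACE).
MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMultipleChoice.from_pretrained(MODEL_NAME)
model.eval()

def guess_without_passage(question: str, choices: list[str]) -> int:
    """Return the index of the model's preferred choice, passage ablated."""
    # Pair the question with each choice; no passage text is provided.
    enc = tokenizer(
        [question] * len(choices),
        choices,
        return_tensors="pt",
        padding=True,
        truncation=True,
    )
    # The multiple-choice head expects shape (batch, n_choices, seq_len).
    batch = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        logits = model(**batch).logits  # shape: (1, n_choices)
    return logits.argmax(dim=-1).item()

# Dataset-level guessability is the fraction of questions answered
# correctly this way; the abstract reports 47% for RACE.
```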

Bibliographic Details
Main Authors: Berzak, Yevgeni; Malmaud, Jonathan; Levy, Roger
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Format: Article (Conference Paper)
Language: English
Published: Association for Computational Linguistics (ACL), 2020
Published in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
DOI: 10.18653/V1/2020.ACL-MAIN.507
License: Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/)
Online Access: https://hdl.handle.net/1721.1/138279