Acceptability judgement tasks and grammatical theory

This thesis considers various questions about acceptability judgement tasks (AJTs).

In Chapter 1, we compare the prevalent informal method of syntactic enquiry, researcher introspection, to formal judgement tasks. We randomly sample 200 sentences from Linguistic Inquiry and then compare the original author judgements to online AJT ratings. Sprouse et al. (2013) provided a similar comparison, but they limited their analysis to sentence pairs and to extreme cases. We think a comparison at large, i.e. one involving all items, is more sensible. We find only a moderate match between informal author judgements and formal online ratings, and we argue that the formal judgements are more reliable than the informal ones. Further, the fact that many syntactic theories rely on questionable informal data calls the adequacy of those theories into question.
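
As an illustration of the kind of agreement measure involved, the sketch below correlates informal author judgements, coded numerically, with mean online ratings per item. This is a minimal sketch in Python; the coding scheme, variable names, and toy data are hypothetical, not taken from the thesis.

    # Hypothetical data: 1 = sentence marked acceptable by the original author,
    # 0 = sentence starred; mean_ratings are per-item means from an online AJT.
    from scipy import stats

    author_codes = [1, 0, 1, 1, 0]
    mean_ratings = [5.8, 3.1, 6.2, 4.0, 2.5]

    # Spearman correlation as one simple measure of informal/formal agreement.
    rho, p = stats.spearmanr(author_codes, mean_ratings)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
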
In Chapter 2, we test whether ratings for constructions from spoken language and constructions from written language differ when the items are presented as speech vs. as text, and when they are presented informally vs. formally. We analyse the results with a linear mixed-effects (LME) model and find that neither mode of presentation nor formality is a significant factor. Our results suggest that a speaker's grammatical intuition is fairly robust.
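
An LME analysis of such a design could look roughly as follows. This is a minimal sketch using statsmodels; the file and column names (ratings.csv, rating, mode, formality, participant) are hypothetical, not the thesis's actual setup.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per judgement; column names are hypothetical.
    df = pd.read_csv("ratings.csv")

    # Fixed effects for presentation mode and formality,
    # with random intercepts per participant.
    model = smf.mixedlm("rating ~ mode + formality", df, groups=df["participant"])
    print(model.fit().summary())
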
In Chapter 3, we quantitatively compare regular (raw) AJT data to their Z-scores and to ranked data. For our analysis, we test resampled data for significant differences in statistical power. We find that Z-scores and ranked data are more powerful than raw data across the most common measurement methods.
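
The two transformations, and a resampling-based power estimate of the kind described, can be sketched as follows. Sample sizes, the choice of significance test, and all names are illustrative assumptions, not the thesis's actual procedure.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def zscore_by_participant(ratings_per_participant):
        # Z-score each participant's ratings (a list of NumPy arrays,
        # one array per participant) against that participant's own
        # mean and standard deviation.
        return [(r - r.mean()) / r.std(ddof=1) for r in ratings_per_participant]

    def estimated_power(cond_a, cond_b, n=20, reps=2000, alpha=0.05):
        # Fraction of resampled comparisons that come out significant;
        # a Mann-Whitney U test stands in for whichever test is used.
        hits = 0
        for _ in range(reps):
            a = rng.choice(cond_a, size=n, replace=True)
            b = rng.choice(cond_b, size=n, replace=True)
            if stats.mannwhitneyu(a, b).pvalue < alpha:
                hits += 1
        return hits / reps
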
Chapter 4 examines issues surrounding a common similarity test, the TOST (two one-sided tests) procedure. It has long been unclear how to set its controlling parameter δ. Based on data simulations, we outline a way to set δ objectively. Further results suggest that our guidelines hold for any kind of data.
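
For reference, the TOST logic itself can be sketched as below: two one-sided t-tests against the equivalence bounds ±δ. This is a minimal sketch under textbook assumptions (unpooled standard error, a simple degrees-of-freedom approximation); it is not the thesis's implementation, and the δ value in the usage comment is arbitrary.

    import numpy as np
    from scipy import stats

    def tost(x, y, delta):
        # Two one-sided tests: is the mean difference within (-delta, +delta)?
        diff = np.mean(x) - np.mean(y)
        se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
        df = len(x) + len(y) - 2  # simple approximation to the true df
        p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
        p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
        return max(p_lower, p_upper)  # conclude equivalence if below alpha

    # Illustrative use, with an arbitrary delta of 0.5:
    # tost(ratings_a, ratings_b, delta=0.5) < 0.05  ->  treat as equivalent
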
The thesis concludes with an appendix on non-cooperative participants in AJTs.

Bibliographic Details
Main Author: Juzek, T
Other Authors: Dalrymple, M; Kochanski, G
Format: Thesis
Language: English
Published: 2015
Subjects: Linguistics; Computational linguistics; Grammar, Comparative and general; Research--Methodology
Institution: University of Oxford
Collection: OXFORD
Record ID: oxford-uuid:b276ec98-5f65-468b-b481-f3d9356d86a2