Using screeners to measure respondent attention on self-administered surveys: Which items and how many?
Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. “Screeners” have been proposed as a way to identify inattentive respondents, but questions remain regarding their implementation. First, what is the optimal number of Screeners for identifying inattentive respondents? Second, what types of Screener questions best capture inattention? In this paper, we address both of these questions. Using item-response theory to aggregate individual Screeners, we find that four Screeners are sufficient to identify inattentive respondents. Moreover, two grid and two multiple-choice questions work well. Our findings have relevance for applied survey research in political science and other disciplines. Most importantly, our recommendations enable the standardization of Screeners on future surveys.
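The abstract's aggregation step — combining several pass/fail Screener items into one attentiveness score with item-response theory — can be illustrated with a minimal sketch. This is a toy, not the authors' implementation: the item parameters, the two-parameter-logistic (2PL) model, and the grid-based EAP estimator are all assumptions made for illustration.

```python
# Toy sketch of IRT aggregation of Screener responses (not the paper's code).
# Each respondent answers four pass/fail Screeners; a 2PL model links the
# latent attentiveness theta to the probability of passing each item.
import math

def p_pass(theta, a, b):
    """2PL item response function: probability of passing a Screener with
    discrimination a and difficulty b, given latent attentiveness theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Four hypothetical Screener items (a, b): two "grid" and two
# "multiple-choice" items. These parameter values are made up.
ITEMS = [(1.5, -0.5), (1.5, 0.0), (2.0, -0.2), (2.0, 0.3)]

def eap_score(responses, lo=-4.0, hi=4.0, n_grid=81):
    """Expected a posteriori (EAP) estimate of theta from 0/1 responses,
    using a standard-normal prior evaluated on a fixed grid."""
    num = den = 0.0
    for i in range(n_grid):
        theta = lo + (hi - lo) * i / (n_grid - 1)
        prior = math.exp(-0.5 * theta * theta)
        like = 1.0
        for (a, b), y in zip(ITEMS, responses):
            p = p_pass(theta, a, b)
            like *= p if y == 1 else 1.0 - p
        w = prior * like
        num += theta * w
        den += w
    return num / den

# Passing more Screeners yields a higher (more attentive) latent score.
print(round(eap_score([1, 1, 1, 1]), 2), round(eap_score([0, 0, 0, 0]), 2))
```

The EAP grid estimator keeps the example dependency-free; in practice an IRT package would estimate the item parameters from data rather than fixing them by hand.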
Main Authors: | Berinsky, Adam; Margolis, Michele F.; Sances, Michael W.; Warshaw, Christopher |
---|---|
Other Authors: | Massachusetts Institute of Technology. Department of Political Science |
Format: | Article |
Language: | English |
Published: | Cambridge University Press (CUP), 2020 |
Online Access: | https://hdl.handle.net/1721.1/128277 |
---|---|
author | Berinsky, Adam Margolis, Michele F. Sances, Michael W. Warshaw, Christopher |
author2 | Massachusetts Institute of Technology. Department of Political Science |
collection | MIT |
description | Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. “Screeners” have been proposed as a way to identify inattentive respondents, but questions remain regarding their implementation. First, what is the optimal number of Screeners for identifying inattentive respondents? Second, what types of Screener questions best capture inattention? In this paper, we address both of these questions. Using item-response theory to aggregate individual Screeners, we find that four Screeners are sufficient to identify inattentive respondents. Moreover, two grid and two multiple-choice questions work well. Our findings have relevance for applied survey research in political science and other disciplines. Most importantly, our recommendations enable the standardization of Screeners on future surveys. |
format | Article |
id | mit-1721.1/128277 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2020 |
publisher | Cambridge University Press (CUP) |
record_format | dspace |
citation | Berinsky, Adam et al. "Using screeners to measure respondent attention on self-administered surveys: Which items and how many?" Political Science Research and Methods (November 2019). http://dx.doi.org/10.1017/psrm.2019.53. © 2019 European Political Science Association |
issn | 2049-8470; 2049-8489 |
rights | Creative Commons Attribution-Noncommercial-Share Alike, http://creativecommons.org/licenses/by-nc-sa/4.0/ |
file_format | application/pdf |
title | Using screeners to measure respondent attention on self-administered surveys: Which items and how many? |
url | https://hdl.handle.net/1721.1/128277 |