Calibration of cognitive tests to address the reliability paradox for decision-conflict tasks
Main Authors: | Talira Kucina, Lindsay Wells, Ian Lewis, Kristy de Salas, Amelia Kohl, Matthew A. Palmer, James D. Sauer, Dora Matzke, Eugene Aidman, Andrew Heathcote |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2023-04-01 |
Series: | Nature Communications |
Online Access: | https://doi.org/10.1038/s41467-023-37777-2 |
author | Talira Kucina Lindsay Wells Ian Lewis Kristy de Salas Amelia Kohl Matthew A. Palmer James D. Sauer Dora Matzke Eugene Aidman Andrew Heathcote |
author_sort | Talira Kucina |
collection | DOAJ |
description | Abstract Standard, well-established cognitive tasks that produce reliable effects in group comparisons also lead to unreliable measurement when assessing individual differences. This reliability paradox has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. We aim to address this paradox by implementing carefully calibrated versions of the standard tests with an additional manipulation to encourage processing of conflicting information, as well as combinations of standard tasks. Over five experiments, we show that a Flanker task and a combined Simon and Stroop task with the additional manipulation produced reliable estimates of individual differences in under 100 trials per task, which improves on the reliability seen in benchmark Flanker, Simon, and Stroop data. We make these tasks freely available and discuss both theoretical and applied implications regarding how the cognitive testing of individual differences is carried out. |
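The abstract hinges on the reliability of individual-difference estimates of conflict effects (mean incongruent minus congruent response time) obtained from limited numbers of trials. As a purely illustrative sketch, not the authors' code or analysis pipeline, the following Python snippet shows one common way such reliability can be quantified: a permutation-based split-half correlation with Spearman-Brown correction, applied here to simulated trial-level data. All parameter values, variable names, and the simulation itself are assumptions for illustration only.

```python
# Illustrative sketch (not from the paper): split-half reliability of a conflict effect.
import numpy as np

rng = np.random.default_rng(2023)

def conflict_effect(congruent_rt, incongruent_rt):
    """Conflict (congruency) effect: mean incongruent RT minus mean congruent RT."""
    return incongruent_rt.mean() - congruent_rt.mean()

def split_half_reliability(data, n_splits=200, rng=rng):
    """Permutation-based split-half reliability of the conflict effect.

    `data` is a list of (congruent_rt, incongruent_rt) array pairs, one per participant.
    Returns the mean Spearman-Brown-corrected split-half correlation across random splits.
    """
    corrs = []
    for _ in range(n_splits):
        half_a, half_b = [], []
        for congruent, incongruent in data:
            # Randomly split each condition's trials into two halves.
            c_a, c_b = np.array_split(congruent[rng.permutation(len(congruent))], 2)
            i_a, i_b = np.array_split(incongruent[rng.permutation(len(incongruent))], 2)
            half_a.append(conflict_effect(c_a, i_a))
            half_b.append(conflict_effect(c_b, i_b))
        r = np.corrcoef(half_a, half_b)[0, 1]
        corrs.append(2 * r / (1 + r))  # Spearman-Brown correction for halved test length
    return float(np.mean(corrs))

# Simulate 100 participants with ~100 trials each and individual differences in the
# true conflict effect (all numbers arbitrary, chosen only to make the sketch run).
n_participants, n_trials = 100, 100
true_effects = rng.normal(0.06, 0.03, n_participants)  # true effects in seconds
data = []
for effect in true_effects:
    half = n_trials // 2
    congruent = rng.lognormal(-0.7, 0.25, half)              # congruent-trial RTs
    incongruent = rng.lognormal(-0.7, 0.25, half) + effect   # incongruent-trial RTs
    data.append((congruent, incongruent))

print(f"Estimated split-half reliability: {split_half_reliability(data):.2f}")
```

In this framing, "reliable estimates of individual differences in under 100 trials" corresponds to such a corrected split-half correlation remaining high even when each participant contributes relatively few trials per condition.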
first_indexed | 2024-04-09T16:22:42Z |
format | Article |
id | doaj.art-5073abbd0c4f43469eb9815f1c823801 |
institution | Directory Open Access Journal |
issn | 2041-1723 |
language | English |
last_indexed | 2024-04-09T16:22:42Z |
publishDate | 2023-04-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Nature Communications |
spelling | Talira Kucina (School of Psychological Sciences, University of Tasmania); Lindsay Wells (Games and Creative Technologies Research Group, University of Tasmania); Ian Lewis (Games and Creative Technologies Research Group, University of Tasmania); Kristy de Salas (Games and Creative Technologies Research Group, University of Tasmania); Amelia Kohl (School of Psychological Sciences, University of Tasmania); Matthew A. Palmer (School of Psychological Sciences, University of Tasmania); James D. Sauer (School of Psychological Sciences, University of Tasmania); Dora Matzke (Department of Psychology, University of Amsterdam); Eugene Aidman (Defence Science Technology Group); Andrew Heathcote (Department of Psychology, University of Amsterdam). Nature Portfolio, Nature Communications, ISSN 2041-1723, 2023-04-01, https://doi.org/10.1038/s41467-023-37777-2 |
title | Calibration of cognitive tests to address the reliability paradox for decision-conflict tasks |
title_sort | calibration of cognitive tests to address the reliability paradox for decision conflict tasks |
url | https://doi.org/10.1038/s41467-023-37777-2 |