Recalibrating expectations about effect size: A multi-method survey of effect sizes in the ABCD study.


Bibliographic Details
Main Authors: Max M Owens, Alexandra Potter, Courtland S Hyatt, Matthew Albaugh, Wesley K Thompson, Terry Jernigan, Dekang Yuan, Sage Hahn, Nicholas Allgaier, Hugh Garavan
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2021-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0257535
Description: Effect sizes are commonly interpreted using heuristics established by Cohen (e.g., small: r = .1, medium: r = .3, large: r = .5), despite mounting evidence that these guidelines are miscalibrated to the effects typically found in psychological research. This study's aims were to 1) describe the distribution of effect sizes across multiple instruments, 2) consider factors qualifying the effect size distribution, and 3) identify examples as benchmarks for various effect sizes. For aim one, effect size distributions were illustrated from a large, diverse sample of 9/10-year-old children. This was done by conducting Pearson's correlations among 161 variables representing constructs from all questionnaires and tasks from the Adolescent Brain and Cognitive Development Study® baseline data. To achieve aim two, factors qualifying this distribution were tested by comparing the distributions of effect size among various modifications of the aim one analyses. These modified analytic strategies included comparisons of effect size distributions for different types of variables, for analyses using statistical thresholds, and for analyses using several covariate strategies. In aim one analyses, the median in-sample effect size was .03, and values at the first and third quartiles were .01 and .07. In aim two analyses, effects were smaller for associations across instruments, content domains, and reporters, as well as when covarying for sociodemographic factors. Effect sizes were larger when thresholding for statistical significance. In analyses intended to mimic conditions used in "real-world" analysis of ABCD data, the median in-sample effect size was .05, and values at the first and third quartiles were .03 and .09. To achieve aim three, examples for varying effect sizes are reported from the ABCD dataset as benchmarks for future work in the dataset. In summary, this report finds that empirically determined effect sizes from a notably large dataset are smaller than would be expected based on existing heuristics.
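The aim-one procedure described above — computing all pairwise Pearson correlations among the study variables and summarizing the resulting effect size distribution by its quartiles — can be sketched as follows. This is a minimal illustration on simulated stand-in data, not the authors' actual analysis code; the sample size, variable count, and data here are hypothetical placeholders (the paper used 161 ABCD baseline variables).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the baseline data: n participants x p measures.
# (The paper analyzed 161 variables from the ABCD baseline assessment.)
n, p = 500, 10
data = rng.normal(size=(n, p))

# All pairwise Pearson correlations among the variables.
corr = np.corrcoef(data, rowvar=False)

# Keep each variable pair once (upper triangle, diagonal excluded),
# using absolute r as the effect size magnitude.
iu = np.triu_indices(p, k=1)
effects = np.abs(corr[iu])

# Summarize the effect size distribution by its quartiles,
# analogous to the paper's reported Q1 / median / Q3 values.
q1, median, q3 = np.percentile(effects, [25, 50, 75])
print(f"Q1 = {q1:.3f}, median = {median:.3f}, Q3 = {q3:.3f}")
```

With truly independent simulated variables, the quartiles reflect only sampling noise; in real data like ABCD, shared method variance (same instrument, same reporter) shifts the distribution upward, which is the contrast the paper's aim-two analyses quantify.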
ISSN: 1932-6203
Citation: PLoS ONE 16(9): e0257535 (2021-01-01). https://doi.org/10.1371/journal.pone.0257535