Testing the Magnitude of Correlations Across Experimental Conditions
Main Author: | Simone Di Plinio |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2022-05-01 |
Series: | Frontiers in Psychology |
ISSN: | 1664-1078 |
Collection: | DOAJ |
Subjects: | correlation; bootstrap; effect size; p-value; mixed-effects; sample size |
Online Access: | https://www.frontiersin.org/articles/10.3389/fpsyg.2022.860213/full |
Description: | Correlation coefficients are often compared across multiple research fields, as they allow investigators to assess different degrees of correlation with independent variables. Even with adequate sample sizes, such differences may be small yet still scientifically relevant. To date, although much effort has gone into developing methods for estimating differences between correlation coefficients, methods that remain adequate across varying sample sizes and correlation strengths have yet to be tested. The present study evaluated four methods for detecting the difference between two correlations and tested the adequacy of each using simulations with multiple data structures. The methods tested were Cohen's q, Fisher's method, linear mixed-effects models (LMEM), and an ad hoc procedure that integrates bootstrapping with effect size estimation. Correlation strengths and sample sizes were varied across a wide range of simulations to test the power of each method to reject the null hypothesis that the two correlations are equal. Results showed that Fisher's method and the LMEM failed to reject the null hypothesis even in the presence of relevant differences between correlations, and that Cohen's method was not sensitive to the data structure. Bootstrapping followed by effect size estimation proved a fair, unbiased compromise for estimating quantitative differences between statistical associations, producing outputs that can easily be compared across studies. This unbiased method is easily implemented in MATLAB through the bootes function, which was made available online by the author at MathWorks. |
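
The abstract names two classical approaches, Fisher's method and Cohen's q, for comparing two independent correlations. As a point of reference, the sketch below shows the textbook versions of both; it is written in Python rather than the article's MATLAB, and is a generic illustration, not code from the article itself.

```python
# Minimal sketch: classical tests for comparing two independent correlations.
import numpy as np
from scipy import stats

def fisher_z_test(r1, n1, r2, n2):
    """Fisher's z test for the difference between two independent Pearson
    correlations; returns (z statistic, two-sided p-value)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher transform of each r
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))                  # two-sided p-value
    return z, p

def cohens_q(r1, r2):
    """Cohen's q: effect size for the difference between two correlations
    (conventionally ~0.1 small, ~0.3 medium, ~0.5 large)."""
    return np.arctanh(r1) - np.arctanh(r2)

# Example: r = .60 (n = 80) in one condition vs. r = .35 (n = 80) in another.
z, p = fisher_z_test(0.60, 80, 0.35, 80)
q = cohens_q(0.60, 0.35)
print(f"z = {z:.2f}, p = {p:.3f}, Cohen's q = {q:.2f}")
```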
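
The procedure the article recommends combines bootstrapping with effect size estimation via the author's MATLAB bootes function. The exact implementation of bootes is not described in this record, so the following is only a minimal sketch of the general idea, assuming pairwise resampling within each condition and a percentile confidence interval on the difference r1 − r2; all names and defaults here are illustrative.

```python
# Illustrative bootstrap for the difference between two independent
# correlations (NOT the article's bootes implementation).
import numpy as np

rng = np.random.default_rng(0)

def boot_corr_diff(x1, y1, x2, y2, n_boot=10_000, alpha=0.05):
    """Bootstrap the difference r1 - r2 between two independent Pearson
    correlations; returns the observed difference and a percentile CI.
    The difference is taken as reliable at level alpha if the CI excludes 0."""
    n1, n2 = len(x1), len(x2)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n1, n1)  # resample (x, y) pairs within condition 1
        j = rng.integers(0, n2, n2)  # resample (x, y) pairs within condition 2
        r1 = np.corrcoef(x1[i], y1[i])[0, 1]
        r2 = np.corrcoef(x2[j], y2[j])[0, 1]
        diffs[b] = r1 - r2
    obs = np.corrcoef(x1, y1)[0, 1] - np.corrcoef(x2, y2)[0, 1]
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return obs, (lo, hi)

# Simulated example: two conditions with different true correlations.
n = 100
x1 = rng.standard_normal(n); y1 = 0.6 * x1 + 0.8 * rng.standard_normal(n)
x2 = rng.standard_normal(n); y2 = 0.2 * x2 + 0.98 * rng.standard_normal(n)
obs, ci = boot_corr_diff(x1, y1, x2, y2)
print(f"r1 - r2 = {obs:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Because the bootstrap distribution of r1 − r2 is built directly from the data, this approach adapts to the sample size and correlation strength of each condition, which is the property the abstract highlights over the fixed-formula alternatives.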