Simulations to benchmark time-varying connectivity methods for fMRI.
Main Authors: | William Hedley Thompson, Craig Geoffrey Richter, Pontus Plavén-Sigray, Peter Fransson |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2018-05-01 |
Series: | PLoS Computational Biology |
Online Access: | https://doi.org/10.1371/journal.pcbi.1006196 |
author | William Hedley Thompson, Craig Geoffrey Richter, Pontus Plavén-Sigray, Peter Fransson |
---|---|
collection | DOAJ |
description | There is a current interest in quantifying time-varying connectivity (TVC) based on neuroimaging data such as fMRI. Many methods have been proposed, and are being applied, revealing new insight into the brain's dynamics. However, given that the ground truth for TVC in the brain is unknown, many concerns remain regarding the accuracy of proposed estimates. Since there exist many TVC methods, it is difficult to assess differences in time-varying connectivity between studies. In this paper, we present tvc_benchmarker, which is a Python package containing four simulations to test TVC methods. Here, we evaluate five different methods that together represent a wide spectrum of current approaches to estimating TVC (sliding window, tapered sliding window, multiplication of temporal derivatives, spatial distance and jackknife correlation). These simulations were designed to test each method's ability to track changes in covariance over time, which is a key property in TVC analysis. We found that all tested methods correlated positively with each other, but there were large differences in the strength of the correlations between methods. To facilitate comparisons with future TVC methods, we propose that the described simulations can act as benchmark tests for the evaluation of methods. Using tvc_benchmarker, researchers can easily add, compare and submit their own TVC methods to evaluate their performance. |
format | Article |
id | doaj.art-7af0c71adf814d59acf31ef9496b28e5 |
institution | Directory of Open Access Journals |
issn | 1553-734X, 1553-7358 |
language | English |
publishDate | 2018-05-01 |
publisher | Public Library of Science (PLoS) |
series | PLoS Computational Biology |
title | Simulations to benchmark time-varying connectivity methods for fMRI. |
url | https://doi.org/10.1371/journal.pcbi.1006196 |
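As a concrete illustration of one of the estimators named in the abstract, the sketch below computes a plain sliding-window Pearson correlation between two simulated signals. It is a minimal example under assumed parameters (window length, toy signals); it is not the tvc_benchmarker implementation or the paper's simulation code.

```python
# Minimal sliding-window correlation sketch (illustrative only; NOT the
# tvc_benchmarker implementation). Window length and the toy signals below
# are assumptions made purely for demonstration.
import numpy as np

def sliding_window_correlation(x, y, window):
    """Pearson correlation between x and y in consecutive overlapping windows."""
    n = len(x)
    tvc = np.empty(n - window + 1)
    for start in range(n - window + 1):
        xw = x[start:start + window]
        yw = y[start:start + window]
        tvc[start] = np.corrcoef(xw, yw)[0, 1]
    return tvc

# Two toy time series whose coupling changes halfway through.
rng = np.random.default_rng(0)
noise = rng.standard_normal((2, 200))
x = noise[0]
y = np.concatenate([
    noise[1][:100],                           # uncoupled first half
    0.8 * x[100:] + 0.2 * noise[1][100:],     # coupled second half
])
print(sliding_window_correlation(x, y, window=30))
```

The windowed estimates should hover near zero for the first half of the series and rise once the simulated coupling switches on, which is the kind of covariance change the benchmarked methods are meant to track.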