BLMM: Parallelised computing for big linear mixed models


Bibliographic Details
Main Authors: Maullin-Sapey, T, Nichols, TE
Format: Journal article
Language:English
Published: Elsevier 2022
description <p>Within neuroimaging, large-scale shared datasets are becoming increasingly commonplace, challenging existing tools in both the overall scale and the complexity of the study designs they must handle. As sample sizes grow, researchers have new opportunities to detect and account for the grouping factors and covariance structure present in large experimental designs. In particular, standard linear model methods cannot account for the covariance and grouping structures present in large datasets, and existing linear mixed model (LMM) tools neither scale well nor exploit the computational speed-ups afforded by vectorising computations over voxels. Further, nearly all existing imaging tools (fixed or mixed effect) do not account for variability in the patterns of missing data near cortical boundaries and the edge of the brain, and instead omit any voxel with any missing data. Yet in the large-<em>n</em> setting, such a voxel-wise deletion strategy leads to severe shrinkage of the final analysis mask. To counter these issues, we describe the “Big” Linear Mixed Models (BLMM) toolbox, an efficient Python package for large-scale fMRI LMM analyses. BLMM is designed for use on high-performance computing clusters and employs a Fisher Scoring procedure made possible by the closed-form LMM Fisher information matrix and score vectors derived in our previous work, Maullin-Sapey and Nichols (2021).</p>
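The Fisher Scoring procedure the abstract refers to iterates the update θ ← θ + I(θ)⁻¹ s(θ), where s is the score vector and I the Fisher information. The sketch below is an illustration only, not BLMM's implementation: it applies the same update to the simplest case, y ~ N(μ, σ²), where the score and information have well-known closed forms, whereas BLMM applies the idea to LMM variance-component estimation using the closed-form expressions of Maullin-Sapey and Nichols (2021).

```python
import numpy as np

def fisher_scoring_normal(y, n_iter=50, tol=1e-10):
    """Fisher Scoring for the MLE of (mu, sigma^2) under y ~ N(mu, sigma^2).

    Toy example of the generic update theta <- theta + I(theta)^{-1} s(theta);
    the model and starting values here are illustrative assumptions.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    mu, sig2 = 0.0, 1.0                      # arbitrary starting values
    for _ in range(n_iter):
        r = y - mu
        # Score vector: gradient of the log-likelihood w.r.t. (mu, sigma^2)
        score = np.array([
            r.sum() / sig2,
            -n / (2 * sig2) + (r ** 2).sum() / (2 * sig2 ** 2),
        ])
        # Fisher information (expected negative Hessian); diagonal for this model
        info = np.diag([n / sig2, n / (2 * sig2 ** 2)])
        step = np.linalg.solve(info, score)  # I^{-1} s
        mu, sig2 = mu + step[0], sig2 + step[1]
        if np.abs(step).max() < tol:         # converged
            break
    return mu, sig2
```

For this model the update recovers the usual MLEs (the sample mean and the biased sample variance) within a couple of iterations; the appeal of Fisher Scoring in the LMM setting is that the same second-order update can be formed without numerically differentiating the likelihood, and, as exploited by BLMM, evaluated in a vectorised fashion across many voxels at once.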
id oxford-uuid:99175f74-1e87-4a22-8d51-aa82d2068767
institution University of Oxford