Correcting for batch effects in case-control microbiome studies

Bibliographic Details
Main Authors: Gibbons, Sean Michael; Duvallet, Claire; Alm, Eric J
Other Authors: Massachusetts Institute of Technology. Department of Biological Engineering
Format: Article
Published: Public Library of Science (PLoS) 2018
Online Access: http://hdl.handle.net/1721.1/117510
https://orcid.org/0000-0002-8093-8394
https://orcid.org/0000-0001-8294-9364
Description
Summary: High-throughput data generation platforms, such as mass spectrometry, microarrays, and second-generation sequencing, are susceptible to batch effects arising from run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch-correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure in which features (i.e., bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study, prior to pooling data across studies. We compare this percentile-normalization method with traditional meta-analysis methods for combining independent p-values, and with limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses.
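The core idea of the percentile-normalization procedure described in the summary can be sketched as follows. This is a hypothetical illustration under stated assumptions, not the authors' released implementation: the function name `percentile_normalize` and the samples-by-features array layout are my own choices, and ties are handled with a simple "fraction of controls less than or equal" rule rather than any specific convention from the paper.

```python
import numpy as np

def percentile_normalize(case, control):
    """Sketch of within-study percentile normalization (hypothetical).

    For each feature (column), each case value is converted to its
    percentile within the control distribution of the same feature,
    so that corrected values are comparable across studies.

    case:    array of shape (n_case_samples, n_features)
    control: array of shape (n_control_samples, n_features)
    Returns an array of shape (n_case_samples, n_features) with
    values in [0, 100].
    """
    case = np.asarray(case, dtype=float)
    control = np.asarray(control, dtype=float)
    n_control = control.shape[0]
    out = np.empty_like(case)
    for j in range(case.shape[1]):
        sorted_controls = np.sort(control[:, j])
        # Percentile = fraction of control values <= case value, x100.
        ranks = np.searchsorted(sorted_controls, case[:, j], side="right")
        out[:, j] = 100.0 * ranks / n_control
    return out
```

For example, a case sample whose abundance for some taxon falls in the middle of that taxon's control distribution maps to roughly the 50th percentile; because only ranks are used, the transformation is non-parametric, which matches the model-free framing in the summary.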