Deficiencies in the publication and reporting of the results of systematic reviews presented at scientific medical conferences


Bibliographic Details
Main Authors: Hopewell, S, Boutron, I, Altman, D, Ravaud, P
Format: Journal article
Language: English
Published: Elsevier 2015
Description
Summary:
Objectives: To evaluate the publication and quality of reporting of abstracts of systematic reviews presented at scientific medical conferences.
Study Design and Setting: We included all abstracts of systematic reviews published in the proceedings of nine leading international conferences in 2010. For each conference abstract, we searched PubMed (January 1, 2010, to June 2013) to identify its corresponding full publication. We assessed the extent to which conference abstracts and their corresponding journal abstracts reported items included in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Abstracts checklist and recorded any important discrepancies between sources.
Results: We identified 197 abstracts of systematic reviews, representing <1% of the total number of conference abstracts presented. Of these, 53% were published in full; the median time to publication was 14 months (interquartile range, 6.6–20.1 months). Although most conference and journal abstracts reported details of included studies (conference n = 83 of 103, 81% vs. journal n = 81 of 103, 79%), size and direction of effect (76% vs. 75%), and conclusions (79% vs. 81%), many failed to report the date of search (27% vs. 25%), assessment of risk of bias (18% vs. 12%), the result for the main efficacy outcome(s) including the number of studies (37% vs. 31%) and participants (30% vs. 20%), harm(s) (17% vs. 17%), strengths (17% vs. 13%) and limitations (36% vs. 30%) of the evidence, or funding source (1% vs. 0%). There were discrepancies between journal and corresponding conference abstracts, including deletion of studies (13%), changes in reported efficacy (11%) and harm (10%) outcome(s), and changes in the nature or direction of conclusions (24%).
Conclusion: Despite the importance of systematic reviews in the delivery of evidence-based health care, very few are presented at scientific conferences and only half of those presented are published in full. Serious deficiencies in the reporting of abstracts of systematic reviews make it difficult for readers to reliably assess their findings.