Overinterpretation of findings in machine learning prediction model studies in oncology: a systematic review

Objectives: In biomedical research, spin is the overinterpretation of findings, and it is a growing concern. To date, the presence of spin has not been evaluated in prognostic model research in oncology, including studies developing and validating models for individualized risk prediction.

Study Design and Setting: We conducted a systematic review, searching MEDLINE and EMBASE for oncology-related studies that developed and validated a prognostic model using machine learning, published between 1 January 2019 and 5 September 2019. We used existing spin frameworks and described areas of highly suggestive spin practices.

Results: We included 62 publications (152 developed models; 37 validated models). Reporting was inconsistent between the methods and results in 27% of studies, owing to additional analyses and selective reporting. Thirty-two of 36 applicable studies reported comparisons between developed models in their discussion, predominantly using discrimination measures to support their claims (78%). Thirty-five studies (56%) used an overly strong or leading word in their title, abstract, results, discussion, or conclusion.

Conclusion: The potential for spin needs to be considered when reading, interpreting, and using studies that developed and validated prognostic models in oncology. Researchers should report their prognostic model research carefully, using words that reflect their actual results and the strength of their evidence.

Full Description

Detailed Bibliography
Main Authors: Dhiman, P; Ma, J; Andaur Navarro, CL; Speich, B; Bullock, G; Damen, JAA; Hooft, L; Kirtley, S; Riley, RD; Van Calster, B; Moons, KGM; Collins, GS
Material Type: Journal article
Language: English
Publication Info: Elsevier, 2023
Institution: University of Oxford
Collection: OXFORD
Record ID: oxford-uuid:5623a15c-1fb9-488d-a3be-c69b3c3b4a61