Impact of random oversampling and random undersampling on the performance of prediction models developed using observational health data

Abstract

Background: There is currently no consensus on the impact of class imbalance methods on the performance of clinical prediction models. We aimed to empirically investigate the impact of random oversampling and random undersampling, two commonly used class imbalance methods, on the internal and external validation performance of prediction models developed using observational health data.

Methods: We developed and externally validated prediction models for various outcomes of interest within a target population of people with pharmaceutically treated depression across four large observational health databases. We used three different classifiers (lasso logistic regression, random forest, XGBoost) and varied the target imbalance ratio. We evaluated the impact on model performance in terms of discrimination and calibration. Discrimination was assessed using the area under the receiver operating characteristic curve (AUROC), and calibration was assessed using calibration plots.

Results: We developed and externally validated a total of 1,566 prediction models. On internal and external validation, random oversampling and random undersampling generally did not result in higher AUROCs. Moreover, we found overestimated risks, although this miscalibration could largely be corrected by recalibrating the models towards the imbalance ratios in the original dataset.

Conclusions: Overall, we found that random oversampling or random undersampling generally does not improve the internal and external validation performance of prediction models developed in large observational health databases. Based on our findings, we do not recommend applying random oversampling or random undersampling when developing prediction models in large observational health databases.

Bibliographic Details

Main Authors: Cynthia Yang, Egill A. Fridgeirsson, Jan A. Kors, Jenna M. Reps, Peter R. Rijnbeek
Format: Article
Language: English
Published: SpringerOpen, 2024-01-01
Series: Journal of Big Data
Subjects: Patient-level prediction; Clinical prediction model; Class imbalance problem; Machine learning; External validation; Clinical decision support
Online Access: https://doi.org/10.1186/s40537-023-00857-7
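The two steps at the heart of the study, resampling the training data to a target imbalance ratio and then recalibrating predicted risks back towards the original outcome prevalence, can be sketched as follows. This is a minimal illustration in Python with NumPy, not the authors' actual pipeline; the function names and the positive:negative definition of the imbalance ratio are assumptions.

```python
import numpy as np

def random_undersample(X, y, target_ratio, seed=0):
    """Randomly drop majority-class (y == 0) rows so that the
    positive:negative ratio equals target_ratio. Assumes target_ratio
    is higher than the original positive:negative ratio."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    n_neg_keep = int(len(pos) / target_ratio)
    keep = np.concatenate([pos, rng.choice(neg, size=n_neg_keep, replace=False)])
    rng.shuffle(keep)
    return X[keep], y[keep]

def recalibrate(p, prev_orig, prev_sampled):
    """Map a probability estimated on the resampled data back to the
    original prevalence via an odds (intercept) correction."""
    odds = p / (1 - p) * (prev_orig / prev_sampled) * ((1 - prev_sampled) / (1 - prev_orig))
    return odds / (1 + odds)
```

For example, a model trained on data undersampled to a 1:1 ratio (`prev_sampled = 0.5`) that outputs `p = 0.5` for a patient corresponds, after recalibration against an original prevalence of 1% (`prev_orig = 0.01`), to a risk of about 0.01, which matches the paper's observation that resampling-induced risk overestimation can largely be corrected by recalibrating towards the original imbalance ratio.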
ISSN: 2196-1115

Author Affiliations:
Cynthia Yang, Egill A. Fridgeirsson, Jan A. Kors, Peter R. Rijnbeek: Department of Medical Informatics, Erasmus University Medical Center
Jenna M. Reps: Observational Health Data Analytics, Janssen Research and Development