Identifying unreliable predictions in clinical risk models

© 2020, The Author(s). The ability to identify patients who are likely to have an adverse outcome is an essential component of good clinical care. Therefore, predictive risk stratification models play an important role in clinical decision making. Determining whether a given predictive model is suitable for clinical use usually involves evaluating the model’s performance on large patient datasets using standard statistical measures of success (e.g., accuracy, discriminatory ability). However, as these metrics correspond to averages over patients who have a range of different characteristics, it is difficult to discern whether an individual prediction on a given patient should be trusted using these measures alone. In this paper, we introduce a new method for identifying patient subgroups where a predictive model is expected to be poor, thereby highlighting when a given prediction is misleading and should not be trusted. The resulting “unreliability score” can be computed for any clinical risk model and is suitable in the setting of large class imbalance, a situation often encountered in healthcare settings. Using data from more than 40,000 patients in the Global Registry of Acute Coronary Events (GRACE), we demonstrate that patients with high unreliability scores form a subgroup in which the predictive model has both decreased accuracy and decreased discriminatory ability.
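
The abstract's evaluation can be made concrete with a minimal sketch: given per-patient predicted risks and per-patient unreliability scores, split the cohort at a score cutoff and compare accuracy and discrimination (AUROC) between the high- and low-unreliability subgroups. The paper's actual scoring method is not reproduced here; the unreliability scores below are random placeholders, and the 90th-percentile cutoff, 0.5 decision threshold, scikit-learn metrics, and synthetic imbalanced data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a subgroup comparison like the one described in the abstract.
# `unreliability` is a placeholder array; the paper's unreliability score
# computation is not reproduced here.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def subgroup_performance(y_true, y_prob, unreliability, quantile=0.9, threshold=0.5):
    """Split patients at the `quantile` of the unreliability score and report
    accuracy and AUROC within each subgroup (illustrative defaults)."""
    cutoff = np.quantile(unreliability, quantile)
    high = unreliability >= cutoff
    results = {}
    for name, mask in [("high_unreliability", high), ("low_unreliability", ~high)]:
        y_t, y_p = y_true[mask], y_prob[mask]
        results[name] = {
            "n": int(mask.sum()),
            "accuracy": accuracy_score(y_t, (y_p >= threshold).astype(int)),
            # AUROC is undefined if a subgroup contains only one class
            "auroc": roc_auc_score(y_t, y_p) if len(np.unique(y_t)) > 1 else float("nan"),
        }
    return results

if __name__ == "__main__":
    # Synthetic stand-in for registry-style data with large class imbalance.
    rng = np.random.default_rng(0)
    n = 5000
    y_true = rng.binomial(1, 0.05, size=n)
    y_prob = np.clip(0.05 + 0.3 * y_true + rng.normal(0, 0.1, size=n), 0, 1)
    unreliability = rng.uniform(0, 1, size=n)  # placeholder scores
    print(subgroup_performance(y_true, y_prob, unreliability))
```

With the scores proposed in the paper, the abstract indicates that the high-unreliability subgroup would show both lower accuracy and lower discriminatory ability than the remaining patients.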

Bibliographic Details
Main Authors: Myers, Paul D, Ng, Kenney, Severson, Kristen, Kartoun, Uri, Dai, Wangzhi, Huang, Wei, Anderson, Frederick A, Stultz, Collin M
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Research Laboratory of Electronics; Massachusetts Institute of Technology. Institute for Medical Engineering & Science
Format: Article
Language: English
Published: Springer Science and Business Media LLC 2021
Online Access: https://hdl.handle.net/1721.1/133649
Journal: npj Digital Medicine
DOI: 10.1038/S41746-019-0209-7
Rights: Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/)