Optimizing risk-based breast cancer screening policies with reinforcement learning
Screening programs must balance the benefit of early detection with the cost of overscreening. Here, we introduce a novel reinforcement learning-based framework for personalized screening, Tempo, and demonstrate its efficacy in the context of breast cancer. We trained our risk-based screening policies on a large screening mammography dataset from Massachusetts General Hospital (MGH; USA) and validated these policies on held-out patients from MGH and external datasets from Emory University (Emory; USA), Karolinska Institute (Karolinska; Sweden) and Chang Gung Memorial Hospital (CGMH; Taiwan). Across all test sets, we find that the Tempo policy combined with an image-based artificial intelligence (AI) risk model is significantly more efficient than current regimens used in clinical practice in terms of simulated early detection per screen frequency. Moreover, we show that the same Tempo policy can be easily adapted to a wide range of possible screening preferences, allowing clinicians to select their desired trade-off between early detection and screening costs without training new policies. Finally, we demonstrate that Tempo policies based on AI-based risk models outperform Tempo policies based on less accurate clinical risk models. Altogether, our results show that pairing AI-based risk models with agile AI-designed screening policies has the potential to improve screening programs by advancing early detection while reducing overscreening.
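The core trade-off the abstract describes — more frequent screens catch cancers earlier but cost more — can be illustrated with a toy risk-based policy. This is not the authors' Tempo implementation (which uses reinforcement learning over a learned risk model); it is a minimal sketch under illustrative assumptions, where a hypothetical `choose_interval` function scores each candidate screening interval by a detection benefit (with diminishing returns for shorter gaps) minus a clinician-chosen cost weight per screen. Note how the same function adapts to different preferences just by changing `cost_weight`, mirroring the paper's claim that one policy can serve many trade-off settings without retraining.

```python
import math

# Toy sketch (NOT the Tempo algorithm): pick a personalized screening
# interval by trading off expected early detection against screening cost.
# The utility form, interval grid, and all constants are illustrative
# assumptions, not values from the paper.

def choose_interval(annual_risk, cost_weight, intervals=(0.5, 1.0, 2.0, 3.0)):
    """Return the interval (in years) with the highest utility.

    annual_risk: predicted probability of cancer onset per year
                 (in Tempo this would come from an image-based AI risk model)
    cost_weight: penalty per screen per year, encoding the clinician's
                 preferred trade-off between detection and overscreening
    """
    def utility(interval):
        # Detection benefit: shorter gaps catch onset sooner, with
        # diminishing returns (crude exponential proxy).
        benefit = annual_risk * (1.0 - math.exp(-1.0 / interval))
        # Screening cost: screens per year, scaled by the preference weight.
        cost = cost_weight / interval
        return benefit - cost

    return max(intervals, key=utility)

# Higher predicted risk -> shorter recommended interval; a larger
# cost_weight pushes everyone toward longer intervals without any
# "retraining" of the policy.
print(choose_interval(annual_risk=0.05, cost_weight=0.01))   # high-risk patient
print(choose_interval(annual_risk=0.005, cost_weight=0.01))  # low-risk patient
```

The design point is that patient-specific risk enters only through `annual_risk`, while the screening-preference knob `cost_weight` is supplied at decision time — a simplified analogue of how a single learned policy can be steered across early-detection/overscreening trade-offs.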
Main Authors: Yala, Adam; Mikhael, Peter G; Lehman, Constance; Lin, Gigin; Strand, Fredrik; Wan, Yung-Liang; Hughes, Kevin; Satuluru, Siddharth; Kim, Thomas; Banerjee, Imon; Gichoya, Judy; Trivedi, Hari; Barzilay, Regina
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: Springer Science and Business Media LLC, 2022
Online Access: https://hdl.handle.net/1721.1/142737
author | Yala, Adam Mikhael, Peter G Lehman, Constance Lin, Gigin Strand, Fredrik Wan, Yung-Liang Hughes, Kevin Satuluru, Siddharth Kim, Thomas Banerjee, Imon Gichoya, Judy Trivedi, Hari Barzilay, Regina |
author2 | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
collection | MIT |
description | Screening programs must balance the benefit of early detection with the cost of overscreening. Here, we introduce a novel reinforcement learning-based framework for personalized screening, Tempo, and demonstrate its efficacy in the context of breast cancer. We trained our risk-based screening policies on a large screening mammography dataset from Massachusetts General Hospital (MGH; USA) and validated these policies on held-out patients from MGH and external datasets from Emory University (Emory; USA), Karolinska Institute (Karolinska; Sweden) and Chang Gung Memorial Hospital (CGMH; Taiwan). Across all test sets, we find that the Tempo policy combined with an image-based artificial intelligence (AI) risk model is significantly more efficient than current regimens used in clinical practice in terms of simulated early detection per screen frequency. Moreover, we show that the same Tempo policy can be easily adapted to a wide range of possible screening preferences, allowing clinicians to select their desired trade-off between early detection and screening costs without training new policies. Finally, we demonstrate that Tempo policies based on AI-based risk models outperform Tempo policies based on less accurate clinical risk models. Altogether, our results show that pairing AI-based risk models with agile AI-designed screening policies has the potential to improve screening programs by advancing early detection while reducing overscreening. |
format | Article |
id | mit-1721.1/142737 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2022 |
publisher | Springer Science and Business Media LLC |
record_format | dspace |
date_issued | 2022-01 |
date_available | 2022-05-25T18:40:35Z |
type | http://purl.org/eprint/type/JournalArticle |
citation | Yala, Adam, Mikhael, Peter G, Lehman, Constance, Lin, Gigin, Strand, Fredrik et al. 2022. "Optimizing risk-based breast cancer screening policies with reinforcement learning." Nature Medicine, 28 (1). |
doi | 10.1038/s41591-021-01599-w |
journal | Nature Medicine |
rights | Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-nc-sa/4.0/) |
file_format | application/pdf |
source | Other Repository |
title | Optimizing risk-based breast cancer screening policies with reinforcement learning |
url | https://hdl.handle.net/1721.1/142737 |