Interpretations of Machine Learning and Their Application to Therapeutic Design

We introduce a framework for interpreting black-box machine learning (ML) models, discover overinterpretation as a failure mode of deep neural networks, and discuss how ML methods can be applied for therapeutic design, including a pan-variant COVID-19 vaccine. While ML models are widely deployed and often attain superior accuracy compared to traditional approaches, deep learning models are functionally complex and difficult to interpret, limiting their adoption in high-stakes environments. In addition to safer deployment, model interpretation also aids scientific discovery, where validated ML models trained on experimental data can be used to uncover biological mechanisms or to design therapeutics through biologically faithful objective functions, such as vaccine population coverage. For interpretation of black-box ML models, we introduce the Sufficient Input Subsets (SIS) method that is model-agnostic, faithful to underlying functions, and conceptually straightforward. We demonstrate ML model interpretation with SIS in natural language, computer vision, and computational biology settings. Using the SIS framework, we discover overinterpretation, a novel failure mode of deep neural networks that can hinder generalizability in real-world environments. We posit that overinterpretation results from degenerate signals present in training datasets. Next, using ML models that have been calibrated with experimental immunogenicity data, we develop a flexible framework for the computational design of robust peptide vaccines. Our framework optimizes the n-times coverage of each individual in the population to activate broader T cell immune responses, account for differences in peptide immunogenicity across individuals, and reduce the chance of vaccine escape by mutations. Using this framework, we design vaccines for SARS-CoV-2 that have superior population coverage to published baselines and are conserved across variants of concern. We validate this approach in vivo through a COVID-19 animal challenge study of our vaccine. This thesis demonstrates distinct ways model interpretation enables ML methods to be faithfully deployed in biological settings.
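The sufficient-subset idea behind the interpretation framework can be illustrated with a minimal, model-agnostic sketch: greedy backward selection repeatedly masks the feature whose removal hurts the model's confidence least, then rebuilds from the end of that masking order until confidence first clears a threshold. The function names, the masking baseline (zero), and the threshold value below are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def sufficient_input_subset(f, x, mask_value=0.0, threshold=0.9):
    """One backward-selection pass, sketched.

    f: callable mapping a 1-D feature array to a confidence in [0, 1]
    x: numpy array whose prediction we want to explain
    Returns indices of a subset S such that keeping only S (everything
    else masked) still yields f(...) >= threshold.
    """
    remaining = set(range(len(x)))
    order = []          # features in the order they were masked
    masked = x.copy()
    # Backward selection: mask the feature whose removal costs the least.
    while remaining:
        best_i, best_conf = None, -np.inf
        for i in remaining:
            trial = masked.copy()
            trial[i] = mask_value
            conf = f(trial)
            if conf > best_conf:
                best_i, best_conf = i, conf
        masked[best_i] = mask_value
        remaining.discard(best_i)
        order.append(best_i)
    # Rebuild from the last-masked (most important) features until the
    # confidence threshold is met: that prefix is the sufficient subset.
    subset = []
    trial = np.full_like(x, mask_value)
    for i in reversed(order):
        subset.append(i)
        trial[i] = x[i]
        if f(trial) >= threshold:
            break
    return sorted(subset)  # may be the full input if threshold is unreachable
```

On a toy model whose confidence depends only on one feature, the sketch recovers exactly that feature as the sufficient subset; the published method additionally iterates to extract multiple disjoint subsets, which is omitted here for brevity.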

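The n-times coverage objective from the abstract can be sketched as maximizing, over candidate peptide sets, the number of individuals with at least n predicted immunogenic peptide hits. The surrogate score Σ_j min(hits_j, n) is monotone submodular, so a simple greedy selector gives a reasonable illustration; the thesis's actual formulation, the binary hit matrix, and the parameter names here are illustrative assumptions.

```python
import numpy as np

def greedy_n_times_coverage(hits, k, n=3):
    """Greedy sketch of n-times population coverage maximization.

    hits: binary matrix (num_peptides x num_individuals); hits[p, j] = 1
    if peptide p is predicted immunogenic for individual j's HLA genotype.
    Selects up to k peptides, each step adding the peptide with the
    largest marginal gain in the capped score sum_j min(count_j, n).
    Returns the chosen peptide indices and the fraction of individuals
    covered at least n times.
    """
    num_peptides, num_individuals = hits.shape
    chosen = []
    counts = np.zeros(num_individuals, dtype=int)  # per-individual hit counts
    for _ in range(k):
        base = np.minimum(counts, n).sum()
        best_p, best_gain = None, 0
        for p in range(num_peptides):
            if p in chosen:
                continue
            gain = np.minimum(counts + hits[p], n).sum() - base
            if gain > best_gain:
                best_p, best_gain = p, gain
        if best_gain == 0:  # no peptide makes progress toward n-coverage
            break
        chosen.append(best_p)
        counts += hits[best_p]
    return chosen, float((counts >= n).mean())
```

Greedy maximization of a monotone submodular objective carries a (1 - 1/e) approximation guarantee, which is why the capped-count surrogate is used instead of directly counting individuals already at the n threshold (under which every first pick would have zero gain for n > 1).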

Bibliographic Details
Main Author: Carter, Brandon M.
Other Authors: Gifford, David K.; Jaakkola, Tommi S.
Format: Thesis (Ph.D.)
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Published: Massachusetts Institute of Technology, 2023
Rights: In Copyright - Educational Use Permitted; copyright retained by author(s) (https://rightsstatements.org/page/InC-EDU/1.0/)
Online Access: https://hdl.handle.net/1721.1/151487
ORCID: https://orcid.org/0000-0002-6318-2521