Explainable AI via learning to optimize
Abstract Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the “learn to optimize” (L2O) methodology wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g. satisfaction of constraints). We also propose use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided in the applications of dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets. Code and additional documentation can be found at https://xai-l2o.research.typal.academy .
Main Authors: | Howard Heaton, Samy Wu Fung |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2023-06-01 |
Series: | Scientific Reports |
Online Access: | https://doi.org/10.1038/s41598-023-36249-3 |
_version_ | 1797795652114055168 |
---|---|
author | Howard Heaton Samy Wu Fung |
author_facet | Howard Heaton Samy Wu Fung |
author_sort | Howard Heaton |
collection | DOAJ |
description | Abstract Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the “learn to optimize” (L2O) methodology wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g. satisfaction of constraints). We also propose use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided in the applications of dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets. Code and additional documentation can be found at https://xai-l2o.research.typal.academy . |
first_indexed | 2024-03-13T03:21:14Z |
format | Article |
id | doaj.art-7ffe34bacc224b498976034b1ce26dcb |
institution | Directory Open Access Journal |
issn | 2045-2322 |
language | English |
last_indexed | 2024-03-13T03:21:14Z |
publishDate | 2023-06-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Scientific Reports |
spelling | doaj.art-7ffe34bacc224b498976034b1ce26dcb2023-06-25T11:17:46ZengNature PortfolioScientific Reports2045-23222023-06-0113111210.1038/s41598-023-36249-3Explainable AI via learning to optimizeHoward Heaton0Samy Wu Fung1Typal AcademyDepartment of Applied Mathematics and Statistics, Colorado School of MinesAbstract Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the “learn to optimize” (L2O) methodology wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g. satisfaction of constraints). We also propose use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided in the applications of dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets. Code and additional documentation can be found at https://xai-l2o.research.typal.academy .https://doi.org/10.1038/s41598-023-36249-3 |
spellingShingle | Howard Heaton Samy Wu Fung Explainable AI via learning to optimize Scientific Reports |
title | Explainable AI via learning to optimize |
title_full | Explainable AI via learning to optimize |
title_fullStr | Explainable AI via learning to optimize |
title_full_unstemmed | Explainable AI via learning to optimize |
title_short | Explainable AI via learning to optimize |
title_sort | explainable ai via learning to optimize |
url | https://doi.org/10.1038/s41598-023-36249-3 |
work_keys_str_mv | AT howardheaton explainableaivialearningtooptimize AT samywufung explainableaivialearningtooptimize |
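The record's abstract describes the L2O methodology, in which each inference is produced by (approximately) solving a data-driven optimization problem. As an illustration only — this is not the authors' implementation — the sketch below uses an unrolled ISTA iteration for the dictionary-based sparse recovery application the abstract mentions. The step size `step` and weight `lam` stand in for parameters that an L2O model would learn from training data; here they are fixed placeholder values.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def l2o_ista(A, b, step, lam, n_iters=1000):
    """Unrolled ISTA: each call solves min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    In an L2O model, `step` and `lam` (or per-iteration variants of them)
    would be learned; here they are hand-picked placeholders."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                         # gradient of smooth term
        x = soft_threshold(x - step * grad, step * lam)  # prox step enforces sparsity
    return x

# Tiny noiseless sparse-recovery example (all values illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)   # random dictionary
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]            # 3-sparse ground truth
b = A @ x_true
x_hat = l2o_ista(A, b, step=0.1, lam=0.02)
```

Because the iteration is an explicit optimization algorithm, the prior knowledge (sparsity, via the l1 penalty) is encoded directly, and quantities such as the residual `A @ x_hat - b` can serve as an interpretable check on whether a given inference is trustworthy — the role the abstract assigns to certificates.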