Automated Interpretation of Machine Learning Models
As machine learning (ML) models are increasingly deployed in production, there’s a pressing need to ensure their reliability through auditing, debugging, and testing. Interpretability, the subfield that studies how ML models make decisions, aspires to meet this need but traditionally relies on human...
Main Author: | Hernandez, Evan |
---|---|
Other Authors: | Andreas, Jacob |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2024 |
Online Access: | https://hdl.handle.net/1721.1/156277 |
Similar Items
- Unsupervised Machine Learning Applied to Seismic Interpretation: Towards an Unsupervised Automated Interpretation Tool
  by: Alimed Celecia, et al.
  Published: (2021-09-01)
- An Automated and Interpretable Machine Learning Scheme for Power System Transient Stability Assessment
  by: Fang Liu, et al.
  Published: (2023-02-01)
- Interpretable Models in Probabilistic Machine Learning
  by: Kim, H
  Published: (2019)
- NASPY: automated extraction of automated machine learning models
  by: Lou, Xiaoxuan, et al.
  Published: (2023)
- Towards Interpretable Machine Learning for Automated Damage Detection Based on Ultrasonic Guided Waves
  by: Christopher Schnur, et al.
  Published: (2022-01-01)