Interpretability With Accurate Small Models
Models often need to be constrained to a certain size for them to be considered interpretable. For example, a decision tree of depth 5 is much easier to understand than one of depth 50. Limiting model size, however, often reduces accuracy. We suggest a practical technique that minimizes this trade-off.
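The trade-off the abstract describes is easy to reproduce. The sketch below is a minimal illustration, not the paper's technique: the synthetic dataset and the depths 5 and 50 are assumptions chosen only to mirror the example in the abstract. It trains two scikit-learn decision trees and compares their test accuracy; the shallow tree typically trails the unconstrained one, though the size of the gap depends on the data.

```python
# Minimal sketch of the size/accuracy trade-off (illustrative only;
# the dataset and depths are assumptions, not taken from the paper).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data stands in for a real task.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Compare an interpretable shallow tree against a much deeper one.
for depth in (5, 50):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(f"depth={depth:>2}  test accuracy={tree.score(X_test, y_test):.3f}")
```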
| Main Authors: | Abhishek Ghose, Balaraman Ravindran |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2020-02-01 |
| Series: | Frontiers in Artificial Intelligence |
| Online Access: | https://www.frontiersin.org/article/10.3389/frai.2020.00003/full |
Similar Items
- A Tutorial on Levels of Granularity: From Histograms to Clusters to Predictive Distributions
  by: Stanley L. Sclove
  Published: (2018-06-01)
- Interpretable spatio-temporal modeling for soil temperature prediction
  by: Xiaoning Li, et al.
  Published: (2023-12-01)
- An Automated and Interpretable Machine Learning Scheme for Power System Transient Stability Assessment
  by: Fang Liu, et al.
  Published: (2023-02-01)
- Intrinsically Interpretable Gaussian Mixture Model
  by: Nourah Alangari, et al.
  Published: (2023-03-01)
- Beyond Importance Scores: Interpreting Tabular ML by Visualizing Feature Semantics
  by: Amirata Ghorbani, et al.
  Published: (2021-12-01)