A robust estimator of mutual information for deep learning interpretability
We develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning (DL) models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced ‘Jimmie’), an algorithm based on Gaussian mixture models.
Main Authors: Davide Piras, Hiranya V Peiris, Andrew Pontzen, Luisa Lucie-Smith, Ningyuan Guo, Brian Nord
Format: Article
Language: English
Published: IOP Publishing, 2023-01-01
Series: Machine Learning: Science and Technology
Online Access: https://doi.org/10.1088/2632-2153/acc444
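The abstract describes estimating MI by fitting Gaussian mixture models to finite samples. A minimal sketch of that core idea is below, assuming scikit-learn's `GaussianMixture` for the fit and a Monte Carlo average of the pointwise log-density ratio for the estimate; the actual GMM-MI algorithm additionally performs model selection and bootstrapped uncertainty quantification, which are omitted here, and the function name `gmm_mi_sketch` is illustrative, not the package's API.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmm_mi_sketch(x, y, n_components=1, n_mc=100_000, seed=0):
    """Rough MI estimate (in nats) between 1-D samples x and y via a GMM fit.

    Sketch only: fits a single GMM to the joint samples, then Monte Carlo
    averages log p(x,y) - log p(x) - log p(y) under the fitted density.
    """
    xy = np.column_stack([x, y])
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(xy)

    # Draw Monte Carlo samples from the fitted joint density.
    s, _ = gmm.sample(n_mc)
    log_joint = gmm.score_samples(s)

    # Marginals of a GMM are 1-D GMMs built from the corresponding
    # entries of the component means and covariances.
    w = gmm.weights_
    mx, my = gmm.means_[:, 0], gmm.means_[:, 1]
    sx = np.sqrt(gmm.covariances_[:, 0, 0])
    sy = np.sqrt(gmm.covariances_[:, 1, 1])
    log_px = np.log(np.sum(w * norm.pdf(s[:, [0]], mx, sx), axis=1))
    log_py = np.log(np.sum(w * norm.pdf(s[:, [1]], my, sy), axis=1))

    # MI = E_{p(x,y)}[log p(x,y) - log p(x) - log p(y)]
    return np.mean(log_joint - log_px - log_py)
```

For bivariate Gaussian data with correlation rho, the true MI is -0.5 log(1 - rho^2), which gives a quick sanity check of the estimator.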
Similar Items
- The Resolved Mutual Information Function as a Structural Fingerprint of Biomolecular Sequences for Interpretable Machine Learning Classifiers
  by: Katrin Sophie Bohnsack, et al.
  Published: (2021-10-01)
- Interpretability Optimization Method Based on Mutual Transfer of Local Attention Map
  by: CHENG Ke-yang, WANG Ning, CUI Hong-gang, ZHAN Yong-zhao
  Published: (2022-05-01)
- Interpretable molecular encodings and representations for machine learning tasks
  by: Moritz Weckbecker, et al.
  Published: (2024-12-01)
- Stable and Fast Deep Mutual Information Maximization Based on Wasserstein Distance
  by: Xing He, et al.
  Published: (2023-11-01)
- Real-Time UAV Tracking Through Disentangled Representation With Mutual Information Maximization
  by: Hengzhou Ye, et al.
  Published: (2024-01-01)