A robust estimator of mutual information for deep learning interpretability

We develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning (DL) models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced ‘Jimmie’), an algorithm based on Gaussian mixture models that can be applied to both discrete and continuous settings. GMM-MI is computationally efficient, robust to the choice of hyperparameters and provides the uncertainty on the MI estimate due to the finite sample size. We extensively validate GMM-MI on toy data for which the ground truth MI is known, comparing its performance against established MI estimators. We then demonstrate the use of our MI estimator in the context of representation learning, working with synthetic data and physical datasets describing highly non-linear processes. We train DL models to encode high-dimensional data within a meaningful compressed (latent) representation, and use GMM-MI to quantify both the level of disentanglement between the latent variables, and their association with relevant physical quantities, thus unlocking the interpretability of the latent representation. We make GMM-MI publicly available in this GitHub repository.

Bibliographic Details
Main Authors: Davide Piras, Hiranya V Peiris, Andrew Pontzen, Luisa Lucie-Smith, Ningyuan Guo, Brian Nord
Format: Article
Language: English
Published: IOP Publishing 2023-01-01
Series: Machine Learning: Science and Technology
Subjects: deep learning, mutual information, interpretability, representation learning
Online Access: https://doi.org/10.1088/2632-2153/acc444
_version_ 1797844611258908672
author Davide Piras
Hiranya V Peiris
Andrew Pontzen
Luisa Lucie-Smith
Ningyuan Guo
Brian Nord
author_facet Davide Piras
Hiranya V Peiris
Andrew Pontzen
Luisa Lucie-Smith
Ningyuan Guo
Brian Nord
author_sort Davide Piras
collection DOAJ
description We develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning (DL) models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced ‘Jimmie’), an algorithm based on Gaussian mixture models that can be applied to both discrete and continuous settings. GMM-MI is computationally efficient, robust to the choice of hyperparameters and provides the uncertainty on the MI estimate due to the finite sample size. We extensively validate GMM-MI on toy data for which the ground truth MI is known, comparing its performance against established MI estimators. We then demonstrate the use of our MI estimator in the context of representation learning, working with synthetic data and physical datasets describing highly non-linear processes. We train DL models to encode high-dimensional data within a meaningful compressed (latent) representation, and use GMM-MI to quantify both the level of disentanglement between the latent variables, and their association with relevant physical quantities, thus unlocking the interpretability of the latent representation. We make GMM-MI publicly available in this GitHub repository.
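The description above outlines the core idea behind GMM-MI: fit a Gaussian mixture model to the joint distribution of two variables and read the mutual information off the fitted density. The sketch below illustrates that idea only; it is not the GMM-MI package's actual API. It uses scikit-learn's GaussianMixture, and the function name, component count and Monte Carlo sample size are assumptions made purely for this example.

```python
# Minimal, illustrative sketch of GMM-based mutual information estimation in the
# spirit of the abstract above. NOT the GMM-MI package's API: it uses scikit-learn's
# GaussianMixture, and the function name, number of components and Monte Carlo
# sample size are assumptions made for this example.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture


def gmm_mutual_information(xy, n_components=3, n_mc=50_000, seed=0):
    """Estimate MI (in nats) between the two columns of `xy` from a fitted GMM."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(xy)
    samples, _ = gmm.sample(n_mc)           # Monte Carlo draws from the joint fit
    log_joint = gmm.score_samples(samples)  # log p(x, y) under the fitted mixture

    # The marginals of a GMM are 1D Gaussian mixtures with the same weights,
    # so log p(x) and log p(y) follow directly from the fitted parameters.
    def log_marginal(values, dim):
        comp = np.array([norm.logpdf(values, loc=m[dim], scale=np.sqrt(c[dim, dim]))
                         for m, c in zip(gmm.means_, gmm.covariances_)])
        return np.logaddexp.reduce(np.log(gmm.weights_)[:, None] + comp, axis=0)

    log_px = log_marginal(samples[:, 0], 0)
    log_py = log_marginal(samples[:, 1], 1)
    # MI = E[log p(x, y) - log p(x) - log p(y)], averaged over the joint samples.
    return np.mean(log_joint - log_px - log_py)


# Toy check on correlated Gaussian data, where the true MI is -0.5 * log(1 - rho^2).
rng = np.random.default_rng(42)
rho = 0.8
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5_000)
print("GMM estimate:", gmm_mutual_information(xy))
print("analytic MI :", -0.5 * np.log(1.0 - rho**2))
```

Unlike this sketch, the estimator described in the abstract also quantifies the uncertainty on the MI value arising from the finite sample size (for instance by refitting on resampled data) and is designed to be robust to the choice of GMM hyperparameters; for the authors' implementation, see the GitHub repository they reference.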
first_indexed 2024-04-09T17:25:05Z
format Article
id doaj.art-8563df984bc14bcc9d888b2478eaf82f
institution Directory Open Access Journal
issn 2632-2153
language English
last_indexed 2024-04-09T17:25:05Z
publishDate 2023-01-01
publisher IOP Publishing
record_format Article
series Machine Learning: Science and Technology
spelling doaj.art-8563df984bc14bcc9d888b2478eaf82f2023-04-18T13:53:05ZengIOP PublishingMachine Learning: Science and Technology2632-21532023-01-014202500610.1088/2632-2153/acc444A robust estimator of mutual information for deep learning interpretabilityDavide Piras0https://orcid.org/0000-0002-9836-2661Hiranya V Peiris1Andrew Pontzen2Luisa Lucie-Smith3Ningyuan Guo4Brian Nord5Department of Physics & Astronomy, University College London , Gower Street, London WC1E 6BT, United Kingdom; Département de Physique Théorique, Université de Genève , 24 Quai Ernest Ansermet, 1211 Genève 4, SwitzerlandDepartment of Physics & Astronomy, University College London , Gower Street, London WC1E 6BT, United Kingdom; The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University , AlbaNova, Stockholm SE-10691, SwedenDepartment of Physics & Astronomy, University College London , Gower Street, London WC1E 6BT, United KingdomMax-Planck-Institut für Astrophysik , Karl-Schwarzschild-Str. 1, 85748 Garching, GermanyDepartment of Physics & Astronomy, University College London , Gower Street, London WC1E 6BT, United KingdomFermi National Accelerator Laboratory , PO Box 500, Batavia, IL 60510, United States of America; Department of Astronomy & Astrophysics, University of Chicago , Chicago, IL 60637, United States of America; Kavli Institute for Cosmological Physics, University of Chicago , Chicago, IL 60637, United States of AmericaWe develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning (DL) models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced ‘Jimmie’), an algorithm based on Gaussian mixture models that can be applied to both discrete and continuous settings. GMM-MI is computationally efficient, robust to the choice of hyperparameters and provides the uncertainty on the MI estimate due to the finite sample size. We extensively validate GMM-MI on toy data for which the ground truth MI is known, comparing its performance against established MI estimators. We then demonstrate the use of our MI estimator in the context of representation learning, working with synthetic data and physical datasets describing highly non-linear processes. We train DL models to encode high-dimensional data within a meaningful compressed (latent) representation, and use GMM-MI to quantify both the level of disentanglement between the latent variables, and their association with relevant physical quantities, thus unlocking the interpretability of the latent representation. We make GMM-MI publicly available in this GitHub repository.https://doi.org/10.1088/2632-2153/acc444deep learningmutual informationinterpretabilityrepresentation learning
spellingShingle Davide Piras
Hiranya V Peiris
Andrew Pontzen
Luisa Lucie-Smith
Ningyuan Guo
Brian Nord
A robust estimator of mutual information for deep learning interpretability
Machine Learning: Science and Technology
deep learning
mutual information
interpretability
representation learning
title A robust estimator of mutual information for deep learning interpretability
title_full A robust estimator of mutual information for deep learning interpretability
title_fullStr A robust estimator of mutual information for deep learning interpretability
title_full_unstemmed A robust estimator of mutual information for deep learning interpretability
title_short A robust estimator of mutual information for deep learning interpretability
title_sort robust estimator of mutual information for deep learning interpretability
topic deep learning
mutual information
interpretability
representation learning
url https://doi.org/10.1088/2632-2153/acc444
work_keys_str_mv AT davidepiras arobustestimatorofmutualinformationfordeeplearninginterpretability
AT hiranyavpeiris arobustestimatorofmutualinformationfordeeplearninginterpretability
AT andrewpontzen arobustestimatorofmutualinformationfordeeplearninginterpretability
AT luisaluciesmith arobustestimatorofmutualinformationfordeeplearninginterpretability
AT ningyuanguo arobustestimatorofmutualinformationfordeeplearninginterpretability
AT briannord arobustestimatorofmutualinformationfordeeplearninginterpretability
AT davidepiras robustestimatorofmutualinformationfordeeplearninginterpretability
AT hiranyavpeiris robustestimatorofmutualinformationfordeeplearninginterpretability
AT andrewpontzen robustestimatorofmutualinformationfordeeplearninginterpretability
AT luisaluciesmith robustestimatorofmutualinformationfordeeplearninginterpretability
AT ningyuanguo robustestimatorofmutualinformationfordeeplearninginterpretability
AT briannord robustestimatorofmutualinformationfordeeplearninginterpretability