A detailed study of interpretability of deep neural network based top taggers
Recent developments in the methods of explainable artificial intelligence (XAI) allow researchers to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input–output relationships and realizing how data connects with machine learning models. In this paper w...
| Main Authors: | Ayush Khot, Mark S Neubauer, Avik Roy |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IOP Publishing, 2023-01-01 |
| Series: | Machine Learning: Science and Technology |
| Online Access: | https://doi.org/10.1088/2632-2153/ace0a1 |
Similar Items
- Correcting gradient-based interpretations of deep neural networks for genomics
  by: Antonio Majdandzic, et al.
  Published: (2023-05-01)
- Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research
  by: Yi-han Sheu, et al.
  Published: (2020-10-01)
- An Interpretable Deep Learning Model for Automatic Sound Classification
  by: Pablo Zinemanas, et al.
  Published: (2021-04-01)
- Explainability of deep learning models in medical video analysis: a survey
  by: Michal Kolarik, et al.
  Published: (2023-03-01)
- Effects of Class Imbalance Countermeasures on Interpretability
  by: David Cemernek, et al.
  Published: (2024-01-01)