TimeREISE: Time Series Randomized Evolving Input Sample Explanation
Deep neural networks are among the most successful classifiers across different domains. However, their use in safety-critical areas is limited by their lack of interpretability. The research field of explainable artificial intelligence addresses this problem. However, most inter...
| Main Authors | Dominique Mercier, Andreas Dengel, Sheraz Ahmed |
|---|---|
| Format | Article |
| Language | English |
| Published | MDPI AG, 2022-05-01 |
| Series | Sensors |
| Subjects | |
| Online Access | https://www.mdpi.com/1424-8220/22/11/4084 |
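Judging by the title, TimeREISE adapts RISE-style randomized input masking to time series classification: random masks are applied to the input, and each time step is credited by the model's confidence on the masked samples. The sketch below illustrates that general idea only; it is not the authors' actual algorithm, and `model_predict`, the mask scheme, and all parameter names are assumptions made for illustration.

```python
import numpy as np

def rise_style_attribution(model_predict, x, n_masks=500, keep_prob=0.5, seed=0):
    """RISE-style saliency for a 1-D time series.

    model_predict: maps a (length,) array to a class confidence in [0, 1].
    Returns a (length,) importance map: time steps that tend to appear in
    high-confidence masked samples receive higher scores.
    """
    rng = np.random.default_rng(seed)
    length = x.shape[0]
    saliency = np.zeros(length)
    for _ in range(n_masks):
        # Keep each time step with probability keep_prob, zero out the rest.
        mask = (rng.random(length) < keep_prob).astype(float)
        score = model_predict(x * mask)   # confidence on the masked input
        saliency += score * mask          # credit the kept time steps
    # Normalize by the expected number of times each step was kept.
    return saliency / (n_masks * keep_prob)
```

As a sanity check, a toy model whose confidence depends only on one segment of the series should yield higher saliency inside that segment than outside it.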
Similar Items

- TSInsight: A Local-Global Attribution Framework for Interpretability in Time Series Data
  by: Shoaib Ahmed Siddiqui, et al. Published: (2021-11-01)
- Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI
  by: Adriano Lucieri, et al. Published: (2023-07-01)
- TSViz: Demystification of Deep Learning Models for Time-Series Analysis
  by: Shoaib Ahmed Siddiqui, et al. Published: (2019-01-01)
- The privacy-explainability trade-off: unraveling the impacts of differential privacy and federated learning on attribution methods
  by: Saifullah Saifullah, et al. Published: (2024-07-01)
- Comparison and Explanation of Forecasting Algorithms for Energy Time Series
  by: Yuyi Zhang, et al. Published: (2021-11-01)