Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. As black box models, owing to their multilayer nonlinear structure, Deep Neural Networks are often criticized as being non-transparent and their predictions not traceable by humans. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. By basing decision-making algorithms on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to a lack of transparency. Hence, several so-called explanators, or explainers, have been developed. Explainers try to give insight into the inner structure of machine learning black boxes by analyzing the connection between input and output. In this survey, we present the mechanisms and properties of explaining systems for Deep Neural Networks for Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We work out the drawbacks and gaps and summarize further research ideas.
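The "connection between input and output" that explainers analyze can be illustrated with a minimal, hypothetical sketch: an occlusion map in the spirit of the perturbation-based methods such a survey covers. All names here are illustrative, and `predict` stands in for any black-box image classifier returning per-class scores; this is not code from the paper.

```python
# Minimal occlusion-based explainer sketch (assumed, illustrative only):
# hide one image region at a time and record how much the target-class
# score drops. Large drops mark regions the model relied on.
import numpy as np

def occlusion_map(predict, image, target_class, patch=16, stride=8, fill=0.5):
    """predict: callable mapping an HxWxC array to a vector of class scores."""
    h, w = image.shape[:2]
    base = predict(image)[target_class]          # score on the unoccluded input
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y+patch, x:x+patch] = fill  # gray out one region
            heat[i, j] = base - predict(occluded)[target_class]
    return heat  # higher value = region mattered more for the prediction
```

The sketch treats the network purely as an input-output mapping, which is exactly the black-box setting the abstract describes: no access to weights or gradients is needed.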
Main Authors: | Vanessa Buhrmester, David Münch, Michael Arens |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-12-01 |
Series: | Machine Learning and Knowledge Extraction |
Subjects: | interpretability; explainer; explanator; explainable AI; trust; ethics |
Online Access: | https://www.mdpi.com/2504-4990/3/4/48 |
_version_ | 1797502875698462720 |
---|---|
author | Vanessa Buhrmester; David Münch; Michael Arens |
author_facet | Vanessa Buhrmester; David Münch; Michael Arens |
author_sort | Vanessa Buhrmester |
collection | DOAJ |
description | Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. As black box models, owing to their multilayer nonlinear structure, Deep Neural Networks are often criticized as being non-transparent and their predictions not traceable by humans. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. By basing decision-making algorithms on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to a lack of transparency. Hence, several so-called explanators, or explainers, have been developed. Explainers try to give insight into the inner structure of machine learning black boxes by analyzing the connection between input and output. In this survey, we present the mechanisms and properties of explaining systems for Deep Neural Networks for Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We work out the drawbacks and gaps and summarize further research ideas. |
first_indexed | 2024-03-10T03:42:24Z |
format | Article |
id | doaj.art-db49392b33284f79b3f5ee87b0fffa83 |
institution | Directory Open Access Journal |
issn | 2504-4990 |
language | English |
last_indexed | 2024-03-10T03:42:24Z |
publishDate | 2021-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Machine Learning and Knowledge Extraction |
spelling | doaj.art-db49392b33284f79b3f5ee87b0fffa83 | 2023-11-23T09:17:41Z | eng | MDPI AG | Machine Learning and Knowledge Extraction | ISSN 2504-4990 | 2021-12-01 | Vol. 3, Iss. 4, pp. 966–989 | DOI 10.3390/make3040048 | Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey | Vanessa Buhrmester; David Münch; Michael Arens (all: Fraunhofer IOSB, Gutleuthausstraße 1, 76275 Ettlingen, Germany) | Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. As black box models, owing to their multilayer nonlinear structure, Deep Neural Networks are often criticized as being non-transparent and their predictions not traceable by humans. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. By basing decision-making algorithms on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to a lack of transparency. Hence, several so-called explanators, or explainers, have been developed. Explainers try to give insight into the inner structure of machine learning black boxes by analyzing the connection between input and output. In this survey, we present the mechanisms and properties of explaining systems for Deep Neural Networks for Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We work out the drawbacks and gaps and summarize further research ideas. | https://www.mdpi.com/2504-4990/3/4/48 | interpretability; explainer; explanator; explainable AI; trust; ethics |
spellingShingle | Vanessa Buhrmester; David Münch; Michael Arens; Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey; Machine Learning and Knowledge Extraction; interpretability; explainer; explanator; explainable AI; trust; ethics |
title | Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey |
title_full | Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey |
title_fullStr | Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey |
title_full_unstemmed | Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey |
title_short | Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey |
title_sort | analysis of explainers of black box deep neural networks for computer vision a survey |
topic | interpretability; explainer; explanator; explainable AI; trust; ethics |
url | https://www.mdpi.com/2504-4990/3/4/48 |
work_keys_str_mv | AT vanessabuhrmester analysisofexplainersofblackboxdeepneuralnetworksforcomputervisionasurvey AT davidmunch analysisofexplainersofblackboxdeepneuralnetworksforcomputervisionasurvey AT michaelarens analysisofexplainersofblackboxdeepneuralnetworksforcomputervisionasurvey |