Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research
Psychiatric research is often confronted with complex abstractions and dynamics that are not readily accessible to, or well defined by, our perception and measurements, making data-driven methods an appealing approach. Deep neural networks (DNNs) are capable of automatically learning abstractions in the...
| Main Author: | Yi-han Sheu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2020-10-01 |
| Series: | Frontiers in Psychiatry |
| Subjects: | |
| Online Access: | https://www.frontiersin.org/articles/10.3389/fpsyt.2020.551299/full |
Similar Items
- A detailed study of interpretability of deep neural network based top taggers
  by: Ayush Khot, et al.
  Published: (2023-01-01)
- Understanding the black-box: towards interpretable and reliable deep learning models
  by: Tehreem Qamar, et al.
  Published: (2023-11-01)
- Correcting gradient-based interpretations of deep neural networks for genomics
  by: Antonio Majdandzic, et al.
  Published: (2023-05-01)
- Opening the Black-Box: Extracting Medical Reasoning from Machine Learning Predictions
  by: Marius FERSIGAN, et al.
  Published: (2021-09-01)
- Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
  by: Vanessa Buhrmester, et al.
  Published: (2021-12-01)