Explainable deep convolutional learning for intuitive model development by non–machine learning domain experts
During the design stage, quick and accurate predictions are required for effective design decisions. Model developers prefer simple, interpretable models for their high computational speed. Given that deep learning (DL) offers both high computational speed and accuracy, it would be beneficial if these models were expl...
Main Authors: Sundaravelpandian Singaravel, Johan Suykens, Hans Janssen, Philipp Geyer
Format: Article
Language: English
Published: Cambridge University Press, 2020-01-01
Series: Design Science
Online Access: https://www.cambridge.org/core/product/identifier/S2053470120000220/type/journal_article
Similar Items
- Teachers’ learning and assessing of mathematical processes with emphasis on representations, reasoning and proof
  by: Satsope Maoto, et al.
  Published: (2018-03-01)
- Conclusions from the Commodity Expert Project
  by: Stansfield, James L.
  Published: (2004)
- Intrinsically Motivated Exploration of Learned Goal Spaces
  by: Adrien Laversanne-Finot, et al.
  Published: (2021-01-01)
- Learning State-Specific Action Masks for Reinforcement Learning
  by: Ziyi Wang, et al.
  Published: (2024-01-01)
- Towards Explainability of the Latent Space by Disentangled Representation Learning
  by: Ivars Namatēvs, et al.
  Published: (2023-11-01)