Towards Interpretable Explanations for Transfer Learning in Sequential Tasks
People increasingly rely on machine learning (ML) to make intelligent decisions. However, ML results are often difficult to interpret, and the algorithms do not support interaction to solicit clarification or explanation. In this paper, we highlight an emerging research area of interpretable explanations...
Main Authors: | |
---|---|
Other Authors: | |
Format: | Article |
Language: | en_US |
Published: | Association for the Advancement of Artificial Intelligence, 2017 |
Online Access: | http://hdl.handle.net/1721.1/106649 https://orcid.org/0000-0001-8239-5963 https://orcid.org/0000-0003-1338-8107 |