Towards Interpretable Explanations for Transfer Learning in Sequential Tasks
People increasingly rely on machine learning (ML) to make intelligent decisions. However, the ML results are often difficult to interpret and the algorithms do not support interaction to solicit clarification or explanation. In this paper, we highlight an emerging research area of interpretable explanations for transfer learning in sequential tasks, in which an agent must explain how it learns a new task given prior, common knowledge. The goal is to enhance a user’s ability to trust and use the system output and to enable iterative feedback for improving the system. We review prior work in probabilistic systems, sequential decision-making, interpretable explanations, transfer learning, and interactive machine learning, and identify an intersection that deserves further research focus. We believe that developing adaptive, transparent learning models will build the foundation for better human-machine systems in applications for elder care, education, and health care.
Main Authors: | Ramakrishnan, Ramya; Shah, Julie A |
Other Authors: | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Format: | Article |
Language: | en_US |
Published: | Association for the Advancement of Artificial Intelligence, 2017 |
Online Access: | http://hdl.handle.net/1721.1/106649 https://orcid.org/0000-0001-8239-5963 https://orcid.org/0000-0003-1338-8107 |
collection | MIT |
id | mit-1721.1/106649 |
institution | Massachusetts Institute of Technology |
department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
type | Conference Paper
date issued | 2016-03
conference | AAAI 2016 Spring Symposium, March 21-23, 2016, Palo Alto, CA
citation | Ramakrishnan, Ramya and Julie Shah. "Towards Interpretable Explanations for Transfer Learning in Sequential Tasks." AAAI Spring Symposium, March 21-23, 2016, Palo Alto, CA.
full text | www.aaai.org/ocs/index.php/SSS/SSS16/paper/download/12757/11967
license | Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)