Constructing Explainable Classifiers from the Start—Enabling Human-in-the-Loop Machine Learning

Interactive machine learning (IML) enables the incorporation of human expertise because the human participates in the construction of the learned model. Moreover, with human-in-the-loop machine learning (HITL-ML), human experts drive the learning, and they can steer the learning objective not only towards accuracy but also towards characterisation and discrimination rules, where separating one class from the others is the primary objective. This interaction also enables humans to explore and gain insights into the dataset as well as to validate the learned models. Validation requires transparency and interpretable classifiers. The relevance of understandable classification has recently been emphasised for many applications under the banner of explainable artificial intelligence (XAI). We use parallel coordinates to deploy an IML system that enables not only the visualisation of decision tree classifiers but also the generation of interpretable splits beyond parallel-axis splits. Moreover, we show that characterisation and discrimination rules are also well communicated using parallel coordinates. In particular, we report results from the largest usability study of an IML system, confirming the merits of our approach.

Bibliographic Details
Main Authors: Vladimir Estivill-Castro, Eugene Gilmore, René Hexel
Format: Article
Language:English
Published: MDPI AG 2022-09-01
Series:Information
Subjects: interactive machine learning; decision tree classifiers; transparent-by-design; parallel coordinates
Online Access:https://www.mdpi.com/2078-2489/13/10/464
author Vladimir Estivill-Castro
Eugene Gilmore
René Hexel
collection DOAJ
description Interactive machine learning (IML) enables the incorporation of human expertise because the human participates in the construction of the learned model. Moreover, with human-in-the-loop machine learning (HITL-ML), human experts drive the learning, and they can steer the learning objective not only towards accuracy but also towards characterisation and discrimination rules, where separating one class from the others is the primary objective. This interaction also enables humans to explore and gain insights into the dataset as well as to validate the learned models. Validation requires transparency and interpretable classifiers. The relevance of understandable classification has recently been emphasised for many applications under the banner of explainable artificial intelligence (XAI). We use parallel coordinates to deploy an IML system that enables not only the visualisation of decision tree classifiers but also the generation of interpretable splits beyond parallel-axis splits. Moreover, we show that characterisation and discrimination rules are also well communicated using parallel coordinates. In particular, we report results from the largest usability study of an IML system, confirming the merits of our approach.
format Article
id doaj.art-21fe271b75c24550a55849cb5a54864d
institution Directory of Open Access Journals
issn 2078-2489
language English
publishDate 2022-09-01
publisher MDPI AG
series Information
spelling Information, Vol. 13, Iss. 10, Article 464 (2022-09-01), MDPI AG. DOI: 10.3390/info13100464. Constructing Explainable Classifiers from the Start—Enabling Human-in-the-Loop Machine Learning. Authors and affiliations: Vladimir Estivill-Castro, Department of Information and Communications Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain; Eugene Gilmore, School of Information and Communication Technology, Griffith University, Brisbane 4111, Australia; René Hexel, School of Information and Communication Technology, Griffith University, Brisbane 4111, Australia. Online access: https://www.mdpi.com/2078-2489/13/10/464
title Constructing Explainable Classifiers from the Start—Enabling Human-in-the-Loop Machine Learning
topic interactive machine learning
decision tree classifiers
transparent-by-design
parallel coordinates
url https://www.mdpi.com/2078-2489/13/10/464