Quantifying Actionability: Evaluating Human-Recipient Models
With the increasing use of machine learning and artificial intelligence (ML/AI) to inform decisions, there is a need to evaluate models beyond the traditional metrics, and not just from the perspective of the issuer-user (I-user) commissioning them but also for the recipient-user (R-user) impacted by...
Main Authors: | Nwaike Kelechi, Licheng Jiao |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Actionability; recipient-user; artificial intelligence; explainable AI; ML models |
Online Access: | https://ieeexplore.ieee.org/document/10286013/ |
_version_ | 1797635410920210432 |
---|---|
author | Nwaike Kelechi; Licheng Jiao |
author_facet | Nwaike Kelechi; Licheng Jiao |
author_sort | Nwaike Kelechi |
collection | DOAJ |
description | With the increasing use of machine learning and artificial intelligence (ML/AI) to inform decisions, there is a need to evaluate models beyond traditional metrics, and not just from the perspective of the issuer-user (I-user) commissioning them but also for the recipient-user (R-user) impacted by their decisions. We propose evaluating R-user-focused actionability: the degree to which the R-user can influence future model predictions through feasible, responsible actions that align with the I-user’s goals. We present an algorithm to categorize features as actionable, non-actionable, or conditionally non-actionable based on mutability and cost to the R-user. Experiments were carried out using tree models paired with SHAP and permutation feature importance on tabular datasets. Our key findings indicate noteworthy differences in global actionability across datasets, even among datasets aimed at similar goals, and observable but less pronounced differences among model-interpreter combinations applied to the same dataset. Results suggest that actionability depends on the entire pipeline, from problem definition and data selection to model choice and explanation method; that it provides a meaningful signal for model selection in valid use cases; and that it merits further research across diverse real-world datasets. The research extends ideas of local and global model explainability to model actionability from the R-user perspective. Actionability evaluations can empower accountable, trustworthy AI and provide incentives for serving R-users, not just issuers. |
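The abstract's categorization idea can be sketched in code: label each feature actionable, non-actionable, or conditionally non-actionable from its mutability and cost to the R-user, then score a model by how much of its feature importance falls on actionable features. The function names, cost threshold, scoring rule, and example importances below are all illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of the feature-categorization step described in the
# abstract. A feature the R-user cannot change at all (e.g. age) is
# non-actionable; one that is mutable but too costly to change is
# conditionally non-actionable; the rest are actionable.
def categorize(mutable, cost, cost_threshold=1.0):
    """Return 'actionable', 'conditionally_non_actionable', or 'non_actionable'."""
    if not mutable:
        return "non_actionable"
    if cost > cost_threshold:
        return "conditionally_non_actionable"
    return "actionable"

def global_actionability(importances, categories):
    """Share of total feature importance carried by actionable features.

    `importances` could come from SHAP or permutation feature importance,
    as in the paper's experiments; the weighting rule here is an assumption.
    """
    total = sum(importances.values())
    actionable = sum(v for k, v in importances.items()
                     if categories[k] == "actionable")
    return actionable / total if total else 0.0

# Made-up importances and (mutable, cost) metadata for illustration only:
importances = {"income": 0.4, "age": 0.3, "savings": 0.2, "home_region": 0.1}
meta = {"income": (True, 0.8), "age": (False, 0.0),
        "savings": (True, 0.5), "home_region": (True, 2.0)}
categories = {f: categorize(m, c) for f, (m, c) in meta.items()}
score = global_actionability(importances, categories)
print(round(score, 2))  # 0.6 — only income and savings count as actionable
```

Under this sketch, two models with equal accuracy could differ sharply in global actionability if one leans on immutable features, which is the kind of model-selection signal the abstract describes.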
first_indexed | 2024-03-11T12:20:45Z |
format | Article |
id | doaj.art-5e4093eb7bd947abbeb989cf1f7452dd |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-11T12:20:45Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-5e4093eb7bd947abbeb989cf1f7452dd | 2023-11-07T00:02:13Z | eng | IEEE | IEEE Access | 2169-3536 | 2023-01-01 | Vol. 11, pp. 119811–119823 | 10.1109/ACCESS.2023.3324906 | 10286013 | Quantifying Actionability: Evaluating Human-Recipient Models | Nwaike Kelechi (https://orcid.org/0000-0003-1199-2642); Licheng Jiao (https://orcid.org/0000-0003-3354-9617) | Affiliation (both authors): Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, School of Artificial Intelligence, Xidian University, Xi’an, China | With the increasing use of machine learning and artificial intelligence (ML/AI) to inform decisions, there is a need to evaluate models beyond traditional metrics, and not just from the perspective of the issuer-user (I-user) commissioning them but also for the recipient-user (R-user) impacted by their decisions. We propose evaluating R-user-focused actionability: the degree to which the R-user can influence future model predictions through feasible, responsible actions that align with the I-user’s goals. We present an algorithm to categorize features as actionable, non-actionable, or conditionally non-actionable based on mutability and cost to the R-user. Experiments were carried out using tree models paired with SHAP and permutation feature importance on tabular datasets. Our key findings indicate noteworthy differences in global actionability across datasets, even among datasets aimed at similar goals, and observable but less pronounced differences among model-interpreter combinations applied to the same dataset. Results suggest that actionability depends on the entire pipeline, from problem definition and data selection to model choice and explanation method; that it provides a meaningful signal for model selection in valid use cases; and that it merits further research across diverse real-world datasets. The research extends ideas of local and global model explainability to model actionability from the R-user perspective. Actionability evaluations can empower accountable, trustworthy AI and provide incentives for serving R-users, not just issuers. | https://ieeexplore.ieee.org/document/10286013/ | Actionability; recipient-user; artificial intelligence; explainable AI; ML models |
spellingShingle | Nwaike Kelechi; Licheng Jiao; Quantifying Actionability: Evaluating Human-Recipient Models; IEEE Access; Actionability; recipient-user; artificial intelligence; explainable AI; ML models |
title | Quantifying Actionability: Evaluating Human-Recipient Models |
title_full | Quantifying Actionability: Evaluating Human-Recipient Models |
title_fullStr | Quantifying Actionability: Evaluating Human-Recipient Models |
title_full_unstemmed | Quantifying Actionability: Evaluating Human-Recipient Models |
title_short | Quantifying Actionability: Evaluating Human-Recipient Models |
title_sort | quantifying actionability evaluating human recipient models |
topic | Actionability; recipient-user; artificial intelligence; explainable AI; ML models |
url | https://ieeexplore.ieee.org/document/10286013/ |
work_keys_str_mv | AT nwaikekelechi quantifyingactionabilityevaluatinghumanrecipientmodels AT lichengjiao quantifyingactionabilityevaluatinghumanrecipientmodels |