Quantifying Actionability: Evaluating Human-Recipient Models
With the increasing use of machine learning and artificial intelligence (ML/AI) to inform decisions, there is a need to evaluate models beyond traditional metrics, not just from the perspective of the issuer-user (I-user) commissioning them but also for the recipient-user (R-user) impacted by...
Main Authors: | Nwaike Kelechi, Licheng Jiao |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Online Access: | https://ieeexplore.ieee.org/document/10286013/ |
Similar Items
- Toward Building Trust in Machine Learning Models: Quantifying the Explainability by SHAP and References to Human Strategy
  by: Zhaopeng Li, et al.
  Published: (2024-01-01)
- An Ensemble Approach for the Prediction of Diabetes Mellitus Using a Soft Voting Classifier with an Explainable AI
  by: Hafsa Binte Kibria, et al.
  Published: (2022-09-01)
- A Scoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine
  by: Raquel González-Alday, et al.
  Published: (2023-09-01)
- The Cost of Understanding—XAI Algorithms towards Sustainable ML in the View of Computational Cost
  by: Claire Jean-Quartier, et al.
  Published: (2023-05-01)
- Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare
  by: Tim Hulsen
  Published: (2023-08-01)