Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
To ensure accountability and mitigate harm, it is critical that diverse stakeholders can interrogate black-box automated systems and find information that is understandable, relevant, and useful to them. In this paper, we eschew prior expertise- and role-based categorizations of interpretability...
| Main Authors: | Suresh, Harini; Gomez, Steven R; Nam, Kevin K; Satyanarayan, Arvind |
|---|---|
| Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
| Format: | Article |
| Language: | English |
| Published: | Association for Computing Machinery (ACM), 2022 |
| Online Access: | https://hdl.handle.net/1721.1/143861 |
Similar Items
- Saliency Cards: A Framework to Characterize and Compare Saliency Methods
  by: Boggust, Angie, et al.
  Published: (2023)
- A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle
  by: Suresh, Harini, et al.
  Published: (2022)
- A Framework of Potential Sources of Harm Throughout the Machine Learning Life Cycle
  by: Suresh, Harini, et al.
  Published: (2022)
- Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
  by: Suresh, Harini, et al.
  Published: (2022)
- Context and Participation in Machine Learning
  by: Suresh, Harini
  Published: (2023)