Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
To ensure accountability and mitigate harm, it is critical that diverse stakeholders can interrogate black-box automated systems and find information that is understandable, relevant, and useful to them. In this paper, we eschew prior expertise- and role-based categorizations of interpretability...
Main Authors: Suresh, Harini; Gomez, Steven R; Nam, Kevin K; Satyanarayan, Arvind
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2022
Online Access: https://hdl.handle.net/1721.1/143861
Similar Items
- Saliency Cards: A Framework to Characterize and Compare Saliency Methods
  Authors: Boggust, Angie, et al.
  Published: (2023)
- A Framework of Potential Sources of Harm Throughout the Machine Learning Life Cycle
  Authors: Suresh, Harini, et al.
  Published: (2022)
- A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle
  Authors: Suresh, Harini, et al.
  Published: (2022)
- Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs
  Authors: Suresh, Harini, et al.
  Published: (2022)
- Need for expertise based randomised controlled trials.
  Authors: Devereaux, P, et al.
  Published: (2005)