Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence
For many AI systems, it is hard to interpret how they make decisions. Here, the authors show that non-experts value interpretability in AI, especially for decisions involving high stakes and scarce resources, but they sacrifice AI interpretability when it trades off against AI accuracy.
Main Authors: | Anne-Marie Nussberger, Lan Luo, L. Elisa Celis, M. J. Crockett |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2022-10-01 |
Series: | Nature Communications |
Online Access: | https://doi.org/10.1038/s41467-022-33417-3 |
_version_ | 1797996133074599936 |
---|---|
author | Anne-Marie Nussberger Lan Luo L. Elisa Celis M. J. Crockett |
author_sort | Anne-Marie Nussberger |
collection | DOAJ |
description | For many AI systems, it is hard to interpret how they make decisions. Here, the authors show that non-experts value interpretability in AI, especially for decisions involving high stakes and scarce resources, but they sacrifice AI interpretability when it trades off against AI accuracy. |
first_indexed | 2024-04-11T10:11:34Z |
format | Article |
id | doaj.art-3704c682023541bb9bdb7b734852d06b |
institution | Directory Open Access Journal |
issn | 2041-1723 |
language | English |
last_indexed | 2024-04-11T10:11:34Z |
publishDate | 2022-10-01 |
publisher | Nature Portfolio |
record_format | Article |
series | Nature Communications |
spelling | doaj.art-3704c682023541bb9bdb7b734852d06b 2022-12-22T04:30:04Z eng Nature Portfolio, Nature Communications, ISSN 2041-1723, 2022-10-01, Vol. 13, Iss. 1, pp. 1–13, 10.1038/s41467-022-33417-3. Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence. Anne-Marie Nussberger (Center for Humans and Machines, Max Planck Institute for Human Development); Lan Luo (Department of Marketing, Columbia Business School); L. Elisa Celis (Department of Statistics and Data Science, Yale University); M. J. Crockett (Department of Psychology and University Center for Human Values, Princeton University). For many AI systems, it is hard to interpret how they make decisions. Here, the authors show that non-experts value interpretability in AI, especially for decisions involving high stakes and scarce resources, but they sacrifice AI interpretability when it trades off against AI accuracy. https://doi.org/10.1038/s41467-022-33417-3 |
spellingShingle | Anne-Marie Nussberger; Lan Luo; L. Elisa Celis; M. J. Crockett; Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence; Nature Communications |
title | Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence |
url | https://doi.org/10.1038/s41467-022-33417-3 |