Why Providing Humans with Interpretable Algorithms May, Counterintuitively, Lead to Lower Decision-making Performance

How is algorithmic model interpretability related to human acceptance of algorithmic recommendations and to performance on decision-making tasks? We explored these questions in a multi-method field study of a large multinational fashion organization. We first conducted a quantitative field experiment comparing the use of two models, an interpretable and an uninterpretable algorithmic model, both designed to assist employees in deciding how many products to send to each of the organization's stores. Contrary to what the literature on interpretable algorithms would lead us to expect, under conditions of high perceived uncertainty, decision makers' use of the uninterpretable algorithmic model was associated with higher acceptance of algorithmic recommendations and higher task performance than their use of an interpretable algorithmic model with a similar level of performance. We then investigated this puzzling result through 31 interviews with 14 employees: 2 algorithm developers, 2 managers, and 10 decision makers. We advance two concepts that suggest a refinement of theory on interpretable algorithms. The first is overconfident troubleshooting: a decision maker rejects a recommendation from an interpretable algorithm because they believe they understand the inner workings of complex processes better than they actually do. The second is social proofing the algorithm: including respected peers in the algorithm development and testing process may make it more likely that decision makers accept recommendations from an uninterpretable algorithm in situations characterized by high perceived uncertainty, because decision makers may seek to reduce their uncertainty by incorporating the opinions of people who share their knowledge base and experience.

Bibliographic Details
Main Authors: DeStefano, Timothy; Kellogg, Katherine C.; Menietti, Michael; Vendraminelli, Luca
Format: Working Paper
Series: MIT Sloan School of Management Working Paper; 6797-22
Language: en_US
Published: 2022
Institution: Massachusetts Institute of Technology
Subjects: Interpretable AI; Artificial Intelligence; Machine Learning; Algorithm Aversion; AI Adoption; Firm Productivity; AI and Strategy; Human-in-the-loop Decision Making
Rights: Attribution-NonCommercial-NoDerivs 3.0 United States (http://creativecommons.org/licenses/by-nc-nd/3.0/us/)
Online Access: https://hdl.handle.net/1721.1/145813