Summary: | This work addresses an under-explored aspect of how people use algorithmic decision support systems: How do people perceive and use these systems under social influence? Through a pre-registered randomized human-subject experiment, I study the effect of two forms of social information (direct conversations and summarized peer decisions) on users' reliance on algorithmic advice and their effectiveness in leveraging it across a series of decision-making tasks, and how the availability of local model explanations and performance feedback moderates this effect. I find that, on average, neither form of social information affects trust directly, yet both moderate the extent to which feedback and model explanations influence trust in the algorithm. However, while social information can influence trust in the algorithm, I detect no effect on how effectively people utilize algorithmic advice. By describing this interplay between social information, algorithmic transparency, and user behavior, this work contributes to recent research on collective intelligence and sociotechnical approaches to human-AI interaction.