Summary: As machines that act autonomously on behalf of others (e.g., robots) become integral to society, it is critical that we understand their impact on human decision-making. Here we show that people readily engage in social categorization that distinguishes humans ("us") from machines ("them"), which leads to reduced cooperation with machines. However, we show that a simple cultural cue, the ethnicity of the machine's virtual face, mitigated this bias for participants from two distinct cultures (Japan and the United States). We further show that situational cues of affiliative intent, namely expressions of emotion, overrode expectations of coalition alliances based on social categories: when machines were from a different culture, participants showed the usual bias when the machine expressed competitive emotion (e.g., joy following exploitation); in contrast, participants cooperated just as much with machines that expressed cooperative emotion (e.g., joy following cooperation) as with humans. These findings reveal a path for increasing cooperation in society through autonomous machines.