Cooperating with machines

Bibliographic Details
Main Authors: Crandall, Jacob W., Oudah, Mayada, Tennom, Ishowo-Oloko, Fatimah, Abdallah, Sherief, Bonnefon, Jean-François, Cebrian, Manuel, Shariff, Azim, Goodrich, Michael A., Rahwan, Iyad
Other Authors: Massachusetts Institute of Technology. Media Laboratory
Format: Article
Published: Nature Publishing Group 2018
Online Access: http://hdl.handle.net/1721.1/115259
Description
Summary: Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human-machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human-machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.
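The setting the summary describes, a two-player repeated game in which cooperation is sustained by signals rather than raw computation, can be sketched minimally. The code below is an illustrative toy only, not the authors' algorithm: it pairs a simple reciprocal strategy with "cheap talk" announcements in a repeated Prisoner's Dilemma, and the class names and payoff values are assumptions chosen for the example.

```python
# Illustrative sketch (assumed names/payoffs): a repeated Prisoner's Dilemma
# in which one agent augments a reciprocal strategy with cheap-talk signals.
# This is NOT the paper's algorithm; it only shows the kind of interaction
# (repeated play + signaling) the summary refers to.

PAYOFFS = {  # (my action, opponent action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

class SignalingReciprocator:
    """Cooperates, retaliates after a defection, and announces its intent."""
    def __init__(self):
        self.last_opponent_action = "C"

    def act(self):
        return "C" if self.last_opponent_action == "C" else "D"

    def signal(self):
        # A cheap-talk message stating the planned action.
        return f"I will play {self.act()}"

    def observe(self, opponent_action):
        self.last_opponent_action = opponent_action

class SignalReader:
    """Trusts the announced intention and mirrors it."""
    def act(self, heard_signal):
        return "C" if heard_signal.endswith("C") else "D"

def play(rounds=50):
    a, b = SignalingReciprocator(), SignalReader()
    total_a = total_b = 0
    for _ in range(rounds):
        msg = a.signal()
        act_a, act_b = a.act(), b.act(msg)
        total_a += PAYOFFS[(act_a, act_b)]
        total_b += PAYOFFS[(act_b, act_a)]
        a.observe(act_b)
    return total_a, total_b
```

In this toy, the signal lets the second player condition on the first player's intention, so mutual cooperation locks in from the first round; without a credible signal, a learner would have to discover reciprocity through costly exploration.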