Human learning in Atari
Main Authors:
Other Authors:
Format: Article
Published: Association for the Advancement of Artificial Intelligence, 2017
Online Access: http://hdl.handle.net/1721.1/112620 https://orcid.org/0000-0002-0138-163X https://orcid.org/0000-0002-1925-2035
Summary: Atari games are an excellent testbed for studying intelligent behavior, as they offer a range of tasks that differ widely in their visual representation, game dynamics, and goals presented to an agent. The last two years have seen a spate of research into artificial agents that use a single algorithm to learn to play these games. The best of these artificial agents perform at better-than-human levels on most games, but require hundreds of hours of game-play experience to produce such behavior. Humans, on the other hand, can learn to perform well on these tasks in a matter of minutes. In this paper we present data on human learning trajectories for several Atari games, and test several hypotheses about the mechanisms that lead to such rapid learning.