Action understanding as inverse planning

Humans are adept at inferring the mental states underlying other agents’ actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents’ behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent’s behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an “intentional stance” [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a “teleological stance” [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165–193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.
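The inference the abstract describes can be sketched in a few lines of code: assume a softmax-rational planner per candidate goal, score an observed action sequence under each goal, and apply Bayes' rule to get a posterior over goals. Everything concrete here (the 1-D corridor world, the noise parameter `BETA`, the uniform goal prior) is an illustrative assumption, not a detail taken from the paper.

```python
import math

STATES = list(range(7))   # a toy 1-D "maze": positions 0..6
GOALS = [0, 6]            # candidate goal locations (assumed)
ACTIONS = [-1, +1]        # step left / step right
BETA = 2.0                # softmax rationality parameter (assumed)

def value(state, goal):
    """Negative distance to goal: the rational planner prefers higher values."""
    return -abs(state - goal)

def action_prob(state, action, goal):
    """Softmax-rational action choice: P(a | s, g) proportional to exp(BETA * Q)."""
    def q(a):
        nxt = min(max(state + a, 0), len(STATES) - 1)  # clamp to the maze
        return value(nxt, goal)
    exps = {a: math.exp(BETA * q(a)) for a in ACTIONS}
    return exps[action] / sum(exps.values())

def posterior_over_goals(trajectory, prior=None):
    """Invert the planner with Bayes' rule: P(g | actions) ∝ P(actions | g) P(g)."""
    prior = prior or {g: 1.0 / len(GOALS) for g in GOALS}
    post = {}
    for g in GOALS:
        lik = 1.0
        for state, action in trajectory:
            lik *= action_prob(state, action, g)
        post[g] = lik * prior[g]
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

# An agent at position 3 that steps right twice is very likely heading for goal 6.
print(posterior_over_goals([(3, +1), (4, +1)]))
```

The same structure carries over to the paper's 2-D maze stimuli: only the state space, transition model, and value function change, while the Bayesian inversion step stays identical.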

Bibliographic Details
Main Authors: Baker, Christopher Lawrence, Saxe, Rebecca R., Tenenbaum, Joshua B.
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Format: Article
Language: en_US
Published: Elsevier, 2011
Online Access: http://hdl.handle.net/1721.1/60852
ORCID iDs:
https://orcid.org/0000-0003-2377-1791
https://orcid.org/0000-0002-1925-2035
https://orcid.org/0000-0001-7870-4487
Citation: Baker, Chris L., Rebecca Saxe, and Joshua B. Tenenbaum. “Action understanding as inverse planning.” Cognition 113.3 (2009): 329–349.
DOI: http://dx.doi.org/10.1016/j.cognition.2009.07.005
Journal: Cognition
ISSN: 0010-0277
Type: Journal Article (http://purl.org/eprint/type/JournalArticle)
Date Issued: 2009-07
Funding: United States. Air Force Office of Scientific Research (AFOSR MURI Contract FA9550-05-1-0321); James S. McDonnell Foundation (Causal Learning Collaborative Initiative); National Science Foundation (U.S.), Graduate Research Fellowship Program
License: Attribution-Noncommercial-Share Alike 3.0 Unported (http://creativecommons.org/licenses/by-nc-sa/3.0/)