Action understanding as inverse planning
Chris L. Baker *, Rebecca Saxe, Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
Article info
Article history:
Received 23 October 2007
Revised 25 June 2009
Accepted 5 July 2009
Available online xxxx
Keywords:
Action understanding
Goal inference
Theory of mind
Inverse reinforcement learning
Bayesian models
* Corresponding author. Tel.: +1 (617) 324 2895.
E-mail address: clbaker@mit.edu (C.L. Baker).
Abstract
Humans are adept at inferring the mental states underlying other agents’ actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents’ behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent’s behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an “intentional stance” [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a “teleological stance” [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165–193]. In three psychophysical experiments using animated stimuli of agent