Learning Generative State Space Models for Active Inference

In this paper, we investigate the active inference framework as a means to enable autonomous behavior in artificial agents. Active inference is a theoretical framework underpinning the way organisms act and observe in the real world. In active inference, agents act in order to minimize their so-called free energy, or prediction error. Besides being biologically plausible, active inference has been shown to solve hard exploration problems in various simulated environments. However, these simulations typically require handcrafting a generative model for the agent. Therefore, we propose to use recent advances in deep artificial neural networks to learn generative state space models from scratch, using only observation-action sequences. In this way, we are able to scale active inference to new and challenging problem domains, whilst still building on the theoretical backing of the free energy principle. We validate our approach on the mountain car problem to illustrate that our learnt models can indeed trade off instrumental value and ambiguity. Furthermore, we show that generative models can also be learnt using high-dimensional pixel observations, both in the OpenAI Gym car racing environment and in a real-world robotic navigation task. Finally, we show that active inference-based policies are an order of magnitude more sample efficient than Deep Q-Networks on RL tasks.

Bibliographic Details
Main Authors: Ozan Çatal, Samuel Wauthier, Cedric De Boom, Tim Verbelen, Bart Dhoedt
Format: Article
Language: English
Published: Frontiers Media S.A., 2020-11-01
Series: Frontiers in Computational Neuroscience
ISSN: 1662-5188
DOI: 10.3389/fncom.2020.574372
Collection: Directory of Open Access Journals (DOAJ)
Subjects: active inference, free energy, deep learning, generative modeling, robotics
Online Access: https://www.frontiersin.org/articles/10.3389/fncom.2020.574372/full
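
The abstract describes learning a generative state space model from observation-action sequences and acting so as to minimize (variational) free energy. Purely as an illustration of that idea, the sketch below sets up such a model in PyTorch and trains it on a one-step free energy objective (reconstruction error plus a KL term between posterior and prior). The module structure, dimensions, and diagonal-Gaussian parameterization are assumptions for this sketch and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of a generative state-space model trained by minimizing
# variational free energy, in the spirit of the abstract above. All names,
# sizes, and the diagonal-Gaussian parameterization are illustrative choices.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence


class GaussianHead(nn.Module):
    """Maps features to a diagonal Gaussian distribution (mean, std)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Linear(in_dim, 2 * out_dim)

    def forward(self, x):
        mean, log_std = self.net(x).chunk(2, dim=-1)
        return Normal(mean, log_std.clamp(-5, 2).exp())


class StateSpaceModel(nn.Module):
    def __init__(self, obs_dim, act_dim, state_dim=8, hidden=64):
        super().__init__()
        # Prior / transition model: p(s_t | s_{t-1}, a_{t-1})
        self.prior = nn.Sequential(
            nn.Linear(state_dim + act_dim, hidden), nn.ELU(),
            GaussianHead(hidden, state_dim))
        # Approximate posterior: q(s_t | s_{t-1}, a_{t-1}, o_t)
        self.posterior = nn.Sequential(
            nn.Linear(state_dim + act_dim + obs_dim, hidden), nn.ELU(),
            GaussianHead(hidden, state_dim))
        # Likelihood / observation model: p(o_t | s_t)
        self.likelihood = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ELU(),
            GaussianHead(hidden, obs_dim))

    def free_energy(self, obs, acts, prev_state):
        """One-step variational free energy: -E_q[log p(o|s)] + KL(q || p)."""
        prior = self.prior(torch.cat([prev_state, acts], -1))
        posterior = self.posterior(torch.cat([prev_state, acts, obs], -1))
        state = posterior.rsample()  # reparameterized sample of the latent state
        nll = -self.likelihood(state).log_prob(obs).sum(-1)
        kl = kl_divergence(posterior, prior).sum(-1)
        return (nll + kl).mean(), state


# Toy usage on random observation-action pairs (stand-in for logged sequences).
model = StateSpaceModel(obs_dim=4, act_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
state = torch.zeros(16, 8)
for _ in range(10):
    obs, acts = torch.randn(16, 4), torch.randn(16, 2)
    loss, state = model.free_energy(obs, acts, state.detach())
    opt.zero_grad(); loss.backward(); opt.step()
```

For action selection, the approach described in the abstract would additionally evaluate imagined rollouts under the learned prior and prefer actions whose expected free energy trades off instrumental value against ambiguity; that planning step is not shown in this sketch.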