On solving a Stochastic Shortest-Path Markov Decision Process as probabilistic inference

Full description

Previous work on planning as active inference addresses finite-horizon problems and solutions valid for online planning. We propose solving the general Stochastic Shortest-Path Markov Decision Process (SSP MDP) as probabilistic inference. Furthermore, we discuss online and offline methods for planning under uncertainty. In an SSP MDP, the horizon is indefinite and unknown a priori. SSP MDPs generalize finite- and infinite-horizon MDPs and are widely used in the artificial intelligence community. Additionally, we highlight some of the differences between solving an MDP using the dynamic programming approaches widely used in the artificial intelligence community and the approaches used in the active inference community.
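
To make the contrast concrete, the sketch below illustrates the dynamic-programming baseline the abstract refers to: value iteration on a toy SSP MDP, where costs are accumulated until an absorbing goal state is reached and the horizon is therefore indefinite. The states, actions, transition probabilities, and costs are illustrative assumptions, not taken from the paper, and this is not the authors' inference-based method.

```python
# A minimal sketch of value iteration on a toy SSP MDP (assumed example data).

# States 0..3 on a line; state 3 is the absorbing, zero-cost goal.
STATES = [0, 1, 2, 3]
GOAL = 3
ACTIONS = ["cautious", "risky"]

def transitions(s, a):
    """Return (next_state, probability) pairs for taking action a in state s."""
    if s == GOAL:
        return [(s, 1.0)]                                  # goal is absorbing
    if a == "cautious":
        return [(min(s + 1, GOAL), 0.9), (s, 0.1)]         # small, reliable step
    return [(min(s + 2, GOAL), 0.6), (max(s - 1, 0), 0.4)]  # big step, may slip back

def cost(s, a):
    """Strictly positive cost outside the goal (standard SSP convention)."""
    return 0.0 if s == GOAL else (1.0 if a == "cautious" else 1.5)

def value_iteration(eps=1e-6):
    """Iterate Bellman backups until the expected cost-to-go converges."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if s == GOAL:
                continue
            q_values = [cost(s, a) + sum(p * V[s2] for s2, p in transitions(s, a))
                        for a in ACTIONS]
            new_v = min(q_values)                          # greedy Bellman backup
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < eps:
            return V

# Expected total cost to reach the goal from each state under the optimal policy.
print(value_iteration())
```

Because every non-goal cost is positive and the goal is reachable from every state, the backups converge without fixing a horizon in advance; this indefinite-horizon structure is what the paper recasts as probabilistic inference.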

Bibliographic Details
Main Authors: Baioumy, M, Lacerda, B, Duckworth, P, Hawes, N
Format: Conference item
Language: English
Published: Springer, 2022