An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward

We derive a new expectation maximization algorithm for policy optimization in linear Gaussian Markov decision processes, where the reward function is parameterised as a flexible mixture of Gaussians. This approach exploits both analytical tractability and numerical optimization. Consequently, on the one hand, it is more flexible and general than closed-form solutions such as the widely used linear quadratic Gaussian (LQG) controllers. On the other hand, it is more accurate and faster than optimization methods that rely on approximation and simulation. Partial analytical solutions, though costly, eliminate the need for simulation and hence avoid approximation error. Our experiments show that, for the same computational cost, policy optimization methods that exploit analytical tractability achieve higher value than those that rely on simulation.
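The core idea in this line of work, treating reward as the likelihood in an EM problem, can be illustrated with a toy Monte-Carlo sketch. This is not the paper's algorithm (the paper derives analytical, simulation-free updates for mixture-of-Gaussians rewards); it is a myopic reward-weighted regression on a hypothetical 1-D linear Gaussian system, with all constants (`a`, `b`, `sigma`, `explore_std`) chosen purely for illustration:

```python
import numpy as np

# Toy Monte-Carlo illustration of the reward-as-likelihood idea behind
# EM policy search (NOT the paper's analytical algorithm): a linear
# policy u = k*x + exploration noise is improved by reward-weighted
# least squares in a 1-D linear Gaussian system with a single-Gaussian
# reward r(x') = exp(-x'^2 / 2).  All constants below are assumptions.
rng = np.random.default_rng(0)
a, b, sigma = 1.0, 1.0, 0.1   # dynamics: x' = a*x + b*u + N(0, sigma^2)
explore_std = 1.0             # stochastic policy: u = k*x + N(0, explore_std^2)
horizon, n_traj = 10, 500

def rollout(k):
    """Simulate trajectories; return states, actions, and next-state rewards."""
    xs = rng.normal(1.0, 0.1, size=n_traj)          # initial states
    X, U, R = [], [], []
    for _ in range(horizon):
        u = k * xs + rng.normal(0.0, explore_std, size=n_traj)
        X.append(xs)
        U.append(u)
        xs = a * xs + b * u + rng.normal(0.0, sigma, size=n_traj)
        R.append(np.exp(-xs ** 2 / 2.0))            # Gaussian reward around 0
    return np.concatenate(X), np.concatenate(U), np.concatenate(R)

k = 0.0
for _ in range(20):
    X, U, R = rollout(k)
    # E-step: use the rewards as responsibilities; M-step: weighted least
    # squares of u on x (a myopic, one-step simplification).
    k = np.sum(R * X * U) / np.sum(R * X * X)

print(round(k, 1))  # learned gain near the optimum -(a/b) = -1
```

In this LQG-like setting the reward-weighted update drives the gain toward -(a/b); the paper's contribution is precisely to replace this kind of simulation-based estimation with partial analytical solutions that avoid approximation error.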


Bibliographic details
Main Authors: Hoffman, M, de Freitas, N, Doucet, A, Peters, J
Format: Journal article
Published: 2009
ID: oxford-uuid:e103d830-03c7-47e6-bcd7-141be2fe2396
Institution: University of Oxford