Solving uncertain MDPs with objectives that are separable over instantiations of model uncertainty
Markov Decision Problems (MDPs) offer an effective mechanism for planning under uncertainty. However, due to unavoidable uncertainty over models, it is difficult to obtain an exact specification of an MDP. We are interested in solving MDPs where transition and reward functions are not exactly specified…
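For readers unfamiliar with the setting, the sketch below shows standard value iteration on a small, fully specified MDP, i.e. the baseline problem the abstract starts from. It is not the paper's method for uncertain MDPs, and the two-state example (states, actions, transitions, rewards) is entirely hypothetical.

```python
# Standard value iteration for a small, fully specified MDP.
# NOTE: illustrative baseline only, not the paper's approach to
# uncertain MDPs; the example MDP below is hypothetical.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """P[s][a] -> list of (next_state, prob); R[s][a] -> scalar reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best one-step reward plus discounted future value.
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Hypothetical two-state MDP: "stay" keeps the state, "move" flips it.
states = ["s0", "s1"]
actions = ["stay", "move"]
P = {
    "s0": {"stay": [("s0", 1.0)], "move": [("s1", 1.0)]},
    "s1": {"stay": [("s1", 1.0)], "move": [("s0", 1.0)]},
}
R = {
    "s0": {"stay": 0.0, "move": 1.0},
    "s1": {"stay": 1.0, "move": 0.0},
}
V = value_iteration(states, actions, P, R)
```

Under model uncertainty, the transition probabilities in `P` and rewards in `R` are themselves not known exactly, which is the gap the paper addresses.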
Format: Article
Language: en_US
Published: AAAI Press, 2018
Online Access: http://hdl.handle.net/1721.1/116234 https://orcid.org/0000-0002-8585-6566