Policy Improvement for POMDPs Using Normalized Importance Sampling
We present a new method for estimating the expected return of a POMDP from experience. The estimator does not assume any knowledge of the POMDP and allows the experience to be gathered with an arbitrary set of policies. The return is estimated for any new policy of the POMDP. We motivate the estimator...
| Main Author: | |
|---|---|
| Language: | en_US |
| Published: | 2004 |
| Online Access: | http://hdl.handle.net/1721.1/7218 |
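The abstract describes an off-policy return estimator based on normalized (weighted) importance sampling: returns observed under the data-gathering policies are reweighted by likelihood ratios to estimate the expected return of a new policy. As a rough illustration of that general idea, and not the paper's exact estimator, the Python sketch below weights each trajectory's return by the ratio of action probabilities under the target and behavior policies, then normalizes by the sum of the weights. The trajectory format and function names here are assumptions made for illustration.

```python
import numpy as np

def normalized_is_return(trajectories, target_policy):
    """Weighted (normalized) importance-sampling estimate of the expected
    return of `target_policy` from off-policy trajectories.

    Assumed (hypothetical) data format -- not the paper's API:
      trajectory['steps']  : list of (observation, action, behavior_prob)
                             tuples, where behavior_prob is the probability
                             the data-gathering policy gave the chosen action
      trajectory['return'] : total return observed on that trajectory

    `target_policy(observation, action)` returns the probability the new
    policy assigns to `action` given `observation`.
    """
    weights, returns = [], []
    for traj in trajectories:
        w = 1.0
        for obs, act, behavior_prob in traj["steps"]:
            # Likelihood ratio of the action sequence under the target
            # policy versus the behavior policy that generated the data.
            w *= target_policy(obs, act) / behavior_prob
        weights.append(w)
        returns.append(traj["return"])
    weights = np.asarray(weights)
    returns = np.asarray(returns)
    # Normalize by the weight sum rather than the trajectory count.
    return np.dot(weights, returns) / weights.sum()

# Example: the behavior policy chose uniformly between two actions (prob 0.5).
trajs = [
    {"steps": [("o1", 0, 0.5), ("o2", 1, 0.5)], "return": 4.0},
    {"steps": [("o1", 1, 0.5), ("o2", 1, 0.5)], "return": 1.0},
]
greedy = lambda obs, act: 0.9 if act == 1 else 0.1
print(normalized_is_return(trajs, greedy))
```

Dividing by the weight sum rather than the trajectory count is what makes the estimator "normalized": it introduces a small bias but typically reduces variance substantially when the behavior and target policies differ.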