Average-reward off-policy policy evaluation with function approximation

We consider off-policy policy evaluation with function approximation (FA) in average-reward MDPs, where the goal is to estimate both the reward rate and the differential value function. For this problem, bootstrapping is necessary and, along with off-policy learning and FA, results in the deadly triad (Sutton & Barto, 2018). To address the deadly triad, we propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting. In terms of estimating the differential value function, the algorithms are the first convergent off-policy linear function approximation algorithms. In terms of estimating the reward rate, the algorithms are the first convergent off-policy linear function approximation algorithms that do not require estimating the density ratio. We demonstrate empirically the advantage of the proposed algorithms, as well as their nonlinear variants, over a competitive density-ratio-based approach, in a simple domain as well as challenging robot simulation tasks.
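
To make the quantities in the abstract concrete, the following is a minimal Python sketch of one Gradient-TD-style update for average-reward off-policy evaluation with linear features. It is an illustration only, assuming a GTD2/TDC-like correction with the discounted bootstrap target replaced by a differential one; it is not the pair of algorithms proposed in the paper, and the function name, step sizes, and reward-rate update rule are assumptions made here for the example.

import numpy as np

# Illustrative sketch (an assumption, not the paper's exact algorithms):
# one Gradient-TD-style update for average-reward off-policy evaluation
# with linear features, in the spirit of GTD2/TDC but with the discounted
# bootstrap target replaced by the differential one.

def differential_gtd_step(w, v, r_bar, x, x_next, reward, rho,
                          alpha_w=0.01, alpha_v=0.01, alpha_r=0.01):
    """Apply one transition (x, reward, x_next) observed under the behavior policy.

    w      -- weights of the differential value estimate v_hat(s) = w @ x(s)
    v      -- secondary weights used for gradient correction (as in GTD2/TDC)
    r_bar  -- current estimate of the reward rate
    rho    -- importance-sampling ratio pi(a|s) / b(a|s)
    """
    # Differential TD error: the reward is centered by the reward-rate
    # estimate, and there is no discount factor in the average-reward setting.
    delta = reward - r_bar + w @ x_next - w @ x

    # Secondary weights regress the TD error onto the features.
    v = v + alpha_v * rho * (delta - v @ x) * x

    # TDC-style corrected update for the primary weights.
    w = w + alpha_w * rho * (delta * x - (v @ x) * x_next)

    # Move the reward-rate estimate toward the importance-weighted TD error.
    r_bar = r_bar + alpha_r * rho * delta

    return w, v, r_bar


# Toy usage with random features, just to show the calling convention.
d = 8
rng = np.random.default_rng(0)
w, v, r_bar = np.zeros(d), np.zeros(d), 0.0
x, x_next = rng.random(d), rng.random(d)
w, v, r_bar = differential_gtd_step(w, v, r_bar, x, x_next, reward=1.0, rho=0.7)

The secondary weight vector v is what distinguishes gradient-TD methods from semi-gradient TD in this sketch: it estimates the expected TD error as a function of state, and the resulting corrected update is the mechanism that allows convergence under off-policy sampling with linear function approximation.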

Bibliographic Details
Main Authors: Zhang, S; Wan, Y; Sutton, RS; Whiteson, S
Format: Conference item
Language: English
Published: PMLR 2021