Breaking the deadly triad with a target network

Detailed description

The deadly triad refers to the instability of a reinforcement learning algorithm when it employs off-policy learning, function approximation, and bootstrapping simultaneously. In this paper, we investigate the target network as a tool for breaking the deadly triad, providing theoretical support for the conventional wisdom that a target network stabilizes training. We first propose and analyze a novel target network update rule which augments the commonly used Polyak-averaging style update with two projections. We then apply the target network and ridge regularization in several divergent algorithms and show their convergence to regularized TD fixed points. Those algorithms are off-policy with linear function approximation and bootstrapping, spanning both policy evaluation and control, as well as both discounted and average-reward settings. In particular, we provide the first convergent linear Q-learning algorithms under nonrestrictive and changing behavior policies without bi-level optimization.
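The description mentions three ingredients: bootstrapping from a target network, a Polyak-averaging style target update augmented with projections, and ridge regularization of the online weights. The sketch below is only a minimal illustration of how those ingredients fit together in off-policy linear Q-learning; the projection set (a simple L2 ball here), the feature map, and all hyperparameters (alpha, tau, eta, radius) are assumptions made for the sketch, not the construction or conditions analyzed in the paper.

```python
import numpy as np

def project_l2_ball(w, radius):
    """Project a weight vector onto an L2 ball of the given radius (assumed projection set)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def q_learning_with_target_network(
    features,        # phi(s, a): maps a (state, action) pair to a feature vector
    transitions,     # iterable of (s, a, r, s_next) tuples from a behavior policy
    actions,         # finite action set
    dim,             # feature dimension
    alpha=0.01,      # step size for the online weights (assumed)
    tau=0.05,        # Polyak-averaging rate for the target weights (assumed)
    eta=0.1,         # ridge-regularization coefficient (assumed)
    gamma=0.99,      # discount factor
    radius=10.0,     # radius of the projection ball (assumed)
):
    w = np.zeros(dim)          # online weights
    w_target = np.zeros(dim)   # target-network weights

    for s, a, r, s_next in transitions:
        phi = features(s, a)
        # Bootstrap from the target weights rather than the online weights.
        q_next = max(w_target @ features(s_next, b) for b in actions)
        td_error = r + gamma * q_next - w @ phi
        # Ridge regularization shrinks the online weights toward zero.
        w += alpha * (td_error * phi - eta * w)
        w = project_l2_ball(w, radius)
        # Polyak-averaging style target update, followed by a projection.
        w_target = project_l2_ball((1 - tau) * w_target + tau * w, radius)

    return w
```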

Bibliographic details
Main authors: Zhang, S; Yao, H; Whiteson, S
Format: Conference item
Language: English
Published: PMLR, 2021
Institution: University of Oxford
Collection: OXFORD
Record identifier: oxford-uuid:6c54909f-a7db-43be-b191-cd97bb8a1fc8