Truncated emphatic temporal difference methods for prediction and control

Emphatic Temporal Difference (TD) methods are a class of off-policy Reinforcement Learning (RL) methods involving the use of followon traces. Despite the theoretical success of emphatic TD methods in addressing the notorious deadly triad of off-policy RL, there are still two open problems. First, fo...
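For context, below is a minimal sketch of the standard emphatic TD(0) prediction update that this line of work builds on, using the followon trace F_t = gamma * rho_{t-1} * F_{t-1} + i (Sutton, Mahmood & White, 2016). It is an illustration only: it does not implement the truncated variant studied in the paper, and the env, target_pi, and behavior_mu interfaces are hypothetical placeholders.

import numpy as np

def emphatic_td0(env, x0, target_pi, behavior_mu, alpha=0.01, gamma=0.99,
                 interest=1.0, num_steps=10_000):
    """Standard ETD(0) off-policy prediction with linear features (continuing task).

    Hypothetical interfaces assumed for illustration:
      env.step(a)            -> (next_feature_vector, reward)
      target_pi.prob(x, a)   -> probability of action a under the target policy
      behavior_mu.prob(x, a) -> probability of action a under the behavior policy
      behavior_mu.sample(x)  -> action drawn from the behavior policy
    """
    x = np.asarray(x0, dtype=float)    # feature vector of the current state
    w = np.zeros_like(x)               # linear value-function weights
    F = 0.0                            # followon trace
    for _ in range(num_steps):
        a = behavior_mu.sample(x)                              # act with behavior policy
        x_next, r = env.step(a)
        rho = target_pi.prob(x, a) / behavior_mu.prob(x, a)    # importance sampling ratio
        F = gamma * F + interest                               # F_t = gamma*rho_{t-1}*F_{t-1} + i
        delta = r + gamma * (w @ x_next) - (w @ x)             # TD error under current weights
        w += alpha * F * rho * delta * x                       # emphatic update (ETD(0): M_t = F_t)
        F *= rho                                               # fold rho_t into the next trace
        x = x_next
    return w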


Bibliographic Details
Main Authors: Zhang, S, Whiteson, S
Format: Journal article
Language: English
Published: Journal of Machine Learning Research, 2022