Fresher Experience Plays a More Important Role in Prioritized Experience Replay
Prioritized experience replay (PER) is an important technique in deep reinforcement learning (DRL). It improves data-sampling efficiency in various DRL algorithms and yields strong performance. PER uses the temporal-difference error (TD-error) to measure the value of experiences and adjusts the...
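The abstract describes the standard proportional form of PER, in which each transition's priority is derived from the magnitude of its TD-error. The sketch below is a minimal, illustrative Python implementation of that standard scheme, not of the freshness-based modification proposed in the article (whose details are not included in this record); the class name and the parameters alpha, beta, and eps are illustrative choices.

```python
import numpy as np

class ProportionalReplayBuffer:
    """Minimal sketch of proportional prioritized experience replay.

    Priority is p_i = (|TD-error_i| + eps) ** alpha; transitions are sampled
    with probability p_i / sum(p) and reweighted by importance-sampling
    weights to correct the resulting bias.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data = []          # stored transitions
        self.priorities = []    # one priority per stored transition
        self.pos = 0            # next write index (ring buffer)

    def add(self, transition):
        # New transitions receive the current maximum priority so they are
        # sampled at least once before their TD-error is known.
        max_p = max(self.priorities, default=1.0)
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(max_p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        p = np.asarray(self.priorities, dtype=np.float64)
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights compensate for non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idx]
        return idx, batch, weights

    def update_priorities(self, idx, td_errors):
        # After a learning step, refresh priorities from the new TD-errors.
        for i, delta in zip(idx, td_errors):
            self.priorities[i] = (abs(delta) + self.eps) ** self.alpha
```

In a typical training loop, the agent adds each transition with add, draws a batch with sample, scales the loss by the returned weights, and then calls update_priorities with the batch's new TD-errors.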
| Main Authors: | Jue Ma, Dejun Ning, Chengyi Zhang, Shipeng Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2022-12-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/12/23/12489 |
Similar Items
- Prioritized experience replay in path planning via multi-dimensional transition priority fusion
  by: Nuo Cheng, et al.
  Published: (2023-11-01)
- Prioritized Experience Replay for Multi-agent Cooperation
  by: Zirong HUANG, et al.
  Published: (2021-09-01)
- Deep Deterministic Policy Gradient with Episode Experience Replay
  by: ZHANG Jian-hang, LIU Quan
  Published: (2021-10-01)
- UAV Path Planning Based on the Average TD3 Algorithm With Prioritized Experience Replay
  by: Xuqiong Luo, et al.
  Published: (2024-01-01)
- Self-Adaptive Priority Correction for Prioritized Experience Replay
  by: Hongjie Zhang, et al.
  Published: (2020-10-01)