Self-Adaptive Priority Correction for Prioritized Experience Replay
Deep Reinforcement Learning (DRL) is a promising approach toward general artificial intelligence. However, most DRL methods suffer from data inefficiency. To alleviate this problem, DeepMind proposed Prioritized Experience Replay (PER). Although PER improves data utilization, the prioritie...
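As background to the abstract above, the idea behind PER (Schaul et al.) can be illustrated with a toy proportional-priority buffer. This is a minimal sketch, not the paper's implementation; the class name, hyperparameter defaults, and list-based storage are assumptions for illustration (real implementations use a sum-tree for efficiency).

```python
import random

class TinyPERBuffer:
    """Toy proportional prioritized replay buffer (illustrative only)."""

    def __init__(self, alpha=0.6, eps=1e-6):
        self.alpha = alpha       # how strongly |TD error| shapes priority
        self.eps = eps           # keeps zero-error transitions sampleable
        self.data = []           # stored transitions
        self.priorities = []     # priority per transition

    def add(self, transition, td_error):
        # New transitions get priority proportional to |TD error|^alpha.
        self.data.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, k, beta=0.4):
        # Sample indices in proportion to priority.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.data)), weights=probs, k=k)
        # Importance-sampling weights correct the non-uniform sampling bias,
        # normalized by the max weight so they stay in (0, 1].
        n = len(self.data)
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return idxs, [self.data[i] for i in idxs], weights

    def update(self, idxs, td_errors):
        # After a learning step, refresh priorities from the new TD errors.
        for i, e in zip(idxs, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```

Transitions with larger TD errors are replayed more often, which is the data-utilization gain the abstract refers to; the correction methods studied in the article concern how such priorities go stale between updates.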
Main Authors: Hongjie Zhang, Cheng Qu, Jindou Zhang, Jing Li
Format: Article
Language: English
Published: MDPI AG, 2020-10-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/10/19/6925
Similar Items
- Intelligent Ship Collision Avoidance Algorithm Based on DDQN with Prioritized Experience Replay under COLREGs
  by: Pengyu Zhai, et al.
  Published: (2022-04-01)
- Robot Dynamic Path Planning Based on Prioritized Experience Replay and LSTM Network
  by: Hongqi Li, et al.
  Published: (2025-01-01)
- Prioritized experience replay based on dynamics priority
  by: Hu Li, et al.
  Published: (2024-03-01)
- Prioritized Experience Replay for Multi-agent Cooperation
  by: Zirong HUANG, et al.
  Published: (2021-09-01)
- Exploration and Exploitation Balanced Experience Replay
  by: ZHANG Jia-neng, LI Hui, WU Hao-lin, WANG Zhuang
  Published: (2022-05-01)