Enhanced DQN Framework for Selecting Actions and Updating Replay Memory Considering Massive Non-Executable Actions
A Deep-Q-Network (DQN) controls a virtual agent at the level of a human player using only screenshots as inputs. Replay memory selects a limited number of experience replays according to an arbitrary batch size and updates them using the associated Q-function. Hence, relatively fewer experience replays of...
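The sampling behavior the abstract refers to can be illustrated with a minimal sketch of a uniform replay memory (an assumption about the standard DQN setup, not code from the paper itself; the class and variable names are hypothetical):

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-size buffer of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity):
        # Oldest transitions are evicted automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniformly sample a mini-batch of the given size; transitions left
        # outside the batch receive no Q-function update this step, which is
        # the "relatively fewer experience replays" issue the abstract raises.
        return random.sample(self.buffer, batch_size)

# Usage sketch with dummy transitions:
memory = ReplayMemory(capacity=1000)
for t in range(100):
    memory.push((t, 0, 1.0, t + 1, False))
batch = memory.sample(32)
```

Only the sampled 32 transitions would be replayed against the Q-function in this step; the remaining 68 stay untouched until a later draw.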
Main Authors: Bonwoo Gu, Yunsick Sung
Format: Article
Language: English
Published: MDPI AG, 2021-11-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/11/23/11162
Similar Items
- Enhanced Reinforcement Learning Method Combining One-Hot Encoding-Based Vectors for CNN-Based Alternative High-Level Decisions
  by: Bonwoo Gu, et al. Published: (2021-02-01)
- THE FUTURE, THE CRISIS, AND THE FUTURE OF REPLAY STORY
  by: Eleonora Teresa Imbierowicz. Published: (2021-06-01)
- A model of hippocampal replay driven by experience and environmental structure facilitates spatial learning
  by: Nicolas Diekmann, et al. Published: (2023-03-01)
- Enhanced Off-Policy Reinforcement Learning With Focused Experience Replay
  by: Seung-Hyun Kong, et al. Published: (2021-01-01)
- Experience Replay Using Transition Sequences
  by: Thommen George Karimpanal, et al. Published: (2018-06-01)