Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning
There are several automated stock trading programs using reinforcement learning, one of which is an ensemble strategy. The main idea of the ensemble strategy is to train DRL agents and make an ensemble with three different actor–critic algorithms: Advantage Actor–Critic (A2C), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO).
Main Authors: | Minseok Kong, Jungmin So |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-01-01 |
Series: | Applied Sciences |
Subjects: | empirical analysis; automated stock trading; deep reinforcement learning; policy gradient method; actor–critic algorithms; ensemble strategy |
Online Access: | https://www.mdpi.com/2076-3417/13/1/633 |
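The ensemble strategy described in the abstract trains several actor–critic agents and periodically deploys the one that performed best on a recent validation window; a common validation metric for this kind of selection is the Sharpe ratio. A minimal sketch of that selection step, assuming Sharpe-based selection and using purely illustrative return series (all numbers below are hypothetical, not results from the paper):

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Simple (non-annualized) Sharpe ratio of a return series."""
    excess = [r - risk_free for r in returns]
    mean = statistics.mean(excess)
    std = statistics.stdev(excess)
    # Guard against a zero-variance series.
    return mean / std if std > 0 else 0.0

def select_agent(validation_returns):
    """Pick the candidate agent whose validation returns have the
    highest Sharpe ratio -- the agent the ensemble would trade with
    in the next period."""
    return max(validation_returns,
               key=lambda name: sharpe_ratio(validation_returns[name]))

# Hypothetical per-period validation returns for each candidate algorithm.
candidates = {
    "A2C":  [0.01, -0.02, 0.03, 0.00, 0.02],
    "DDPG": [0.02, 0.01, -0.01, 0.03, 0.01],
    "PPO":  [0.00, 0.01, 0.00, 0.01, 0.00],
}
print(select_agent(candidates))  # → DDPG
```

In the paper's Remake Ensemble the candidate pool would also include ACKTR, SAC, TD3, and TRPO; the selection mechanism itself is unchanged, only the dictionary of candidates grows.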
_version_ | 1797626191531737088 |
author | Minseok Kong; Jungmin So |
author_facet | Minseok Kong; Jungmin So |
author_sort | Minseok Kong |
collection | DOAJ |
description | There are several automated stock trading programs using reinforcement learning, one of which is an ensemble strategy. The main idea of the ensemble strategy is to train DRL agents and make an ensemble with three different actor–critic algorithms: Advantage Actor–Critic (A2C), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). This idea is the main concept used in this paper, but we refined automated stock trading in two areas. First, we built another DRL-based ensemble and employed it as a new trading agent. We named it Remake Ensemble; it combines not only A2C, DDPG, and PPO but also Actor–Critic using Kronecker-Factored Trust Region (ACKTR), Soft Actor–Critic (SAC), Twin Delayed DDPG (TD3), and Trust Region Policy Optimization (TRPO). Second, we expanded the application domain of automated stock trading: while the existing method handles only the 30 Dow Jones stocks, ours handles KOSPI, JPX, and Dow Jones stocks. We conducted experiments with our modified automated stock trading system to validate its robustness in terms of cumulative return. Finally, we suggest some methods for gaining relatively stable profits based on the experiments. |
first_indexed | 2024-03-11T10:06:59Z |
format | Article |
id | doaj.art-93127ba0ab3c465880183251ddce7862 |
institution | Directory Open Access Journal |
issn | 2076-3417 |
language | English |
last_indexed | 2024-03-11T10:06:59Z |
publishDate | 2023-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj.art-93127ba0ab3c465880183251ddce7862 (indexed 2023-11-16T14:59:40Z); MDPI AG; Applied Sciences; ISSN 2076-3417; published 2023-01-01; vol. 13, no. 1, art. 633; DOI 10.3390/app13010633; "Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning"; Minseok Kong, Jungmin So (Department of Computer Science and Engineering, Sogang University, Seoul 04107, Republic of Korea); https://www.mdpi.com/2076-3417/13/1/633; keywords: empirical analysis; automated stock trading; deep reinforcement learning; policy gradient method; actor–critic algorithms; ensemble strategy |
spellingShingle | Minseok Kong; Jungmin So; Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning; Applied Sciences; empirical analysis; automated stock trading; deep reinforcement learning; policy gradient method; actor–critic algorithms; ensemble strategy |
title | Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning |
title_full | Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning |
title_fullStr | Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning |
title_full_unstemmed | Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning |
title_short | Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning |
title_sort | empirical analysis of automated stock trading using deep reinforcement learning |
topic | empirical analysis; automated stock trading; deep reinforcement learning; policy gradient method; actor–critic algorithms; ensemble strategy |
url | https://www.mdpi.com/2076-3417/13/1/633 |
work_keys_str_mv | AT minseokkong empiricalanalysisofautomatedstocktradingusingdeepreinforcementlearning AT jungminso empiricalanalysisofautomatedstocktradingusingdeepreinforcementlearning |