Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning
Algorithmic trading allows investors to avoid emotional and irrational trading decisions and helps them make profits using modern computer technology. In recent years, reinforcement learning has yielded promising results for algorithmic trading. Two prominent challenges in algorithmic trading with r...
Main Authors: | Deog-Yeong Park; Ki-Hoon Lee |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE 2021-01-01 |
Series: | IEEE Access |
Subjects: | Algorithmic trading; deep learning; state representation learning; imitation learning; reinforcement learning |
Online Access: | https://ieeexplore.ieee.org/document/9611246/ |
_version_ | 1830275971259826176 |
---|---|
author | Deog-Yeong Park; Ki-Hoon Lee |
author_facet | Deog-Yeong Park; Ki-Hoon Lee |
author_sort | Deog-Yeong Park |
collection | DOAJ |
description | Algorithmic trading allows investors to avoid emotional and irrational trading decisions and helps them make profits using modern computer technology. In recent years, reinforcement learning has yielded promising results for algorithmic trading. Two prominent challenges in algorithmic trading with reinforcement learning are (1) extracting robust features and (2) learning a profitable trading policy. Another challenge is that it was previously often assumed that both long and short positions are always possible in stock trading; however, taking a short position is risky or sometimes impossible in practice. We propose a practical algorithmic trading method, *SIRL-Trader*, which achieves good profit using only long positions. SIRL-Trader uses offline/online state representation learning (SRL) and imitative reinforcement learning. In offline SRL, we apply dimensionality reduction and clustering to extract robust features, whereas in online SRL, we co-train a regression model with a reinforcement learning model to provide accurate state information for decision-making. In imitative reinforcement learning, we incorporate a behavior cloning technique with the twin-delayed deep deterministic policy gradient (TD3) algorithm and apply multistep learning and dynamic delay to TD3. The experimental results show that SIRL-Trader yields higher profits and offers superior generalization ability compared with state-of-the-art methods. |
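The abstract names, among other ingredients, a behavior cloning technique combined with the TD3 actor update. It gives no implementation details, so the following is only a minimal Python/PyTorch sketch of that general idea, not the authors' code: the network sizes, the `bc_weight` coefficient, and the source of the demonstrated ("expert") actions are illustrative assumptions.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 16, 1   # assumed sizes; the paper's state/action spaces may differ

# Simple actor and critic networks standing in for the TD3 components.
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

def actor_update(states, expert_actions, bc_weight=0.5):
    """One actor step: TD3's deterministic policy gradient plus a behavior-cloning penalty."""
    actions = actor(states)
    # RL term: push the policy toward actions the critic values highly.
    q = critic(torch.cat([states, actions], dim=-1))
    rl_loss = -q.mean()
    # Imitation term: keep the policy close to the demonstrated actions.
    bc_loss = nn.functional.mse_loss(actions, expert_actions)
    loss = rl_loss + bc_weight * bc_loss
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.item()

# Random tensors stand in for a batch sampled from a replay buffer.
states = torch.randn(32, state_dim)
expert_actions = torch.randn(32, action_dim).clamp(-1.0, 1.0)
actor_update(states, expert_actions)
```

Where the demonstrated actions come from is not specified in the abstract; the random tensors above merely stand in for replay-buffer samples.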
first_indexed | 2024-12-19T00:36:08Z |
format | Article |
id | doaj.art-fec1d81fddfe4844bbcbdf81d0705b41 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-19T00:36:08Z |
publishDate | 2021-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-fec1d81fddfe4844bbcbdf81d0705b41; indexed 2022-12-21T20:44:48Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2021-01-01; vol. 9, pp. 152310-152321; DOI 10.1109/ACCESS.2021.3127209; article no. 9611246; Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning; Deog-Yeong Park; Ki-Hoon Lee (https://orcid.org/0000-0003-4661-0982); School of Computer and Information Engineering, Kwangwoon University, Nowon-gu, Seoul, Republic of Korea; https://ieeexplore.ieee.org/document/9611246/; Algorithmic trading; deep learning; state representation learning; imitation learning; reinforcement learning |
spellingShingle | Deog-Yeong Park; Ki-Hoon Lee; Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning; IEEE Access; Algorithmic trading; deep learning; state representation learning; imitation learning; reinforcement learning |
title | Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning |
title_full | Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning |
title_fullStr | Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning |
title_full_unstemmed | Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning |
title_short | Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning |
title_sort | practical algorithmic trading using state representation learning and imitative reinforcement learning |
topic | Algorithmic trading; deep learning; state representation learning; imitation learning; reinforcement learning |
url | https://ieeexplore.ieee.org/document/9611246/ |
work_keys_str_mv | AT deogyeongpark practicalalgorithmictradingusingstaterepresentationlearningandimitativereinforcementlearning AT kihoonlee practicalalgorithmictradingusingstaterepresentationlearningandimitativereinforcementlearning |