Deep Transformer-Based Asset Price and Direction Prediction

Bibliographic Details
Main Authors: Abdul Haluk Batur Gezici, Emre Sefer
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10414094/
Description
Summary: The field of algorithmic trading, driven by deep learning methodologies, has garnered substantial attention in recent years. Transformers, convolutional neural networks, and patch embedding-based techniques have emerged as popular choices in the computer vision community. Here, inspired by these cutting-edge computer vision methodologies and by existing work showing that time-series datasets can be converted into image-like form, we apply more advanced transformer-based and patch-based approaches to predicting asset prices and directional price movements. The employed transformer models include the Vision Transformer (ViT), Data-Efficient Image Transformers (DeiT), and Swin; ConvMixer serves as a patch embedding-based convolutional neural network architecture without a transformer. These transformer-based and patch-based methodologies predict asset prices and directional movements from historical price data by leveraging the inherent image-like properties of the historical time series. Before the attention-based architectures are applied, the historical price time series is transformed into two-dimensional images: several common technical financial indicators are computed, each contributing values over a fixed number of consecutive days, yielding a diverse set of two-dimensional images that reflect different dimensions of the dataset. The resulting images, which capture market valleys and peaks, are then annotated with Hold, Buy, or Sell labels. According to the experiments, the trained attention-based models consistently outperform the baseline convolutional architectures, particularly when applied to a subset of frequently traded Exchange-Traded Funds (ETFs). This advantage of attention-based architectures, especially ViT, holds in terms of both accuracy and other financial evaluation metrics, particularly over extended testing and holding periods. These findings underscore the potential of transformer-based approaches to enhance predictive capabilities in asset price and directional forecasting. Our code and processed datasets are available at https://github.com/seferlab/price_transformer.
ISSN:2169-3536
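
To make the summary's time-series-to-image conversion and valley/peak labeling concrete, the sketch below builds (indicators x days) 2D arrays from a price series and labels each window Buy, Sell, or Hold. This is a minimal illustration, not the authors' pipeline: the indicator set, window length, per-row scaling, and local-extremum labeling rule are assumptions chosen for clarity; the actual implementation is in the linked repository.

```python
# Minimal sketch (not the paper's exact pipeline): turn a price series into
# image-like 2D samples using a few common technical indicators over a fixed
# window of consecutive days, and label each window Buy (local valley),
# Sell (local peak), or Hold. All specific choices below are illustrative.
import numpy as np
import pandas as pd

def technical_indicators(close: pd.Series) -> pd.DataFrame:
    """Compute a few common indicators; each column becomes one image row."""
    df = pd.DataFrame(index=close.index)
    df["sma_10"] = close.rolling(10).mean()
    df["ema_10"] = close.ewm(span=10, adjust=False).mean()
    df["momentum_10"] = close.diff(10)
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    df["rsi_14"] = 100 - 100 / (1 + gain / (loss + 1e-9))
    df["volatility_10"] = close.rolling(10).std()
    return df

def make_images_and_labels(close: pd.Series, window: int = 15, lookaround: int = 5):
    """Slide a window of `window` consecutive days over the indicator matrix.

    Each sample is a (num_indicators x window) 2D "image"; the label marks the
    window's last day as Buy (local minimum), Sell (local maximum), or Hold.
    """
    ind = technical_indicators(close).dropna()
    prices = close.loc[ind.index].to_numpy()
    values = ind.to_numpy()                      # shape: (days, num_indicators)
    images, labels = [], []
    for end in range(window, len(values) - lookaround):
        img = values[end - window:end].T         # (num_indicators, window)
        # per-row min-max scaling so each indicator spans the full pixel range
        lo, hi = img.min(axis=1, keepdims=True), img.max(axis=1, keepdims=True)
        img = (img - lo) / (hi - lo + 1e-9)
        neighborhood = prices[end - 1 - lookaround:end + lookaround]
        center = prices[end - 1]
        if center == neighborhood.min():
            label = "Buy"        # local valley
        elif center == neighborhood.max():
            label = "Sell"       # local peak
        else:
            label = "Hold"
        images.append(img)
        labels.append(label)
    return np.stack(images), np.array(labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))  # synthetic series
    X, y = make_images_and_labels(prices)
    print(X.shape, dict(zip(*np.unique(y, return_counts=True))))
```

The resulting arrays can then be resized and fed to an image classifier such as ViT, DeiT, Swin, or ConvMixer, which is the role the transformer and patch-based models play in the paper.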