Taming the Chaos in Neural Network Time Series Predictions

Machine learning methods, such as Long Short-Term Memory (LSTM) neural networks, can predict real-life time series data. Here, we present a new approach to predicting time series data that combines interpolation techniques, randomly parameterized LSTM neural networks, and measures of signal complexity, which we refer to as complexity measures throughout this research. First, we interpolate the time series data under study. Next, we predict the time series data using an ensemble of randomly parameterized LSTM neural networks. Finally, we filter the ensemble prediction based on the complexity of the original data to improve predictability, i.e., we keep only predictions whose complexity is close to that of the training data. We test the proposed approach on five different univariate time series. We use linear and fractal interpolation to increase the amount of data, and we test five different complexity measures as ensemble filters: the Hurst exponent, Shannon's entropy, Fisher's information, SVD entropy, and the spectrum of Lyapunov exponents. Our results show that the interpolated predictions consistently outperform the non-interpolated ones. The best ensemble predictions always beat a baseline prediction from a neural network with only a single hidden LSTM, gated recurrent unit (GRU), or simple recurrent neural network (RNN) layer. The complexity filters can reduce the error of a random ensemble prediction by a factor of 10. Further, because we use randomly parameterized neural networks, no hyperparameter tuning is required. This makes the method useful for real-time time series prediction, since the usually costly and time-intensive optimization of hyperparameters is circumvented.
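The abstract describes a three-step pipeline: interpolate the data, predict with an ensemble of randomly parameterized LSTM networks, then filter the ensemble by complexity. As a rough illustration of the filtering step only, the Python sketch below computes SVD entropy from scratch and keeps ensemble members whose complexity stays close to that of the training data. The function names (svd_entropy, filter_ensemble), the tolerance tol, and the choice to measure complexity on the training data extended by each candidate prediction are illustrative assumptions, not the authors' implementation.

import numpy as np

def svd_entropy(x, order=3, delay=1):
    # Shannon entropy (in bits) of the normalized singular values of the
    # delay-embedding (trajectory) matrix of a 1-D signal.
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(order)])
    s = np.linalg.svd(emb, compute_uv=False)
    s = s[s > 0] / s.sum()          # normalize, drop exact zeros
    return -np.sum(s * np.log2(s))

def filter_ensemble(preds, train, tol=0.05, measure=svd_entropy):
    # Keep only ensemble members whose complexity, measured on the training
    # data extended by the candidate prediction, stays within `tol` of the
    # training data's own complexity; average the survivors. If no member
    # passes, fall back to the plain ensemble mean.
    target = measure(train)
    kept = [p for p in preds
            if abs(measure(np.concatenate([train, p])) - target) <= tol]
    return np.mean(kept, axis=0) if kept else np.mean(preds, axis=0)

In the paper, the ensemble comes from many randomly parameterized LSTM networks, and five complexity measures are compared; any of the others (Hurst exponent, Shannon's entropy, Fisher's information, the Lyapunov spectrum) could be substituted for the measure argument in this sketch.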

Bibliographic Details
Main Authors: Sebastian Raubitzek, Thomas Neubauer
Affiliation (both authors): Information and Software Engineering Group, Institute of Information Systems Engineering, Faculty of Informatics, TU Wien, Favoritenstrasse 9-11/194, 1040 Vienna, Austria
Format: Article
Language: English
Published: MDPI AG, 2021-10-01
Series: Entropy, Vol. 23, Iss. 11, Article 1424
ISSN: 1099-4300
DOI: 10.3390/e23111424
Subjects: Hurst exponent; chaos; Lyapunov exponents; neural networks; time series prediction; deep learning
Online Access: https://www.mdpi.com/1099-4300/23/11/1424