Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series
Reservoir computing has shown promising results in predicting chaotic time series. However, the main challenges of time-series prediction are reducing computational costs and increasing the prediction horizon. To this end, we propose the optimization of Echo State Networks (ESN), where the main goal is to increase the prediction horizon using a lower number of neurons than state-of-the-art models.
Main Authors: | Astrid Maritza González-Zapata; Esteban Tlelo-Cuautle; Brisbane Ovilla-Martinez; Israel Cruz-Vega; Luis Gerardo De la Fraga |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-10-01 |
Series: | Mathematics |
Subjects: | chaos; echo state network; Hindmarsh–Rose neuron; Lorenz system; time-series prediction; decimation |
Online Access: | https://www.mdpi.com/2227-7390/10/20/3886 |
_version_ | 1797471704992186368 |
---|---|
author | Astrid Maritza González-Zapata; Esteban Tlelo-Cuautle; Brisbane Ovilla-Martinez; Israel Cruz-Vega; Luis Gerardo De la Fraga |
author_facet | Astrid Maritza González-Zapata; Esteban Tlelo-Cuautle; Brisbane Ovilla-Martinez; Israel Cruz-Vega; Luis Gerardo De la Fraga |
author_sort | Astrid Maritza González-Zapata |
collection | DOAJ |
description | Reservoir computing has shown promising results in predicting chaotic time series. However, the main challenges of time-series prediction are reducing computational costs and increasing the prediction horizon. To this end, we propose the optimization of Echo State Networks (ESN), where the main goal is to increase the prediction horizon using a lower number of neurons compared with state-of-the-art models. In addition, we show that applying the decimation technique allows us to emulate an increase in the prediction horizon of up to 10,000 steps ahead. The optimization is performed by applying particle swarm optimization and considering two chaotic systems as case studies, namely the chaotic Hindmarsh–Rose neuron with slow dynamic behavior and the well-known Lorenz system. The results show that although similar works used from 200 to 5000 neurons in the ESN reservoir to predict from 120 to 700 steps ahead, our optimized ESN with decimation used 100 neurons in the reservoir and was capable of predicting up to 10,000 steps ahead. The main conclusion is that we ensured larger prediction horizons than recent works, achieving an improvement of more than one order of magnitude, while greatly reducing computational costs. |
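For readers who want a concrete picture of the pipeline the abstract describes, here is a minimal sketch (not the authors' code): a 100-neuron leaky ESN trained with a ridge-regression readout on a decimated Lorenz x-series, then run autonomously on its own outputs. The decimation factor, leak rate, spectral radius, and ridge coefficient below are illustrative assumptions; in the paper such hyperparameters would be among the quantities PSO tunes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Lorenz system, classic parameters, integrated with an RK4 step.
def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(s)
    k2 = f(s + dt / 2 * k1)
    k3 = f(s + dt / 2 * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def lorenz_x(n, dt=0.01, transient=1000):
    """Return n samples of the Lorenz x-coordinate after discarding a transient."""
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(transient):
        s = lorenz_step(s, dt)
    out = np.empty(n)
    for i in range(n):
        s = lorenz_step(s, dt)
        out[i] = s[0]
    return out

# Decimation: keep every d-th sample, so one predicted step spans d raw steps.
d = 10  # illustrative factor, not the paper's value
series = lorenz_x(3000 * d)[::d]

# Leaky ESN: 100 reservoir neurons as in the paper; remaining values are guesses.
N, leak, spectral_radius, ridge = 100, 0.3, 0.9, 1e-6
W_in = rng.random((N, 2)) - 0.5                      # input weights (bias + signal)
W = rng.random((N, N)) - 0.5
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

def step(x, u):
    return (1 - leak) * x + leak * np.tanh(W_in @ np.array([1.0, u]) + W @ x)

train, test = series[:2500], series[2500:]

# Teacher-forced run to collect reservoir states.
x = np.zeros(N)
states = np.empty((len(train) - 1, N))
for t, u in enumerate(train[:-1]):
    x = step(x, u)
    states[t] = x

# Ridge-regression readout, discarding a washout period.
washout = 100
A, y = states[washout:], train[1:][washout:]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ y)
train_rmse = np.sqrt(np.mean((A @ W_out - y) ** 2))

# Free-running (autonomous) prediction: feed the output back as input.
u = train[-1]
pred = np.empty(len(test))
for t in range(len(test)):
    x = step(x, u)
    u = x @ W_out
    pred[t] = u

err = np.sqrt(np.mean((pred[:50] - test[:50]) ** 2))
print(f"one-step training RMSE: {train_rmse:.4f}, 50-step free-run RMSE: {err:.4f}")
```

One way to read the abstract's decimation claim: with factor d, each autonomous step of the network corresponds to d integration steps of the raw series, so a fixed-size reservoir can cover a much longer horizon measured in raw samples.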
first_indexed | 2024-03-09T19:52:00Z |
format | Article |
id | doaj.art-350cc28f6cac41388223ccce10c6c968 |
institution | Directory Open Access Journal |
issn | 2227-7390 |
language | English |
last_indexed | 2024-03-09T19:52:00Z |
publishDate | 2022-10-01 |
publisher | MDPI AG |
record_format | Article |
series | Mathematics |
spelling | doaj.art-350cc28f6cac41388223ccce10c6c968 (updated 2023-11-24T01:08:00Z) | eng | MDPI AG | Mathematics | ISSN 2227-7390 | 2022-10-01 | vol. 10, no. 20, art. 3886 | DOI 10.3390/math10203886 | Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series | Astrid Maritza González-Zapata, Esteban Tlelo-Cuautle, Israel Cruz-Vega (Department of Electronics, INAOE, Tonantzintla, Puebla 72840, Mexico); Brisbane Ovilla-Martinez, Luis Gerardo De la Fraga (Computer Science Department, CINVESTAV, Av. IPN 2508, Mexico City 07360, Mexico) | https://www.mdpi.com/2227-7390/10/20/3886 | chaos; echo state network; Hindmarsh–Rose neuron; Lorenz system; time-series prediction; decimation |
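The particle swarm optimization mentioned in the abstract can be sketched independently of the reservoir as a box-bounded minimizer. The toy objective below merely stands in for "validation error of an ESN with these hyperparameters", and the two dimensions (mimicking a leak rate and a spectral radius) are illustrative assumptions, not the paper's actual search space.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over a box-bounded search space."""
    lo, hi = bounds
    dim = len(lo)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                  # per-particle best positions
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()          # swarm-wide best position
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive (own best) + social (swarm best) velocity update.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest, gbest_val = pos[np.argmin(vals)].copy(), vals.min()
    return gbest, gbest_val

# Toy stand-in objective: a shifted sphere with its minimum at (0.3, 0.9),
# playing the role of "ESN validation error as a function of (leak, radius)".
target = np.array([0.3, 0.9])
obj = lambda p: float(np.sum((p - target) ** 2))
best, best_val = pso(obj, (np.array([0.0, 0.0]), np.array([1.0, 1.5])))
print(best, best_val)
```

In the actual pipeline, each objective evaluation would train an ESN with the candidate hyperparameters and return its prediction error, which is why keeping the reservoir small (100 neurons) also keeps the optimization affordable.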
spellingShingle | Astrid Maritza González-Zapata; Esteban Tlelo-Cuautle; Brisbane Ovilla-Martinez; Israel Cruz-Vega; Luis Gerardo De la Fraga | Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series | Mathematics | chaos; echo state network; Hindmarsh–Rose neuron; Lorenz system; time-series prediction; decimation |
title | Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series |
title_full | Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series |
title_fullStr | Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series |
title_full_unstemmed | Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series |
title_short | Optimizing Echo State Networks for Enhancing Large Prediction Horizons of Chaotic Time Series |
title_sort | optimizing echo state networks for enhancing large prediction horizons of chaotic time series |
topic | chaos; echo state network; Hindmarsh–Rose neuron; Lorenz system; time-series prediction; decimation
url | https://www.mdpi.com/2227-7390/10/20/3886 |