Dimension reduction in recurrent networks by canonicalization

Many recurrent neural network machine learning paradigms can be formulated using state-space representations. The classical notion of canonical state-space realization is adapted in this paper to accommodate semi-infinite inputs so that it can be used as a dimension reduction tool in the recurrent networks setup. The so-called input forgetting property is identified as the key hypothesis that guarantees the existence and uniqueness (up to system isomorphisms) of canonical realizations for causal and time-invariant input/output systems with semi-infinite inputs. Additionally, the notion of optimal reduction coming from the theory of symmetric Hamiltonian systems is implemented in our setup to construct canonical realizations out of input forgetting but not necessarily canonical ones. These two procedures are studied in detail in the framework of linear fading memory input/output systems. Finally, the notion of implicit reduction using reproducing kernel Hilbert spaces (RKHS) is introduced which allows, for systems with linear readouts, to achieve dimension reduction without the need to actually compute the reduced spaces introduced in the first part of the paper.


Bibliographic Details
Main Authors: Grigoryeva, Lyudmila, Ortega, Juan-Pablo
Other Authors: School of Physical and Mathematical Sciences
Format: Journal Article
Language: English
Published: 2022
Subjects:
Online Access:https://hdl.handle.net/10356/161577
_version_ 1826110087170621440
author Grigoryeva, Lyudmila
Ortega, Juan-Pablo
author2 School of Physical and Mathematical Sciences
author_facet School of Physical and Mathematical Sciences
Grigoryeva, Lyudmila
Ortega, Juan-Pablo
author_sort Grigoryeva, Lyudmila
collection NTU
description Many recurrent neural network machine learning paradigms can be formulated using state-space representations. The classical notion of canonical state-space realization is adapted in this paper to accommodate semi-infinite inputs so that it can be used as a dimension reduction tool in the recurrent networks setup. The so-called input forgetting property is identified as the key hypothesis that guarantees the existence and uniqueness (up to system isomorphisms) of canonical realizations for causal and time-invariant input/output systems with semi-infinite inputs. Additionally, the notion of optimal reduction coming from the theory of symmetric Hamiltonian systems is implemented in our setup to construct canonical realizations out of input forgetting but not necessarily canonical ones. These two procedures are studied in detail in the framework of linear fading memory input/output systems. Finally, the notion of implicit reduction using reproducing kernel Hilbert spaces (RKHS) is introduced which allows, for systems with linear readouts, to achieve dimension reduction without the need to actually compute the reduced spaces introduced in the first part of the paper.
first_indexed 2024-10-01T02:29:03Z
format Journal Article
id ntu-10356/161577
institution Nanyang Technological University
language English
last_indexed 2024-10-01T02:29:03Z
publishDate 2022
record_format dspace
spelling ntu-10356/1615772023-02-28T20:07:39Z Dimension reduction in recurrent networks by canonicalization Grigoryeva, Lyudmila Ortega, Juan-Pablo School of Physical and Mathematical Sciences Science::Mathematics Recurrent Neural Network Reservoir Computing Many recurrent neural network machine learning paradigms can be formulated using state-space representations. The classical notion of canonical state-space realization is adapted in this paper to accommodate semi-infinite inputs so that it can be used as a dimension reduction tool in the recurrent networks setup. The so-called input forgetting property is identified as the key hypothesis that guarantees the existence and uniqueness (up to system isomorphisms) of canonical realizations for causal and time-invariant input/output systems with semi-infinite inputs. Additionally, the notion of optimal reduction coming from the theory of symmetric Hamiltonian systems is implemented in our setup to construct canonical realizations out of input forgetting but not necessarily canonical ones. These two procedures are studied in detail in the framework of linear fading memory input/output systems. Finally, the notion of implicit reduction using reproducing kernel Hilbert spaces (RKHS) is introduced which allows, for systems with linear readouts, to achieve dimension reduction without the need to actually compute the reduced spaces introduced in the first part of the paper. Submitted/Accepted version JPO acknowledges partial financial support coming from the Research Commission of the Universität Sankt Gallen and the Swiss National Science Foundation (grant number 200021 175801/1). 2022-09-09T02:23:10Z 2022-09-09T02:23:10Z 2021 Journal Article Grigoryeva, L. & Ortega, J. (2021). Dimension reduction in recurrent networks by canonicalization. Journal of Geometric Mechanics, 13(4), 647-677.
https://dx.doi.org/10.3934/jgm.2021028 1941-4889 https://hdl.handle.net/10356/161577 10.3934/jgm.2021028 2-s2.0-85122382302 4 13 647 677 en Journal of Geometric Mechanics © 2022 American Institute of Mathematical Sciences. All rights reserved. This article has been published in a revised form in Journal of Geometric Mechanics (http://dx.doi.org/10.3934/jgm.2021028). This version is free to download for private research and study only. Not for redistribution, re-sale or use in derivative works. application/pdf
spellingShingle Science::Mathematics
Recurrent Neural Network
Reservoir Computing
Grigoryeva, Lyudmila
Ortega, Juan-Pablo
Dimension reduction in recurrent networks by canonicalization
title Dimension reduction in recurrent networks by canonicalization
title_full Dimension reduction in recurrent networks by canonicalization
title_fullStr Dimension reduction in recurrent networks by canonicalization
title_full_unstemmed Dimension reduction in recurrent networks by canonicalization
title_short Dimension reduction in recurrent networks by canonicalization
title_sort dimension reduction in recurrent networks by canonicalization
topic Science::Mathematics
Recurrent Neural Network
Reservoir Computing
url https://hdl.handle.net/10356/161577
work_keys_str_mv AT grigoryevalyudmila dimensionreductioninrecurrentnetworksbycanonicalization
AT ortegajuanpablo dimensionreductioninrecurrentnetworksbycanonicalization