Random vector functional link networks for function approximation on manifolds

Bibliographic Details
Main Authors: Deanna Needell, Aaron A. Nelson, Rayan Saab, Palina Salanevich, Olov Schavemaker
Format: Article
Language: English
Published: Frontiers Media S.A. 2024-04-01
Series: Frontiers in Applied Mathematics and Statistics
Subjects:
Online Access: https://www.frontiersin.org/articles/10.3389/fams.2024.1284706/full
author Deanna Needell
Aaron A. Nelson
Rayan Saab
Palina Salanevich
Olov Schavemaker
collection DOAJ
description The learning speed of feed-forward neural networks is notoriously slow and has presented a bottleneck in deep learning applications for several decades. For instance, gradient-based learning algorithms, which are used extensively to train neural networks, tend to work slowly when all of the network parameters must be iteratively tuned. To counter this, both researchers and practitioners have tried introducing randomness to reduce the learning requirement. Based on the original construction of Igelnik and Pao, single-layer neural networks with random input-to-hidden layer weights and biases have seen success in practice, but the necessary theoretical justification is lacking. In this study, we begin to fill this theoretical gap. We provide a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with squared approximation error decaying asymptotically like O(1/n) in the number n of network nodes. We then extend this result to the non-asymptotic setting, using a concentration inequality for Monte-Carlo integral approximations to prove that one can achieve any desired approximation error with high probability provided n is sufficiently large. We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space, providing theoretical guarantees in both asymptotic and non-asymptotic forms. Finally, we illustrate our results on manifolds with numerical experiments.
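The abstract above describes the random vector functional link (RVFL) idea only in words: input-to-hidden weights and biases are drawn at random and left untrained, and only the output weights are learned. The Python below is a minimal illustrative sketch of that idea under assumed choices (a tanh activation, uniformly drawn hidden weights, a least-squares fit of the output weights, and the helper names fit_rvfl and predict_rvfl); it is not the authors' exact Igelnik-Pao construction.

import numpy as np

def fit_rvfl(X, y, n_nodes=300, scale=5.0, seed=None):
    # Hidden-layer weights and biases are random and never trained.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.uniform(-scale, scale, size=(d, n_nodes))
    b = rng.uniform(-scale, scale, size=n_nodes)
    H = np.tanh(X @ W + b)  # random hidden-layer features
    # Only the output weights are learned, via ordinary least squares.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: approximate f(x) = sin(2*pi*x) on [0, 1] from samples.
X = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X).ravel()
W, b, beta = fit_rvfl(X, y, n_nodes=300, scale=5.0, seed=0)
print("max abs error:", np.max(np.abs(predict_rvfl(X, W, b, beta) - y)))

In the paper's setting, the squared approximation error of such a network decays asymptotically like O(1/n) in the number n of hidden nodes, so increasing n_nodes in the sketch should, with high probability, tighten the fit.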
first_indexed 2024-04-24T08:04:02Z
format Article
id doaj.art-93aa007b2d714c50bbb289b0bfe02f4c
institution Directory Open Access Journal
issn 2297-4687
language English
last_indexed 2024-04-24T08:04:02Z
publishDate 2024-04-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Applied Mathematics and Statistics
DOI: 10.3389/fams.2024.1284706 (Frontiers in Applied Mathematics and Statistics, vol. 10, published 2024-04-01 by Frontiers Media S.A.)
Author affiliations:
Deanna Needell: Department of Mathematics, University of California, Los Angeles, Los Angeles, CA, United States
Aaron A. Nelson: Department of Mathematical Sciences, United States Air Force Academy, Colorado Springs, CO, United States
Rayan Saab: Department of Mathematics and Halıcıoğlu Data Science Institute, University of California, San Diego, San Diego, CA, United States
Palina Salanevich: Mathematical Institute, Utrecht University, Utrecht, Netherlands
Olov Schavemaker: Mathematical Institute, Utrecht University, Utrecht, Netherlands
title Random vector functional link networks for function approximation on manifolds
topic machine learning
feed-forward neural networks
function approximation
smooth manifold
random vector functional link
url https://www.frontiersin.org/articles/10.3389/fams.2024.1284706/full