An analysis of training and generalization errors in shallow and deep networks

Description
© 2019 Elsevier Ltd. This paper is motivated by an open problem concerning deep networks: the apparent absence of over-fitting despite heavy over-parametrization that allows perfect fitting of the training data. We analyze this phenomenon for regression problems in which each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is an inappropriate measure of the generalization error when approximating compositional functions, since it fails to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss and, in some cases, as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at any given test point.

Bibliographic Details
Main Authors: Mhaskar, HN; Poggio, T
Other Authors: Center for Brains, Minds, and Machines; McGovern Institute for Brain Research at MIT
Format: Journal Article
Language: English
Published: Neural Networks, 121. Elsevier BV, 2020
DOI: 10.1016/J.NEUNET.2019.08.028
License: Creative Commons Attribution-NonCommercial-NoDerivs 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Online Access: https://hdl.handle.net/1721.1/138295
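
To make the abstract's central distinction concrete, here is a minimal sketch (not from the paper) contrasting the two ways of measuring generalization error it discusses: the expected square loss versus the maximum (sup-norm) loss over the domain. The target function, the small trigonometric-polynomial model standing in for a network with periodic activations, and all parameter values below are illustrative assumptions only.

```python
# Illustrative sketch: mean-square vs. sup-norm generalization error.
# The target f and the trigonometric-polynomial fit are hypothetical
# choices, not the construction used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical smooth periodic target on [-pi, pi].
    return np.sin(x) + 0.5 * np.cos(3 * x)

def features(x, degree=4):
    # Trigonometric feature map: 1, cos(kx), sin(kx) for k = 1..degree.
    cols = [np.ones_like(x)]
    for k in range(1, degree + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    return np.stack(cols, axis=-1)

# Least-squares fit on a small random training sample.
x_train = rng.uniform(-np.pi, np.pi, size=50)
coef, *_ = np.linalg.lstsq(features(x_train), f(x_train), rcond=None)

# Evaluate both error measures on a dense test grid.
x_test = np.linspace(-np.pi, np.pi, 2000)
err = features(x_test) @ coef - f(x_test)
mse = np.mean(err ** 2)       # expected square loss (L2-type measure)
sup = np.max(np.abs(err))     # maximum loss (sup-norm measure)
print(f"mean squared error: {mse:.2e}, sup-norm error: {sup:.2e}")
```

The sup-norm error bounds the mean-square error from above (up to normalization), so controlling it is the stronger guarantee; this is the sense in which the paper measures generalization.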