An analysis of training and generalization errors in shallow and deep networks

Bibliographic Details
Main Authors: Mhaskar, H.N., Poggio, Tomaso
Format: Technical Report
Published: Center for Brains, Minds and Machines (CBMM), arXiv.org 2019
Online Access: https://hdl.handle.net/1721.1/121183
collection MIT
description This paper is motivated by an open problem around deep networks, namely, the apparent absence of overfitting despite large over-parametrization, which allows perfect fitting of the training data. We analyze this phenomenon in the case of regression problems in which each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is an inappropriate measure of the generalization error in the approximation of compositional functions if one wants to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of the maximum loss, and sometimes as a pointwise error. We give estimates of exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at which test points.
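
The contrast the abstract draws, between the expected square loss and the maximum (uniform) loss as measures of generalization, can be made concrete with a small numerical sketch. The following Python snippet is purely illustrative and is not the construction analyzed in the paper: the target function, the network width, and the random-feature fitting of only the output weights are all assumptions made for the example. It interpolates a few training points with an over-parametrized shallow network whose units evaluate a periodic (sine) activation, then reports both the mean squared error and the sup-norm error on a dense test grid.

    # Illustrative sketch only (not the paper's construction): over-parametrized
    # shallow network with periodic (sine) activation units, fit to interpolate
    # a few training points, then evaluated with two notions of test error.
    import numpy as np

    rng = np.random.default_rng(0)

    def target(x):
        # Smooth stand-in for the unknown regression function (an assumption).
        return np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)

    # Width far exceeds the number of training points, so exact fitting is possible.
    n_train, width = 10, 200
    x_train = np.linspace(0.0, 1.0, n_train)
    y_train = target(x_train)

    # Random inner weights/biases; only the output weights are fitted, so training
    # reduces to a (minimum-norm) linear least-squares problem.
    W = rng.normal(scale=3.0, size=width)
    b = rng.uniform(0, 2 * np.pi, size=width)

    def features(x):
        return np.sin(np.outer(x, W) + b)   # periodic activation units

    coef, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

    def model(x):
        return features(x) @ coef

    x_test = np.linspace(0.0, 1.0, 2000)
    resid = target(x_test) - model(x_test)

    print("training MSE  :", np.mean((y_train - model(x_train)) ** 2))  # ~0 (interpolation)
    print("test MSE      :", np.mean(resid ** 2))                       # expected square loss
    print("test sup-error:", np.max(np.abs(resid)))                     # maximum (uniform) loss

In this interpolation regime the training error is numerically zero, while the two test-error figures can differ noticeably; the sup-norm figure is the kind of worst-case quantity the abstract argues should be used for compositional functions.
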
format Technical Report
id mit-1721.1/121183
institution Massachusetts Institute of Technology
publishDate 2019
publisher Center for Brains, Minds and Machines (CBMM), arXiv.org
record_format dspace
funding This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
date_deposited 2019-05-31T15:32:43Z
date_issued 2019-05-30
document_type Technical Report; Working Paper; Other
series CBMM Memo Series;098
file_format application/pdf