Classical generalization bounds are surprisingly tight for Deep Networks
Deep networks are usually trained and tested in a regime in which the training classification error is not a good predictor of the test error. Thus the consensus has been that generalization, defined as convergence of the empirical to the expected error, does not hold for deep networks. Here we show...
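As a minimal sketch of the definition used in the abstract (notation assumed, not part of the record itself): generalization means the gap between the empirical error on the training set and the expected error under the data distribution vanishes as the sample size grows.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Generalization as convergence of the empirical to the expected error.
% Notation is assumed for illustration: f_n is the trained network,
% \ell a loss, S_n = {(x_i, y_i)} the training set of size n,
% and D the data distribution.
\[
\left|
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f_n(x_i), y_i\bigr)}_{\text{empirical error}}
  \;-\;
  \underbrace{\mathbb{E}_{(x,y)\sim\mathcal{D}}\,\ell\bigl(f_n(x), y\bigr)}_{\text{expected error}}
\right|
\;\longrightarrow\; 0
\quad\text{as } n \to \infty .
\]
\end{document}
```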
| Main Authors: | Liao, Qianli; Miranda, Brando; Hidary, Jack; Poggio, Tomaso |
|---|---|
| Format: | Technical Report |
| Language: | en_US |
| Published: | Center for Brains, Minds and Machines (CBMM), 2018 |
| Online Access: | http://hdl.handle.net/1721.1/116911 |
Similar Items

- Theory IIIb: Generalization in Deep Networks
  by: Poggio, Tomaso, et al.
  Published: (2018)
- Theory of Deep Learning III: explaining the non-overfitting puzzle
  by: Poggio, Tomaso, et al.
  Published: (2018)
- Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?
  by: Poggio, Tomaso, et al.
  Published: (2016)
- Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
  by: Mhaskar, Hrushikesh, et al.
  Published: (2017)
- Implicit dynamic regularization in deep networks
  by: Poggio, Tomaso, et al.
  Published: (2020)