Deep vs. shallow networks : An approximation theory perspective
The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which deep convolutional neural networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. …
Main Authors: Mhaskar, Hrushikesh; Poggio, Tomaso
Format: Technical Report
Language: English
Published: Center for Brains, Minds and Machines (CBMM), arXiv, 2016
Online Access: http://hdl.handle.net/1721.1/103911
Similar Items
- Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?
  by: Poggio, Tomaso, et al.
  Published: (2016)
- Do Deep Neural Networks Suffer from Crowding?
  by: Volokitin, Anna, et al.
  Published: (2017)
- An analysis of training and generalization errors in shallow and deep networks
  by: Mhaskar, Hrushikesh, et al.
  Published: (2018)
- I-theory on depth vs width: hierarchical function composition
  by: Poggio, Tomaso, et al.
  Published: (2015)
- Deep vs. shallow networks: An approximation theory perspective
  by: Mhaskar, HN, et al.
  Published: (2021)