Norm-Based Generalization Bounds for Compositionally Sparse Neural Networks
In this paper, we investigate the Rademacher complexity of deep sparse neural networks, where each neuron receives a small number of inputs. We prove generalization bounds for multilayered sparse ReLU neural networks, including convolutional neural networks. These bounds differ from previous ones, as they consider the norms of the convolutional filters instead of the norms of the associated Toeplitz matrices, independently of weight sharing between neurons. As we show theoretically, these bounds may be orders of magnitude better than standard norm-based generalization bounds, and empirically, they are almost non-vacuous in estimating generalization in various simple classification problems. Taken together, these results suggest that compositional sparsity of the underlying target function is critical to the success of deep neural networks.
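To make the comparison concrete, here is a minimal numerical sketch (the filter size and input length are illustrative assumptions, not values from the memo) of why a bound stated in terms of filter norms can be much tighter than one stated in terms of the associated Toeplitz matrices: applying a 1-D filter at every position of the input stacks a shifted copy of the filter into each row of the Toeplitz matrix, so the matrix norm grows with the input length while the filter norm does not.

```python
import numpy as np

# Illustrative sketch (values assumed, not taken from the memo): compare the
# norm of a small 1-D convolutional filter with the Frobenius norm of the
# Toeplitz matrix that applies it at every position (stride 1, no padding).
rng = np.random.default_rng(0)
k, n = 3, 1024                  # filter size, input length (assumptions)
w = rng.standard_normal(k)      # the convolutional filter

m = n - k + 1                   # number of output positions
T = np.zeros((m, n))
for i in range(m):
    T[i, i:i + k] = w           # row i applies w at offset i

# Each row of T is a copy of w, so ||T||_F = sqrt(m) * ||w||_2.
print("||w||_2 =", np.linalg.norm(w))
print("||T||_F =", np.linalg.norm(T))
print("ratio   =", np.linalg.norm(T) / np.linalg.norm(w), "~ sqrt(m) =", np.sqrt(m))
```

With k = 3 and n = 1024 the ratio is about 32, and such factors compound across layers, which is one way a bound in terms of filter norms can end up orders of magnitude smaller than one in terms of Toeplitz-matrix norms.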
Main Authors: | Galanti, Tomer; Xu, Mengjia; Galanti, Liane; Poggio, Tomaso |
---|---|
Format: | Article |
Published: | Center for Brains, Minds and Machines (CBMM), 2023 |
Online Access: | https://hdl.handle.net/1721.1/148230 |
author | Galanti, Tomer; Xu, Mengjia; Galanti, Liane; Poggio, Tomaso |
collection | MIT |
description | In this paper, we investigate the Rademacher complexity of deep sparse neural networks, where each neuron receives a small number of inputs. We prove generalization bounds for multilayered sparse ReLU neural networks, including convolutional neural networks. These bounds differ from previous ones, as they consider the norms of the convolutional filters instead of the norms of the associated Toeplitz matrices, independently of weight sharing between neurons.
As we show theoretically, these bounds may be orders of magnitude better than standard norm-based generalization bounds, and empirically, they are almost non-vacuous in estimating generalization in various simple classification problems. Taken together, these results suggest that compositional sparsity of the underlying target function is critical to the success of deep neural networks. |
format | Article |
id | mit-1721.1/148230 |
institution | Massachusetts Institute of Technology |
publishDate | 2023 |
publisher | Center for Brains, Minds and Machines (CBMM) |
record_format | dspace |
funding | This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. |
date_issued | 2023-02-14 |
type | Article; Technical Report; Working Paper |
series | CBMM Memo;139 |
file_format | application/pdf |
title | Norm-Based Generalization Bounds for Compositionally Sparse Neural Networks |
url | https://hdl.handle.net/1721.1/148230 |