SGD Noise and Implicit Low-Rank Bias in Deep Neural Networks

Bibliographic Details
Main Authors: Galanti, Tomer, Poggio, Tomaso
Format: Article
Published: Center for Brains, Minds and Machines (CBMM) 2022
Online Access: https://hdl.handle.net/1721.1/141380
Description
Summary: We analyze deep ReLU neural networks trained with mini-batch stochastic gradient descent (SGD) and weight decay. We prove that the source of the SGD noise is an implicit low-rank constraint across all of the weight matrices within the network. Furthermore, we show, both theoretically and empirically, that when training a neural network using SGD with a small batch size, the resulting weight matrices are expected to be of low rank. Our analysis relies on a minimal set of assumptions; the neural networks may include convolutional layers, residual connections, and batch normalization layers.
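
To make the claim concrete, the following is a minimal empirical sketch (not code from the paper): it trains a small ReLU MLP with small-batch SGD and weight decay on synthetic data, then reports the effective rank of each weight matrix. The architecture, data, hyperparameters, and rank threshold are all illustrative assumptions.

```python
# Minimal sketch: small-batch SGD + weight decay on a ReLU MLP,
# then inspect the singular-value spectrum of each weight matrix.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data whose targets depend on a low-dimensional projection.
n, d = 2048, 64
X = torch.randn(n, d)
y = torch.sin(X[:, :4].sum(dim=1, keepdim=True))

model = nn.Sequential(
    nn.Linear(d, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

# Small batch size and weight decay: the regime analyzed in the paper.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=5e-4)
loss_fn = nn.MSELoss()
batch_size = 8

for epoch in range(200):
    perm = torch.randperm(n)
    for i in range(0, n, batch_size):
        idx = perm[i:i + batch_size]
        opt.zero_grad()
        loss_fn(model(X[idx]), y[idx]).backward()
        opt.step()

def effective_rank(W: torch.Tensor, tol: float = 1e-2) -> int:
    """Number of singular values above a fraction of the largest one (threshold is an assumption)."""
    s = torch.linalg.svdvals(W)
    return int((s > tol * s[0]).sum())

for name, p in model.named_parameters():
    if p.ndim == 2:  # weight matrices only, skip biases
        print(f"{name}: shape {tuple(p.shape)}, effective rank {effective_rank(p.detach())}")
```

If the paper's prediction holds in this toy setting, the reported effective ranks of the hidden-layer matrices should sit well below the full dimension, and should grow as the batch size is increased or weight decay is removed.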