The Low-rank Simplicity Bias in Deep Networks
Modern deep neural networks are highly over-parameterized compared to the data on which they are trained, yet they often generalize remarkably well. A flurry of recent work has asked: why do deep networks not overfit to their training data? In this work, we make a series of empirical observations th...
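The low-rank bias named in the title can be illustrated numerically: at random initialization, the end-to-end map of a deeper linear network (a product of more weight matrices) tends to have lower effective rank. Below is a minimal NumPy sketch of that effect; it assumes effective rank is measured with the entropy-based definition of Roy & Vetterli (2007), and the matrix size `n`, the 1/sqrt(n) initialization scale, and the depths shown are illustrative choices, not values taken from the thesis.

```python
import numpy as np

def effective_rank(A: np.ndarray) -> float:
    """Entropy-based effective rank (Roy & Vetterli, 2007):
    exp of the Shannon entropy of the normalized singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]  # drop exact zeros before taking logs
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
n = 64  # illustrative width
for depth in (1, 2, 4, 8):
    # End-to-end linear map of a depth-`depth` linear network at random init.
    W = np.eye(n)
    for _ in range(depth):
        W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n)) @ W
    print(f"depth={depth}: effective rank ~ {effective_rank(W):.1f}")
```

Running this shows the effective rank of the product shrinking as more random factors are multiplied in, which is the kind of depth-induced low-rank behavior the abstract refers to.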
| Main Author: | Huh, Minyoung |
|---|---|
| Other Authors: | Isola, Phillip J. |
| Format: | Thesis |
| Published: | Massachusetts Institute of Technology, 2022 |
| Online Access: | https://hdl.handle.net/1721.1/144726 |
Similar Items
- SGD Noise and Implicit Low-Rank Bias in Deep Neural Networks
  by: Galanti, Tomer, et al.
  Published: (2022)
- SGD and Weight Decay Provably Induce a Low-Rank Bias in Deep Neural Networks
  by: Galanti, Tomer, et al.
  Published: (2023)
- Examining simplicity.
  by: Wong, Ka Man.
  Published: (2009)
- Simplicity and Probability Weighting in Choice under Risk
  by: Fudenberg, Drew, et al.
  Published: (2022)
- Surprising simplicity in the modeling of dynamic granular intrusion
  by: Agarwal, Shashank, et al.
  Published: (2022)