Summary: | Training neural network architectures on Internet-scale datasets has led to many recent advances in machine learning. However, the mechanisms by which neural networks learn from data remain largely opaque. This thesis develops a mechanistic understanding of how neural networks learn in several settings, along with new tools for analyzing trained networks. First, we study data in which the labels depend on an unknown low-dimensional subspace of the input (i.e., the multi-index setting). We identify the “leap complexity”, a quantity that we argue characterizes how much data networks need in order to learn. Our analysis reveals a saddle-to-saddle dynamic in training, in which the network alternates between loss plateaus and sharp drops in the loss. Furthermore, we show that the weights evolve so that the trained weights are a low-rank perturbation of the initial weights, an effect we observe empirically in state-of-the-art transformer models trained on vision and language data. Second, we study the ability of language models to learn to reason. On a family of “relational reasoning” tasks, we prove that modern transformers learn to reason given enough data, whereas classical fully-connected architectures do not. Our analysis also suggests small architectural modifications that improve data efficiency. Finally, we construct new tools for interpreting trained networks: (a) a distance between two models that captures their functional similarity, and (b) a distillation algorithm that efficiently extracts interpretable decision-tree structure from a trained model when such structure is present.