Effect of Depth and Width on Local Minima in Deep Learning

Bibliographic Details
Main Authors: Kawaguchi, Kenji; Huang, Jiaoyang; Kaelbling, Leslie Pack
Format: Article (Journal Article)
Language: English
Published: MIT Press - Journals, 2021
Journal: Neural Computation (2019)
DOI: 10.1162/neco_a_01195
License: Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/)
Institution: Massachusetts Institute of Technology
Online Access: https://hdl.handle.net/1721.1/136181

Description: © 2019 Massachusetts Institute of Technology. For nonconvex optimization in machine learning, this article proves that every local minimum achieves the globally optimal value of the perturbable gradient basis model at any differentiable point. As a result, nonconvex machine learning is theoretically as well supported as convex machine learning with a handcrafted basis, in terms of the loss at differentiable local minima, except when a preference is given to the handcrafted basis over the perturbable gradient basis. These results are proved under mild assumptions and are therefore directly applicable to many machine learning models, including practical deep neural networks, without any modification of practical methods. Furthermore, as special cases of the general results, the article improves or complements several state-of-the-art theoretical results on deep neural networks, deep residual networks, and overparameterized deep neural networks, with a unified proof technique and novel geometric insights. A special case of the results also contributes to the theoretical foundation of representation learning.
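
As a rough guide to the headline claim, the LaTeX sketch below restates it in notation introduced here purely for illustration; it is not the paper's own formulation. Assumed symbols: f(x; theta) is the model, ell the per-example loss, theta* a differentiable local minimum, and the alpha_k weight the gradient basis functions (the partial derivatives of f with respect to each parameter) at theta*; the paper's "perturbable" basis additionally allows small perturbations of theta*, which this sketch omits.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Informal sketch (assumed notation, not the paper's own statement):
% at a differentiable local minimum \theta^*, the training loss over
% m examples equals the best loss attainable by any linear combination
% of the gradient basis functions \partial f / \partial \theta_k
% evaluated at \theta^*.
\[
  \frac{1}{m}\sum_{i=1}^{m}
    \ell\bigl(f(x_i;\theta^{*}),\,y_i\bigr)
  \;=\;
  \inf_{\alpha}\;
  \frac{1}{m}\sum_{i=1}^{m}
    \ell\Bigl(\,\sum_{k}\alpha_{k}\,
      \frac{\partial f(x_i;\theta^{*})}{\partial \theta_{k}},\;
      y_i\Bigr)
\]
\end{document}

Read this way, a differentiable local minimum is as good as the best model spanned by that gradient basis, which is how the abstract's comparison with "convex machine learning with a handcrafted basis" arises.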