Small nonlinearities in activation functions create bad local minima in neural networks

© 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. We investigate the loss surface of neural networks. We prove that even for one-hidden-layer networks with “slightest” nonlinearity, the empirical risks have spurious local minima in most cases. Our results thus indicate that in general “no spurious local minima” is a property limited to deep linear networks, and insights obtained from linear networks may not be robust. Specifically, for ReLU(-like) networks we constructively prove that for almost all practical datasets there exist infinitely many local minima. We also present a counterexample for more general activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad local minimum. Our results make the least restrictive assumptions relative to existing results on spurious local optima in neural networks. We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other results on this topic.
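Illustration (not part of the record): the object studied in the abstract is the empirical risk of a one-hidden-layer network. In illustrative notation of our own choosing (training pairs $(x_i, y_i)$, hidden-layer weights $W_1$, output weights $W_2$, loss $\ell$, entrywise activation $h$), that risk can be written as

\[
\hat{R}(W_1, W_2) \;=\; \frac{1}{m}\sum_{i=1}^{m} \ell\bigl(W_2\, h(W_1 x_i),\; y_i\bigr),
\qquad
h(t) \;=\;
\begin{cases}
s_{+}\, t, & t \ge 0,\\
s_{-}\, t, & t < 0.
\end{cases}
\]

Here $h$ is a ReLU-like piecewise-linear activation ($s_{+}=1$, $s_{-}=0$ recovers the standard ReLU). Reading the abstract, the “slightest” nonlinearity plausibly corresponds to any $s_{+} \neq s_{-}$, the regime in which the empirical risk is claimed to admit spurious (bad) local minima for almost all datasets, in contrast to the linear case $s_{+} = s_{-}$.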


Bibliographic Details
Main Authors: Yun, Chulhee; Sra, Suvrit; Jadbabaie, Ali
Other Authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/1721.1/137454
Departments: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Department of Civil and Environmental Engineering; Massachusetts Institute of Technology. Institute for Data, Systems, and Society
Type: Conference Paper (http://purl.org/eprint/type/ConferencePaper)
Conference: 7th International Conference on Learning Representations, ICLR 2019
Citation: Yun, Chulhee, Sra, Suvrit and Jadbabaie, Ali. 2019. "Small nonlinearities in activation functions create bad local minima in neural networks." 7th International Conference on Learning Representations, ICLR 2019.
Date Issued: 2019
Rights: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
File Format: application/pdf
Source: arXiv