Escaping saddle points in constrained optimization
In this paper, we study the problem of escaping saddle points in smooth nonconvex optimization problems over a convex set C. We propose a generic framework that yields convergence to a second-order stationary point of the problem, if the convex set C is simple for a quadratic objective fu...
Main Authors: Mokhtari, Aryan; Ozdaglar, Asuman E.; Jadbabaie-Moghadam, Ali
Other Authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language: English
Published: Neural Information Processing Systems Foundation, 2019
Online Access: https://hdl.handle.net/1721.1/121540
Similar Items
- Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
  by: Mokhtari, Aryan, et al.
  Published: (2022)
- Efficiently testing local optimality and escaping saddles for ReLU networks
  by: Jadbabaie, Ali, et al.
  Published: (2021)
- Escaping Saddle Points with Adaptive Gradient Methods
  by: Staib, Matthew, et al.
  Published: (2021)
- On stability in the saddle-point sense
  by: Levhari, David, et al.
  Published: (2011)
- Pseudonormality and a Lagrange multiplier theory for constrained optimization
  by: Ozdaglar, Asuman E.
  Published: (2005)