Escaping saddle points in constrained optimization

In this paper, we study the problem of escaping from saddle points in smooth nonconvex optimization problems subject to a convex set C. We propose a generic framework that yields convergence to a second-order stationary point of the problem, if the convex set C is simple for a quadratic objective function. Specifically, our results hold if one can find a ρ-approximate solution of a quadratic program subject to C in polynomial time, where ρ < 1 is a positive constant that depends on the structure of the set C. Under this condition, we show that the sequence of iterates generated by the proposed framework reaches an (ε, γ)-second-order stationary point (SOSP) in at most O(max{ε⁻², ρ⁻³γ⁻³}) iterations. We further characterize the overall complexity of reaching an SOSP when the convex set C can be written as a set of quadratic constraints and the objective function Hessian has a specific structure over the convex set C. Finally, we extend our results to the stochastic setting and characterize the number of stochastic gradient and Hessian evaluations needed to reach an (ε, γ)-SOSP.
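The two-phase framework the abstract describes can be illustrated with a minimal sketch. Everything concrete below is an assumption for illustration only: C is taken to be the unit Euclidean ball, the paper's ρ-approximate quadratic-program oracle is replaced by crude random sampling over C, and all function names and constants are hypothetical, not the authors' implementation.

```python
import numpy as np

def project_ball(x, r=1.0):
    # Euclidean projection onto the ball ||x|| <= r (our stand-in for C)
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

def escape_saddle_sketch(grad, hess, x0, eps=1e-3, gamma=1e-2,
                         eta=0.1, max_iter=1000):
    """Illustrative two-phase loop: projected gradient steps until the
    gradient mapping is small, then an (approximately solved) quadratic
    subproblem over C to look for a feasible negative-curvature direction."""
    x = project_ball(x0)
    for _ in range(max_iter):
        g = grad(x)
        # First-order phase: projected gradient step; the gradient-mapping
        # norm ||x_new - x|| / eta plays the role of the ε criterion.
        x_new = project_ball(x - eta * g)
        if np.linalg.norm(x_new - x) / eta > eps:
            x = x_new
            continue
        # Second-order phase: minimize the quadratic model
        #   q(u) = g^T (u - x) + 0.5 (u - x)^T H (u - x)  over u in C.
        # Crude random sampling stands in for the rho-approximate QP oracle.
        H = hess(x)
        best_u, best_q = x, 0.0
        for _ in range(200):
            u = project_ball(x + np.random.randn(*x.shape))
            d = u - x
            q = g @ d + 0.5 * d @ H @ d
            if q < best_q:
                best_u, best_q = u, q
        if best_q < -gamma:
            # Escape step: a convex combination of x and best_u stays in C.
            x = x + 0.5 * (best_u - x)
        else:
            return x  # no feasible descent found: approximate (eps, gamma)-SOSP
    return x
```

On a toy saddle such as f(x) = 0.5(x₀² − x₁²), this sketch first drifts toward the saddle at the origin, then the quadratic subproblem detects the negative-curvature direction along x₁ and the iterates escape to the boundary of the ball.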

Full description

Bibliographic details
Main authors: Mokhtari, Aryan; Ozdaglar, Asuman E; Jadbabaie-Moghadam, Ali
Other authors: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Format: Article
Language: English
Published: Neural Information Processing Systems Foundation, 2019
Online access: https://hdl.handle.net/1721.1/121540
Other affiliations: Massachusetts Institute of Technology. Department of Civil and Environmental Engineering
Funding: United States. Defense Advanced Research Projects Agency, Lagrange Program; United States. Office of Naval Research, Basic Research Challenge
Conference: 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada, Dec. 3-8, 2018
Citation: Mokhtari, Aryan, Asuman Ozdaglar and Ali Jadbabaie. "Escaping Saddle Points in Constrained Optimization." 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada, Dec. 3-8, 2018.
Publisher version: https://papers.nips.cc/paper/7621-escaping-saddle-points-in-constrained-optimization
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.