On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems.

Bibliographic Details
Main Authors: Cartis, C.; Gould, N.; Toint, P.
Format: Journal article
Language: English
Published: 2010
Description
Summary: It is shown that the steepest-descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(ε^{-2}) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε^{-2}) evaluations known for steepest descent is tight, and that Newton's method may be as slow as the steepest-descent method in the worst case. The improved evaluation complexity bound of O(ε^{-3/2}) evaluations known for cubically regularized Newton's methods is also shown to be tight. © 2010 Society for Industrial and Applied Mathematics.
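
As a rough illustration of the termination criterion discussed in the summary, the sketch below runs fixed-step steepest descent until the gradient norm falls below ε. The step size, iteration cap, and one-dimensional nonconvex test function are illustrative assumptions, not taken from the paper; under a Lipschitz-gradient assumption and a suitably small step, classical theory gives the O(ε^{-2}) evaluation bound whose tightness the paper establishes.

    import numpy as np

    def steepest_descent(grad, x0, eps=1e-6, alpha=0.05, max_iter=10**7):
        # Fixed-step steepest descent: stop once ||grad f(x)|| <= eps.
        # Returns the final iterate and the iteration count at termination.
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) <= eps:
                return x, k
            x = x - alpha * g
        return x, max_iter

    # Illustrative nonconvex test problem (an assumption for this sketch):
    # f(x) = 0.25*x^4 - 0.5*x^2 + 0.1*x, with gradient x^3 - x + 0.1.
    grad = lambda x: np.array([x[0]**3 - x[0] + 0.1])

    x_final, iters = steepest_descent(grad, x0=[3.0], eps=1e-6)
    print(f"||grad|| <= 1e-6 after {iters} iterations, x = {x_final}")

Counting iterations of such a loop for decreasing values of ε is one simple way to observe how the evaluation count scales as the tolerance tightens, which is the quantity the paper's worst-case examples are built around.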