A hypothesis about the rate of global convergence for optimal methods (Newton's type) in smooth convex optimization
In this paper we discuss lower bounds on the convergence rate of high-order convex optimization methods and the attainability of these bounds. We formulate a hypothesis that covers all the cases; notably, we state this hypothesis without proof. The Newton method is the most famous method that uses second derivatives (the Hessian) of the objective...
Main Authors: Alexander Vladimirovich Gasnikov, Dmitry A. Kovalev
Format: Article
Language: Russian
Published: Institute of Computer Science, 2018-06-01
Series: Компьютерные исследования и моделирование (Computer Research and Modeling)
Online Access: http://crm.ics.org.ru/uploads/crmissues/crm_2018_3/2018_01_04.pdf
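For context on the abstract above: in this line of work, the lower bound for methods that use derivatives up to order p is typically of order 1/N^((3p+1)/2) after N iterations; for p = 1 this recovers the classical O(1/N^2) rate of accelerated gradient methods. As a concrete illustration of the method class the abstract names, here is a minimal Python sketch of a damped Newton iteration for smooth convex minimization; the objective, function names, and parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50, damping=1.0):
    """Minimal damped Newton sketch: x <- x - damping * H(x)^{-1} g(x).

    `grad` and `hess` return the gradient vector and Hessian matrix of the
    objective at x; all names here are hypothetical, for illustration only.
    """
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop once the gradient is small
            break
        # Solve H(x) d = g(x) for the Newton direction, then take a damped step.
        x = x - damping * np.linalg.solve(hess(x), g)
    return x

# Usage on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x, whose
# unique minimizer solves A x = b; pure Newton converges in one step here.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = newton_minimize(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```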
Similar Items
- The global rate of convergence for optimal tensor methods in smooth convex optimization
  by: Alexander Vladimirovich Gasnikov, et al. Published: (2018-12-01)
- On the Convergence Rate of Quasi-Newton Methods on Strongly Convex Functions with Lipschitz Gradient
  by: Vladimir Krutikov, et al. Published: (2023-11-01)
- On the local convergence of the Modified Newton method
  by: Măruşter Ştefan. Published: (2019-06-01)
- Local convergence for composite Chebyshev-type methods
  by: Santhosh George, et al. Published: (2018-09-01)
- Convergence Analysis of Weighted-Newton Methods of Optimal Eighth Order in Banach Spaces
  by: Janak Raj Sharma, et al. Published: (2019-02-01)