Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients

The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined threshold. The analysis covers the entire range of meaningful powers in the regularization term as well as in the Hölder exponent for the gradient. The resulting complexity bounds vary according to the regularization power and the assumed Hölder exponent, recovering known results when available.
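For context, the Hölder-gradient setting described in the abstract can be written out explicitly. The symbols below (L, β, σ_k, r, ε) follow common usage in this literature and are illustrative; they are not quoted from the article itself:

```latex
% Hölder continuity of the gradient with exponent \beta \in (0,1]
% (\beta = 1 is the familiar Lipschitz case):
\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|^{\beta}
  \quad \text{for all } x, y \in \mathbb{R}^n.

% A generic regularized model minimized (approximately) at iteration k,
% with regularization power r > 1 and weight \sigma_k > 0:
m_k(s) = f(x_k) + \nabla f(x_k)^{T} s + \frac{\sigma_k}{r}\,\|s\|^{r}.

% The method stops at the first iterate satisfying
% \|\nabla f(x_k)\| \le \epsilon. For \beta = 1 and r = 2 this setting
% recovers the classical O(\epsilon^{-2}) evaluation bound known for
% steepest-descent-type methods on nonconvex smooth problems.
```

The abstract's statement that the bounds "vary according to the regularization power and the assumed Hölder exponent" refers to how the exponent of ε in such bounds depends on r and β; the precise exponents are derived in the article.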


Bibliographic Details
Main Authors: Cartis, C, Gould, N, Toint, P
Format: Journal article
Published: Taylor and Francis 2017
_version_ 1826300394568941568
author Cartis, C
Gould, N
Toint, P
author_facet Cartis, C
Gould, N
Toint, P
author_sort Cartis, C
collection OXFORD
description The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined threshold. The analysis covers the entire range of meaningful powers in the regularization term as well as in the Hölder exponent for the gradient. The resulting complexity bounds vary according to the regularization power and the assumed Hölder exponent, recovering known results when available.
first_indexed 2024-03-07T05:16:30Z
format Journal article
id oxford-uuid:dd62cc80-4bf3-475e-b2e5-4381e745d8b0
institution University of Oxford
last_indexed 2024-03-07T05:16:30Z
publishDate 2017
publisher Taylor and Francis
record_format dspace
spelling oxford-uuid:dd62cc80-4bf3-475e-b2e5-4381e745d8b02022-03-27T09:24:40ZWorst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradientsJournal articlehttp://purl.org/coar/resource_type/c_dcae04bcuuid:dd62cc80-4bf3-475e-b2e5-4381e745d8b0Symplectic Elements at OxfordTaylor and Francis2017Cartis, CGould, NToint, PThe worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined threshold. The analysis covers the entire range of meaningful powers in the regularization term as well as in the Hölder exponent for the gradient. The resulting complexity bounds vary according to the regularization power and the assumed Hölder exponent, recovering known results when available.
spellingShingle Cartis, C
Gould, N
Toint, P
Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
title Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
title_full Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
title_fullStr Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
title_full_unstemmed Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
title_short Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
title_sort worst case evaluation complexity of regularization methods for smooth unconstrained optimization using holder continuous gradients
work_keys_str_mv AT cartisc worstcaseevaluationcomplexityofregularizationmethodsforsmoothunconstrainedoptimizationusingholdercontinuousgradients
AT gouldn worstcaseevaluationcomplexityofregularizationmethodsforsmoothunconstrainedoptimizationusingholdercontinuousgradients
AT tointp worstcaseevaluationcomplexityofregularizationmethodsforsmoothunconstrainedoptimizationusingholdercontinuousgradients