Consensus-based optimization methods converge globally
In this paper we study consensus-based optimization (CBO), which is a multiagent metaheuristic derivative-free optimization method that can globally minimize nonconvex nonsmooth functions and is amenable to theoretical analysis. Based on an experimentally supported intuition that, on average, CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we devise a novel technique for proving the convergence to the global minimizer in mean-field law for a rich class of objective functions. The result unveils internal mechanisms of CBO that are responsible for the success of the method. In particular, we prove that CBO performs a convexification of a large class of optimization problems as the number of optimizing agents goes to infinity. Furthermore, we improve prior analyses by requiring mild assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a core component of this analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest. From the result of CBO convergence in mean-field law, it becomes apparent that the hardness of any global optimization problem is necessarily encoded in the rate of the mean-field approximation, for which we provide a novel probabilistic quantitative estimate. The combination of these results allows us to obtain probabilistic global convergence guarantees of the numerical CBO method.
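As a rough illustration of the dynamics the abstract describes (agents drifting toward a Gibbs-weighted consensus point, perturbed by noise scaled with the distance to it), the following minimal sketch is not the authors' implementation: function names and all parameter values are illustrative choices.

```python
import numpy as np

def cbo_minimize(f, dim=2, n_agents=100, steps=2000,
                 alpha=50.0, lam=1.0, sigma=0.7, dt=0.01, seed=0):
    """Toy consensus-based optimization (CBO) with isotropic noise.

    A sketch of the scheme analyzed in the paper; parameters are
    illustrative, not the values used by the authors.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_agents, dim))   # initial agent positions
    for _ in range(steps):
        fX = f(X)
        w = np.exp(-alpha * (fX - fX.min()))           # stabilized Gibbs weights
        v = (w[:, None] * X).sum(axis=0) / w.sum()     # weighted consensus point
        drift = -lam * (X - v) * dt                    # pull agents toward v
        noise = (sigma * np.linalg.norm(X - v, axis=1, keepdims=True)
                 * np.sqrt(dt) * rng.standard_normal(X.shape))
        X = X + drift + noise
    return v

# Example: a nonconvex, multimodal Rastrigin-type objective (global minimum at 0)
def rastrigin(X):
    return 10 * X.shape[1] + (X**2 - 10 * np.cos(2 * np.pi * X)).sum(axis=1)

x_star = cbo_minimize(rastrigin)
```

Large `alpha` concentrates the weights on the currently best agent, which is what connects the consensus point to the Laplace principle discussed in the abstract.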
Authors: | Fornasier, M; Klock, T; Riedl, K |
---|---|
Format: | Journal article |
Language: | English |
Published: | Society for Industrial and Applied Mathematics, 2024 |
author | Fornasier, M; Klock, T; Riedl, K |
collection | OXFORD |
description | In this paper we study consensus-based optimization (CBO), which is a multiagent metaheuristic derivative-free optimization method that can globally minimize nonconvex nonsmooth functions and is amenable to theoretical analysis. Based on an experimentally supported intuition that, on average, CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we devise a novel technique for proving the convergence to the global minimizer in mean-field law for a rich class of objective functions. The result unveils internal mechanisms of CBO that are responsible for the success of the method. In particular, we prove that CBO performs a convexification of a large class of optimization problems as the number of optimizing agents goes to infinity. Furthermore, we improve prior analyses by requiring mild assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a core component of this analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest. From the result of CBO convergence in mean-field law, it becomes apparent that the hardness of any global optimization problem is necessarily encoded in the rate of the mean-field approximation, for which we provide a novel probabilistic quantitative estimate. The combination of these results allows us to obtain probabilistic global convergence guarantees of the numerical CBO method. |
id | oxford-uuid:51395a4a-9323-44ed-91fd-2433304279f3 |
institution | University of Oxford |