How consensus-based optimization can be interpreted as a stochastic relaxation of gradient descent
We provide a novel analytical perspective on the theoretical understanding of gradient-based learning algorithms by interpreting consensus-based optimization (CBO), a recently proposed multi-particle derivative-free optimization method, as a stochastic relaxation of gradient descent. Remarkably, we...
| Main Authors: | Riedl, K, Klock, T, Geldhauser, C, Fornasier, M |
| --- | --- |
| Format: | Conference item |
| Language: | English |
| Published: | OpenReview, 2024 |
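For context, the abstract refers to consensus-based optimization (CBO), in which an ensemble of particles drifts toward a softmin-weighted consensus point and diffuses with noise scaled by the distance to it. The sketch below is a minimal, illustrative NumPy implementation of the standard isotropic CBO dynamics under an Euler-Maruyama discretization; the function name and the parameter values (`lam`, `sigma`, `alpha`, step size, initialization) are generic assumptions for illustration, not the paper's specific setup or its gradient-descent relaxation argument.

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=100, n_steps=1000, dt=0.01,
                 lam=1.0, sigma=0.7, alpha=50.0, seed=0):
    """Minimal sketch of isotropic CBO: particles drift toward a
    softmin-weighted consensus point and diffuse with scaled noise."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))   # initial ensemble
    for _ in range(n_steps):
        fx = np.array([f(x) for x in X])
        w = np.exp(-alpha * (fx - fx.min()))   # Gibbs weights, shifted for stability
        v = w @ X / w.sum()                    # consensus point v_alpha (Laplace principle)
        D = X - v
        # Euler-Maruyama step of dX = -lam (X - v) dt + sigma |X - v| dW
        X = (X - lam * dt * D
             + sigma * np.sqrt(dt) * np.linalg.norm(D, axis=1, keepdims=True)
             * rng.standard_normal(X.shape))
    return v

# Illustrative use on the nonconvex Rastrigin function (global minimum at 0);
# with these assumed parameters the ensemble typically collapses near 0.
rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(cbo_minimize(rastrigin, dim=2))
```

Note that the update needs no gradients of `f`: only function evaluations enter through the Gibbs weights, which is what makes CBO derivative-free and motivates the paper's reading of it as a stochastic relaxation of gradient descent.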
Similar Items
- Consensus-based optimization methods converge globally
  by: Fornasier, M, et al.
  Published: (2024)
- Convergence of anisotropic consensus-based optimization in mean-field law
  by: Fornasier, M, et al.
  Published: (2022)
- Carathéodory sampling for stochastic gradient descent
  by: Cosentino, F, et al.
  Published: (2020)
- Byzantine-resilient decentralized stochastic gradient descent
  by: Guo, Shangwei, et al.
  Published: (2024)