How consensus-based optimization can be interpreted as a stochastic relaxation of gradient descent
We provide a novel analytical perspective on the theoretical understanding of gradient-based learning algorithms by interpreting consensus-based optimization (CBO), a recently proposed multi-particle derivative-free optimization method, as a stochastic relaxation of gradient descent. Remarkably, we...
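For readers unfamiliar with the method named in the abstract, the following is a minimal sketch of the standard isotropic CBO dynamics from the CBO literature, discretized with an Euler–Maruyama scheme. It is not the construction of the paper above; all parameter names and values (`lam`, `sigma`, `alpha`, `dt`) are illustrative assumptions.

```python
import numpy as np

def cbo_step(X, f, lam=1.0, sigma=0.7, alpha=30.0, dt=0.01, rng=None):
    """One Euler-Maruyama step of isotropic consensus-based optimization (CBO).

    X: (N, d) array of particle positions; f: objective mapping (N, d) -> (N,).
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = f(X)
    # Consensus point: a softmin-weighted average of the particles, which
    # concentrates on the best particle as alpha grows (shifted for stability).
    w = np.exp(-alpha * (fx - fx.min()))
    m = (w[:, None] * X).sum(axis=0) / w.sum()
    # Drift each particle toward the consensus point; the noise term scales
    # with the distance to consensus, so exploration vanishes at consensus.
    diff = X - m
    noise = rng.standard_normal(X.shape)
    return (X - lam * diff * dt
            + sigma * np.linalg.norm(diff, axis=1, keepdims=True)
            * np.sqrt(dt) * noise)

# Usage: minimize a shifted sphere function with 50 particles in 2 dimensions.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda X: ((X - 1.0) ** 2).sum(axis=1)
    X = rng.uniform(-3.0, 3.0, size=(50, 2))
    for _ in range(500):
        X = cbo_step(X, f, rng=rng)
    print(X.mean(axis=0))  # expected to be close to the minimizer (1, 1)
```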
Main Authors: Riedl, K; Klock, T; Geldhauser, C; Fornasier, M
Format: Conference item
Language: English
Published: OpenReview, 2024
Similar Items
- Consensus-based optimization methods converge globally
  by: Fornasier, M, et al.
  Published: (2024)
- Convergence of anisotropic consensus-based optimization in mean-field law
  by: Fornasier, M, et al.
  Published: (2022)
- Carathéodory sampling for stochastic gradient descent
  by: Cosentino, F, et al.
  Published: (2020)
- Byzantine-resilient decentralized stochastic gradient descent
  by: Guo, Shangwei, et al.
  Published: (2024)