How consensus-based optimization can be interpreted as a stochastic relaxation of gradient descent

We provide a novel analytical perspective on the theoretical understanding of gradient-based learning algorithms by interpreting consensus-based optimization (CBO), a recently proposed multi-particle derivative-free optimization method, as a stochastic relaxation of gradient descent. Remarkably, we...
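To make the method concrete, here is a minimal toy sketch of a CBO iteration: particles drift toward a Gibbs-weighted consensus point and diffuse with noise proportional to their distance from it. This is an illustrative reconstruction of the standard (anisotropic) CBO dynamics, not the authors' code; all parameter values below are arbitrary choices for the example.

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=50, n_steps=500, dt=0.02,
                 lam=1.0, sigma=0.5, alpha=50.0, seed=0):
    """Toy consensus-based optimization (CBO) sketch.

    Each particle is pulled toward a weighted consensus point and
    perturbed by noise scaled by its distance from that point.
    Illustrative only; parameters are ad hoc for this example.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_particles, dim)) * 3.0  # random initialization
    for _ in range(n_steps):
        fx = np.array([f(x) for x in X])
        # Gibbs-type weights concentrate mass on low-objective particles
        w = np.exp(-alpha * (fx - fx.min()))
        consensus = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - consensus
        noise = rng.normal(size=X.shape)
        # drift toward consensus + componentwise (anisotropic) diffusion
        X = X - lam * dt * diff + sigma * np.sqrt(dt) * diff * noise
    return consensus

# Example: minimize a shifted quadratic with minimizer at (1, -2);
# the consensus point should end up near the minimizer.
x_star = cbo_minimize(lambda x: np.sum((x - np.array([1.0, -2.0]))**2),
                      dim=2)
```

Note that the update uses only evaluations of `f`, no gradients, which is what makes CBO derivative-free; the "stochastic relaxation of gradient descent" reading discussed in the abstract concerns the behavior of this dynamics, not an explicit gradient computation.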


Bibliographic details
Main authors: Riedl, K, Klock, T, Geldhauser, C, Fornasier, M
Format: Conference item
Language: English
Published: OpenReview 2024
