Deep Frank-Wolfe for neural network optimization

Learning a deep neural network requires solving a challenging optimization problem: it is a high-dimensional, non-convex and non-smooth minimization problem with a large number of terms. The current practice in neural network optimization is to rely on the stochastic gradient descent (SGD) algorithm...
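The SGD baseline the abstract refers to can be illustrated with a minimal sketch: sample one term of the objective at a time and step against its gradient. This is a generic illustration of the baseline, not the Frank-Wolfe-based method the paper proposes; the toy objective and all names here are invented for the example.

```python
import random

def sgd(grad_fn, w, data, lr=0.1, epochs=50, seed=0):
    """Plain stochastic gradient descent: one sampled term per update."""
    rng = random.Random(seed)
    for _ in range(epochs):
        rng.shuffle(data)          # visit the terms in random order
        for x in data:
            w = w - lr * grad_fn(w, x)
    return w

# Toy objective: mean of (w - x)^2 over the data; the minimizer is the data mean.
data = [1.0, 2.0, 3.0, 4.0]
w_star = sgd(lambda w, x: 2.0 * (w - x), w=0.0, data=list(data))
```

With a constant step size the iterate hovers near the minimizer (here the mean, 2.5) rather than converging exactly; the sensitivity to this step-size choice is part of the tuning burden the paper targets.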

Full description

Bibliographic details

Authors: Berrada, L, Zisserman, A, Kumar, MP
Format: Internet publication
Language: English
Published: arXiv 2018

Deep Frank-Wolfe for neural network optimization, by Berrada, L, Zisserman, A, Mudigonda, P

Published 2019
Conference item