Deep Frank-Wolfe for neural network optimization
Learning a deep neural network requires solving a challenging optimization problem: it is a high-dimensional, non-convex and non-smooth minimization problem with a large number of terms. The current practice in neural network optimization is to rely on the stochastic gradient descent (SGD) algorithm...
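The abstract above refers to the stochastic gradient descent (SGD) algorithm as current practice; a minimal sketch of the plain SGD update on a toy quadratic loss (all names and the loss here are illustrative, not taken from the paper):

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    """One stochastic gradient descent update: w <- w - lr * grad."""
    return w - lr * grad

# Toy objective: f(w) = 0.5 * ||w - target||^2, whose gradient is (w - target).
target = np.array([1.0, -2.0])
w = np.zeros(2)
for _ in range(100):
    grad = w - target
    w = sgd_step(w, grad)

print(np.allclose(w, target, atol=1e-3))  # prints True: iterates converge to the minimizer
```

In practice the gradient would be estimated from a random mini-batch of training examples, which is what makes the method "stochastic".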
| Main Authors | , , |
|---|---|
| Format | Internet publication |
| Language | English |
| Published | arXiv, 2018 |