Distributed optimization of multi-class SVMs.

Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same does not hold, however, for the so-called all-in-one SVMs, whic...
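The embarrassingly parallel structure of one-vs.-rest training mentioned in the abstract can be sketched as follows: each class yields an independent binary problem (class c vs. the rest), so the binary SVMs can be trained concurrently and predictions taken as the argmax of the decision values. This is an illustrative sketch on synthetic data, not the paper's distributed implementation; the toy data, the subgradient-descent solver, and all hyperparameters are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)

# Toy 3-class data: Gaussian blobs around distinct centers (illustrative only).
centers = np.array([[0.0, 4.0], [4.0, 0.0], [-4.0, -4.0]])
X = np.vstack([c + rng.normal(size=(50, 2)) for c in centers])
y = np.repeat(np.arange(3), 50)

def train_binary_svm(c, epochs=300, lam=0.01, eta=0.1):
    """Train 'class c vs. rest' by subgradient descent on the L2-regularized hinge loss."""
    t = np.where(y == c, 1.0, -1.0)            # +1 for class c, -1 for all others
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(X)
    for _ in range(epochs):
        viol = t * (X @ w + b) < 1.0           # examples violating the margin
        grad_w = lam * w - (t[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -t[viol].sum() / n
        w -= eta * grad_w
        b -= eta * grad_b
    return w, b

# The binary problems share nothing but the (read-only) data,
# so they can run in parallel, one task per class.
with ThreadPoolExecutor() as ex:
    models = list(ex.map(train_binary_svm, range(3)))

# Multi-class prediction: argmax over the per-class decision values.
scores = np.column_stack([X @ w + b for w, b in models])
pred = scores.argmax(axis=1)
accuracy = (pred == y).mean()
```

All-in-one formulations couple the classes in a single joint objective, which is exactly why this per-class decomposition no longer applies to them directly.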

Bibliographic Details
Main Authors: Maximilian Alber, Julian Zimmert, Urun Dogan, Marius Kloft
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2017-01-01
Series: PLoS ONE
Online Access:http://europepmc.org/articles/PMC5453486?pdf=render