Support Vector Machines: Training and Applications


Bibliographic Details
Main Authors: Osuna, Edgar, Freund, Robert, Girosi, Federico
Language: en_US
Published: 2004
Subjects: AI; MIT; Artificial Intelligence; Pattern recognition; Support Vector Machine; Classification; Detection
Online Access: http://hdl.handle.net/1721.1/7290
author Osuna, Edgar
Freund, Robert
Girosi, Federico
collection MIT
description The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints, in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem becomes very challenging: the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Training problems arising in some real applications with large data sets are therefore impossible to load into memory, and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
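The dual QP described above can be made concrete with a small sketch. This is an illustrative toy example, not the authors' algorithm: it builds the dense matrix Q for 40 synthetic points and solves the dual with a simple projected-gradient loop, whereas the paper's contribution is a decomposition method (with a second-order Reduced Gradient sub-problem solver) precisely because the dense n-by-n Q does not fit in memory for large n. All names and parameter values here are assumptions made for illustration.

```python
import numpy as np

# Toy data: two well-separated Gaussian blobs, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, size=(20, 2)),
               rng.normal(+2.0, 0.5, size=(20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
n, C = len(y), 1.0

# The dual QP: maximize  sum(a) - 0.5 * a^T Q a  subject to
# 0 <= a_i <= C (box) and sum_i a_i y_i = 0 (linear constraint).
# Q is completely dense -- storing it costs O(n^2) memory, which is
# the bottleneck the paper's decomposition scheme avoids.
Q = (y[:, None] * y[None, :]) * (X @ X.T)   # linear kernel K(x_i, x_j) = x_i . x_j

# Illustrative solver: projected gradient ascent on the dual variables.
a = np.zeros(n)
lr = 1e-3
for _ in range(5000):
    a += lr * (1.0 - Q @ a)   # gradient of the dual objective
    a -= y * (a @ y) / n      # project onto the hyperplane sum(a * y) = 0
    a = np.clip(a, 0.0, C)    # project onto the box [0, C]^n

# Recover the primal separating hyperplane w . x + b = 0.
w = (a * y) @ X
sv = a > 1e-6                 # (approximate) support vectors
b = np.mean(y[sv] - X[sv] @ w)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

In a decomposition scheme, only a small working set of the a_i would be optimized per iteration, with the rest held fixed, so the full Q never needs to be materialized.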
id mit-1721.1/7290
institution Massachusetts Institute of Technology
language en_US
publishDate 2004
spelling mit-1721.1/7290 Support Vector Machines: Training and Applications Osuna, Edgar Freund, Robert Girosi, Federico 1997-03-01 AIM-1602 CBCL-144 38 p. application/postscript application/pdf http://hdl.handle.net/1721.1/7290 en_US
title Support Vector Machines: Training and Applications
topic AI
MIT
Artificial Intelligence
Pattern recognition
Support Vector Machine
Classification
Detection
url http://hdl.handle.net/1721.1/7290
work_keys_str_mv AT osunaedgar supportvectormachinestrainingandapplications
AT freundrobert supportvectormachinestrainingandapplications
AT girosifederico supportvectormachinestrainingandapplications