Efficient discriminative learning of parametric nearest neighbor classifiers

Bibliographic Details
Main Authors: Zhang, Z.; Sturgess, P.; Sengupta, S.; Crook, N.; Torr, P. H. S.
Format: Conference item
Language: English
Published: IEEE, 2012
Institution: University of Oxford
Record ID: oxford-uuid:52e2b4a5-51fe-486c-a049-c84733de1e28

Description

Linear SVMs are efficient in both training and testing; however, the data in real applications is rarely linearly separable. Non-linear kernel SVMs, in turn, are too computationally intensive for applications with large-scale data sets. Recently, locally linear classifiers have gained popularity due to their efficiency while remaining competitive with kernel methods. The vanilla nearest neighbor algorithm is one of the simplest locally linear classifiers, but it lacks robustness to the noise often present in real-world data. In this paper, we introduce a novel local classifier, the Parametric Nearest Neighbor (P-NN), and its extension, the Ensemble of P-NN (EP-NN). We parameterize the nearest neighbor algorithm based on the minimum weighted squared Euclidean distances between the data points and the prototypes, where each prototype is represented as a locally linear combination of some data points. Our method jointly learns the prototypes and the classifier parameters discriminatively via a max-margin formulation, which enables our classifiers to approximate nonlinear classification decision boundaries locally. During testing, the computational complexity of both classifiers is linear in the product of the data dimension and the number of prototypes. Our classification results on MNIST, USPS, LETTER, and Chars74K are comparable to, and in some cases better than, those of many other methods, including state-of-the-art locally linear classifiers.
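
The decision rule summarized above is simple enough to sketch: at test time, a point x is assigned the label of the prototype p_j minimizing the weighted squared Euclidean distance w_j * ||x - p_j||^2. The Python sketch below is illustrative only, not the authors' implementation: the per-prototype scalar weights and per-prototype class labels are assumptions made for this toy example, and the learned locally linear construction of the prototypes (as well as the EP-NN ensemble) is not shown.

    import numpy as np

    def pnn_predict(x, prototypes, weights, labels):
        # Toy sketch of a parametric nearest-neighbor decision rule
        # (illustrative; not the paper's actual P-NN implementation).
        #   x          : (d,)   test point
        #   prototypes : (m, d) learned prototypes
        #   weights    : (m,)   assumed per-prototype scalar weights
        #   labels     : (m,)   assumed class label of each prototype
        # Weighted squared Euclidean distance to every prototype; the
        # cost is O(m * d), i.e. linear in the product of the data
        # dimension and the number of prototypes, as the abstract states.
        d2 = weights * np.sum((prototypes - x) ** 2, axis=1)
        return labels[np.argmin(d2)]

    # Hypothetical toy usage: three 2-D prototypes from two classes.
    P = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
    w = np.array([1.0, 0.5, 2.0])
    y = np.array([0, 1, 0])
    print(pnn_predict(np.array([0.9, 0.8]), P, w, y))  # prints 1

Note that the sketch covers only the test-time rule; the max-margin joint learning of prototypes and weights described in the abstract is the substance of the paper and is not reproduced here.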