An invariant large margin nearest neighbour classifier
Main authors:
Type: Conference item
Language: English
Published: IEEE, 2007
Abstract: The k-nearest neighbour (kNN) rule is a simple and effective method for multi-way classification that is much used in Computer Vision. However, its performance depends heavily on the distance metric being employed. The recently proposed large margin nearest neighbour (LMNN) classifier [21] learns a distance metric for kNN classification and thereby improves its accuracy. Learning involves optimizing a convex problem using semidefinite programming (SDP). We extend the LMNN framework to incorporate knowledge about invariance of the data. The main contributions of our work are threefold: (i) invariances to multivariate polynomial transformations are incorporated without explicitly adding more training data during learning; these can approximate common transformations such as rotations and affinities; (ii) different regularizers on the parameters being learnt are incorporated; and (iii) for all these variations, we show that the distance metric can still be obtained by solving a convex SDP problem. We call the resulting formulation the invariant LMNN (ILMNN) classifier. We test our approach by learning a metric for matching (i) feature vectors from the standard Iris dataset and (ii) faces obtained from TV video (an episode of 'Buffy the Vampire Slayer'). We compare our method with state-of-the-art classifiers and demonstrate improvements.
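The abstract states that the metric is learnt by solving a convex SDP. As a rough illustration of what such a programme looks like, below is a minimal sketch of the standard LMNN objective (pull same-class target neighbours close, push differently-labelled "impostors" at least a unit margin further away) written with cvxpy. It is not the authors' ILMNN formulation, and the names `lmnn_sdp`, `target_pairs` and `impostor_triples` are illustrative assumptions.

```python
# Hedged sketch of an LMNN-style SDP in cvxpy (illustrative, not the
# authors' ILMNN solver). Each squared distance is linear in M, so the
# whole problem remains a convex semidefinite programme.
import numpy as np
import cvxpy as cp

def lmnn_sdp(X, target_pairs, impostor_triples, c=1.0):
    """X: (n, d) data matrix.
    target_pairs: (i, j) pairs where j is a same-class target neighbour of i.
    impostor_triples: (i, j, l) where l carries a different label from i and
    should end up at least margin 1 further from i than j is."""
    d = X.shape[1]
    M = cp.Variable((d, d), PSD=True)  # metric matrix, constrained PSD

    def sq_dist(i, j):
        # (x_i - x_j)^T M (x_i - x_j); affine in M since the vector is constant
        return cp.quad_form(X[i] - X[j], M)

    pull = sum(sq_dist(i, j) for i, j in target_pairs)    # keep targets close
    push = sum(cp.pos(1 + sq_dist(i, j) - sq_dist(i, l))  # hinge loss on impostors
               for i, j, l in impostor_triples)
    cp.Problem(cp.Minimize(pull + c * push)).solve(solver=cp.SCS)
    return M.value
```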
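Given a learnt PSD matrix M, kNN classification then simply ranks neighbours by the squared distance (x_i - x_j)^T M (x_i - x_j). The following is a minimal sketch on the Iris data mentioned in the abstract, assuming scikit-learn is available; `mahalanobis_knn_predict` is a hypothetical helper, and M is set to the identity (plain Euclidean kNN) purely for the demo, where the SDP above would supply it in practice.

```python
# Minimal sketch: kNN classification under a learnt Mahalanobis-style metric.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

def mahalanobis_knn_predict(X_train, y_train, X_test, M, k=3):
    """Classify each test point by majority vote among its k nearest
    training points under d(x, y) = (x - y)^T M (x - y), with M PSD."""
    preds = []
    for x in X_test:
        diffs = X_train - x                                 # (n, d) differences
        dists = np.einsum('nd,de,ne->n', diffs, M, diffs)   # squared metric distances
        nearest = np.argsort(dists)[:k]                     # k nearest neighbours
        votes = np.bincount(y_train[nearest])               # majority vote over labels
        preds.append(np.argmax(votes))
    return np.array(preds)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
M = np.eye(X.shape[1])  # identity placeholder: recovers ordinary Euclidean kNN
print((mahalanobis_knn_predict(X_tr, y_tr, X_te, M) == y_te).mean())
```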