Multi-task multi-sample learning
<p>In the exemplar SVM (E-SVM) approach of Malisiewicz et al., ICCV 2011, an ensemble of SVMs is learnt, with each SVM trained independently using only a single positive sample and all negative samples for the class. In this paper we develop a multi-sample learning (MSL) model which enables joint regularization of the E-SVMs without any additional cost over the original ensemble learning. The advantage of the MSL model is that the degree of sharing between positive samples can be controlled, such that the classification performance of either an ensemble of E-SVMs (sample independence) or a standard SVM (all positive samples used) is reproduced. However, between these two limits the model can exceed the performance of either. This MSL framework is inspired by multi-task learning approaches.</p>
Main authors: | Aytar, Y; Zisserman, A |
---|---|
Format: | Conference item |
Language: | English |
Published: | Springer, 2015 |
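The controllable-sharing idea in the abstract can be illustrated with a mean-regularized objective: each exemplar keeps its own classifier, trained on a single positive and all negatives, while a mixing parameter pulls the exemplar classifiers toward a shared vector. This is a toy sketch on synthetic data, not the authors' formulation; the variable names, the exact regularizer, and all hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal([2.0, 2.0], 0.3, size=(3, 2))     # 3 positive exemplars
neg = rng.normal([-1.0, -1.0], 0.8, size=(20, 2))  # negatives shared by all

def train_msl(mu, lam=0.5, lr=0.005, steps=3000):
    """Subgradient descent on an MSL-style objective (illustrative form):
    exemplar k keeps its own w_k, trained on one positive plus all
    negatives, with penalty lam * (mu*||w_k||^2 + (1-mu)*||w_k - w_bar||^2).
    mu -> 1 decouples the exemplars (ensemble of E-SVMs); mu -> 0 ties
    every w_k to the shared vector w_bar (behaving like one standard SVM)."""
    K, d = pos.shape
    W = np.zeros((K, d))
    w_bar = np.zeros(d)
    for _ in range(steps):
        gW = np.zeros_like(W)
        g_bar = 2.0 * lam * w_bar
        for k in range(K):
            w = W[k]
            if 1.0 - pos[k] @ w > 0:           # hinge on the single positive
                gW[k] -= pos[k]
            active = (1.0 + neg @ w) > 0       # hinge on shared negatives
            gW[k] += neg[active].sum(axis=0)
            gW[k] += 2.0 * lam * (mu * w + (1.0 - mu) * (w - w_bar))
            g_bar -= 2.0 * lam * (1.0 - mu) * (w - w_bar)
        W -= lr * gW
        w_bar -= lr * g_bar
    return W, w_bar
```

Sweeping `mu` between 0 and 1 moves the model between the two limiting cases described in the abstract; each exemplar's classifier should still separate its positive from the negatives at intermediate values.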
_version_ | 1826316078676967424 |
---|---|
author | Aytar, Y; Zisserman, A |
author_facet | Aytar, Y; Zisserman, A |
author_sort | Aytar, Y |
collection | OXFORD |
description | <p>In the exemplar SVM (E-SVM) approach of Malisiewicz et al., ICCV 2011, an ensemble of SVMs is learnt, with each SVM trained independently using only a single positive sample and all negative samples for the class. In this paper we develop a multi-sample learning (MSL) model which enables joint regularization of the E-SVMs without any additional cost over the original ensemble learning. The advantage of the MSL model is that the degree of sharing between positive samples can be controlled, such that the classification performance of either an ensemble of E-SVMs (sample independence) or a standard SVM (all positive samples used) is reproduced. However, between these two limits the model can exceed the performance of either. This MSL framework is inspired by multi-task learning approaches.</p>
<p>We also introduce a multi-task extension to MSL and develop a multi-task multi-sample learning (MTMSL) model that encourages both sharing between classes and sharing between sample-specific classifiers within each class. Both MSL and MTMSL have convex objective functions.</p>
<p>The MSL and MTMSL models are evaluated on standard benchmarks including the MNIST, ‘Animals with attributes’ and the PASCAL VOC 2007 datasets. They achieve a significant performance improvement over both a standard SVM and an ensemble of E-SVMs.</p> |
first_indexed | 2024-12-09T03:37:28Z |
format | Conference item |
id | oxford-uuid:820a87eb-112a-4098-801e-d886d7947f17 |
institution | University of Oxford |
language | English |
last_indexed | 2024-12-09T03:37:28Z |
publishDate | 2015 |
publisher | Springer |
record_format | dspace |
spelling | oxford-uuid:820a87eb-112a-4098-801e-d886d7947f172024-12-03T15:36:02ZMulti-task multi-sample learningConference itemhttp://purl.org/coar/resource_type/c_5794uuid:820a87eb-112a-4098-801e-d886d7947f17EnglishSymplectic ElementsSpringer2015Aytar, YZisserman, A |
spellingShingle | Aytar, Y Zisserman, A Multi-task multi-sample learning |
title | Multi-task multi-sample learning |
title_full | Multi-task multi-sample learning |
title_fullStr | Multi-task multi-sample learning |
title_full_unstemmed | Multi-task multi-sample learning |
title_short | Multi-task multi-sample learning |
title_sort | multi task multi sample learning |
work_keys_str_mv | AT aytary multitaskmultisamplelearning AT zissermana multitaskmultisamplelearning |
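The MTMSL extension described in the abstract adds a second level of sharing: between classes as well as between the sample-specific classifiers inside each class. One common way to express such two-level sharing is an additive decomposition with a separate quadratic penalty per level. The sketch below is illustrative only, not the paper's exact objective; the decomposition and all names (`w0`, `V`, `U`, the `lam_*` weights) are assumptions.

```python
import numpy as np

def mtmsl_penalty(w0, V, U, lam0, lam_c, lam_s):
    """Two-level quadratic regularizer (illustrative, hypothetical form):
    the classifier for sample k of class c is w0 + V[c] + U[c][k].
    w0 is shared by all classes, V[c] by all samples of class c, and
    U[c][k] is sample-specific; the lam_* weights control how strongly
    each level is shrunk toward zero, i.e. how much is shared."""
    pen = lam0 * w0 @ w0
    for c, v in enumerate(V):
        pen += lam_c * v @ v          # class-level component
        for u in U[c]:
            pen += lam_s * u @ u      # sample-specific component
    return pen
```

A large `lam_s` pushes the sample-specific parts toward zero (more sharing within a class), while a large `lam_c` pushes the class-level parts toward zero (more sharing across classes). Each term is a convex quadratic, consistent with the abstract's note that both objectives are convex.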