Regularization Through Feature Knock Out

In this paper, we present and analyze a novel regularization technique based on enhancing our dataset with corrupted copies of the original data. The motivation is that since the learning algorithm lacks information about which parts of the data are reliable, it has to produce more robust classification functions. We then demonstrate how this regularization leads to redundancy in the resulting classifiers, which is somewhat in contrast to the common interpretations of the Occam's razor principle. Using this framework, we propose a simple addition to the gentle boosting algorithm which enables it to work with only a few examples. We test this new algorithm on a variety of datasets and show convincing results.
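The abstract describes augmenting the training set with corrupted copies of the original data but does not spell out the corruption step. As a hypothetical illustration only (the paper defines the exact knockout procedure), one plausible reading — copy a random training example, then "knock out" one randomly chosen feature by replacing its value with that feature's value from another example — can be sketched as follows; the function name `knockout_augment` and the label-preservation choice are assumptions, not the authors' specification:

```python
import numpy as np

def knockout_augment(X, y, n_new, rng=None):
    """Append n_new corrupted copies to the dataset (X, y).

    Each new example duplicates a randomly chosen training example,
    then overwrites one randomly chosen feature with that feature's
    value taken from a second, randomly chosen 'donor' example.
    The new example keeps the label of the example it copies.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    src = rng.integers(0, n, size=n_new)    # examples to copy
    donor = rng.integers(0, n, size=n_new)  # examples donating one feature value
    feat = rng.integers(0, d, size=n_new)   # which feature to knock out

    X_new = X[src].copy()
    X_new[np.arange(n_new), feat] = X[donor, feat]  # the knockout step

    return np.vstack([X, X_new]), np.concatenate([y, y[src]])
```

Under this reading, each corrupted copy differs from some original example in at most one feature, so a learner trained on the enlarged set cannot rely too heavily on any single feature — which is consistent with the redundancy effect the abstract reports.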

Bibliographic Details

Main Authors: Wolf, Lior; Martin, Ian
Language: en_US
Published: 2005
Subjects: AI
Online Access: http://hdl.handle.net/1721.1/30502
Report Numbers: MIT-CSAIL-TR-2004-072; AIM-2004-025; CBCL-242
Date Issued: 2004-11-12
Publisher: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
Formats: application/postscript; application/pdf