An efficient projection for l1,∞ regularization


Bibliographic Details
Main Authors: Quattoni, Ariadna; Carreras Perez, Xavier; Collins, Michael
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: Association for Computing Machinery, 2010
Online Access: http://hdl.handle.net/1721.1/59367
Description
Summary: In recent years the l1,∞ norm has been proposed for joint regularization. In essence, this type of regularization aims to extend the l1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of l1,∞-regularized problems. The main challenge in developing such a method lies in computing efficient projections onto the l1,∞ ball. We present an algorithm that works in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm on a multi-task image annotation problem. Our results show that l1,∞ regularization leads to better performance than both l2 and l1 regularization and that it is effective in discovering jointly sparse solutions.
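
To make the abstract's central operation concrete, the following is a rough Python/NumPy sketch of Euclidean projection onto the l1,∞ ball. It uses the parametric form of the solution described in the paper (each row of the parameter matrix is capped at a value mu_i, and every nonzero row sheds the same amount of mass theta), but for simplicity it replaces the paper's exact O(n log n) sorting procedure with a bisection on theta, so it is only accurate to a tolerance. The function name project_l1_inf and all implementation details are illustrative assumptions, not the authors' released code.

    import numpy as np

    def project_l1_inf(A, C, tol=1e-8):
        # Project matrix A (rows = features, columns = tasks) onto the set
        # {B : sum_i max_j |B_ij| <= C}, i.e. the l1,inf ball of radius C.
        S = np.abs(A)                  # work with magnitudes; restore signs at the end
        if S.max(axis=1).sum() <= C:
            return A.copy()            # already inside the ball
        R = -np.sort(-S, axis=1)       # each row sorted in decreasing order
        cums = np.cumsum(R, axis=1)    # row-wise prefix sums
        n_rows, n_cols = S.shape

        def caps(theta):
            # For each row i, find mu_i >= 0 such that the row sheds exactly
            # theta mass: sum_j max(R[i, j] - mu_i, 0) = theta
            # (mu_i = 0 when the whole row holds less than theta in total).
            mu = np.zeros(n_rows)
            for i in range(n_rows):
                if theta >= cums[i, -1]:
                    continue           # entire row is zeroed out
                for k in range(1, n_cols + 1):
                    m = (cums[i, k - 1] - theta) / k
                    if k == n_cols or R[i, k] <= m:
                        mu[i] = m
                        break
            return mu

        # sum_i mu_i(theta) decreases monotonically from ||A||_{1,inf} at
        # theta = 0 down to 0, so bisect for the theta that meets budget C.
        lo, hi = 0.0, cums[:, -1].max()
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if caps(mid).sum() > C:
                lo = mid               # not enough mass removed yet
            else:
                hi = mid
        mu = caps(hi)
        return np.sign(A) * np.minimum(S, mu[:, None])

In the projected gradient method the abstract describes, a sketch like this would be called once per iteration: take a gradient step on the loss and then map the parameter matrix back onto the ball, e.g. W = project_l1_inf(W - lr * grad, C).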