Summary: In recent years the ℓ1,∞ norm has been proposed for joint regularization. In essence, this type of regularization aims to extend the ℓ1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of ℓ1,∞-regularized problems. The main challenge in developing such a method lies in computing efficient projections onto the ℓ1,∞ ball. We present an algorithm that works in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm on a multi-task image annotation problem. Our results show that ℓ1,∞ regularization leads to better performance than both ℓ2 and ℓ1 regularization, and that it is effective in discovering jointly sparse solutions.
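To make the projection step concrete, below is a minimal NumPy sketch of a Euclidean projection onto the ℓ1,∞ ball. It is not the paper's exact O(n log n) algorithm: instead it bisects on the common amount of ℓ1 mass removed from each active row, using the standard KKT structure of the problem (the projection clips each row of |W| at a per-row threshold mu_i with sum_i mu_i = C). The function name proj_l1_inf, the radius parameter C, and the bisection scheme are all assumptions for illustration.

import numpy as np

def proj_l1_inf(W, C, iters=60):
    """Euclidean projection of W onto {B : sum_i max_j |B_ij| <= C}.

    Bisection sketch (assumed, not the paper's exact algorithm): the
    projection clips |W| row-wise at thresholds mu_i, and the l1 mass
    removed from every active row equals a common value theta, with
    sum_i mu_i = C. We bisect on theta.
    """
    A = np.abs(W)
    if A.max(axis=1).sum() <= C:      # already inside the ball
        return W.copy()

    def thresholds(theta):
        # For each row, find mu >= 0 with sum_j max(a_j - mu, 0) = theta.
        mu = np.zeros(A.shape[0])
        for i, a in enumerate(A):
            if theta >= a.sum():
                continue              # whole row is zeroed, mu_i = 0
            s = np.sort(a)[::-1]      # row entries, descending
            css = np.cumsum(s)
            k = np.arange(1, s.size + 1)
            mu_k = (css - theta) / k  # candidate thresholds if top-k exceed mu
            rho = np.nonzero(s > mu_k)[0][-1]
            mu[i] = mu_k[rho]
        return mu

    lo, hi = 0.0, A.sum(axis=1).max()
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if thresholds(theta).sum() > C:
            lo = theta                # removed too little mass; raise theta
        else:
            hi = theta
    mu = thresholds(hi)               # hi side is guaranteed feasible
    return np.sign(W) * np.minimum(A, mu[:, None])

With such a routine, one projected gradient step on a loss gradient G would be W = proj_l1_inf(W - step_size * G, C), which is the overall iteration scheme the abstract describes.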