Simple, efficient, and neural algorithms for sparse coding
Sparse coding is a basic task in many fields, including signal processing, neuroscience, and machine learning, where the goal is to learn a basis that enables a sparse representation of a given set of data, if one exists. Its standard formulation is as a non-convex optimization problem which is solved in practice by heuristics based on alternating minimization; a minimal code sketch of that loop follows the record fields below.
Main Authors: | Arora, Sanjeev; Ge, Rong; Ma, Tengyu; Moitra, Ankur |
---|---|
Other Authors: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Format: | Article |
Published: | Proceedings of Machine Learning Research, 2018 |
Online Access: | http://hdl.handle.net/1721.1/115969 https://orcid.org/0000-0001-7047-0495 |
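To make the abstract's central object concrete, here is a minimal sketch of an alternating minimization loop for sparse coding, in Python/NumPy. This is an illustrative reconstruction, not the paper's exact update rules: the hard-thresholding decoding step, the gradient-based dictionary update, and all names (`alternating_minimization`, `Y`, `A`, `X`, `k`, `lr`) are assumptions chosen for the standard formulation Y ≈ AX with k-sparse columns of X.

```python
import numpy as np

def alternating_minimization(Y, m, k, iters=50, lr=0.1, seed=0):
    """Illustrative sparse coding by alternating minimization (not the
    paper's exact updates). Given data Y (n x p), learn a dictionary
    A (n x m) and k-sparse codes X (m x p) with Y ~= A @ X.
    Decode: hard-threshold A.T @ Y to the k largest-magnitude entries
    per sample. Update: one gradient step on ||Y - A X||_F^2."""
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)              # unit-norm columns
    for _ in range(iters):
        # Decoding step: keep the k largest-magnitude coefficients of
        # each sample's correlation with the current dictionary.
        X = A.T @ Y
        kth = -np.sort(-np.abs(X), axis=0)[k - 1]  # k-th largest |entry| per column
        X[np.abs(X) < kth] = 0.0
        # Dictionary update step: gradient of 0.5 * ||Y - A @ X||_F^2.
        A -= lr * (A @ X - Y) @ X.T / p
        A /= np.linalg.norm(A, axis=0)          # renormalize columns
    return A, X

# Toy usage: recover a random 64-atom dictionary from 3-sparse samples.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A_true = rng.standard_normal((32, 64))
    A_true /= np.linalg.norm(A_true, axis=0)
    X_true = np.zeros((64, 2000))
    for j in range(2000):
        X_true[rng.choice(64, 3, replace=False), j] = rng.standard_normal(3)
    A_hat, _ = alternating_minimization(A_true @ X_true, m=64, k=3)
```

The thresholding decoder here stands in for whichever sparse recovery routine one prefers; such simple decode-then-update loops are the family of heuristics the abstract says the paper's framework analyzes.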
author | Arora, Sanjeev; Ge, Rong; Ma, Tengyu; Moitra, Ankur |
author2 | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
collection | MIT |
description | Sparse coding is a basic task in many fields, including signal processing, neuroscience, and machine learning, where the goal is to learn a basis that enables a sparse representation of a given set of data, if one exists. Its standard formulation is as a non-convex optimization problem which is solved in practice by heuristics based on alternating minimization. Recent work has resulted in several algorithms for sparse coding with provable guarantees, but somewhat surprisingly these are outperformed by the simple alternating minimization heuristics. Here we give a general framework for understanding alternating minimization which we leverage to analyze existing heuristics and to design new ones also with provable guarantees. Some of these algorithms seem implementable on simple neural architectures, which was the original motivation of Olshausen and Field (1997a) in introducing sparse coding. We also give the first efficient algorithm for sparse coding that works almost up to the information-theoretic limit for sparse recovery on incoherent dictionaries. All previous algorithms that approached or surpassed this limit run in time exponential in some natural parameter. Finally, our algorithms improve upon the sample complexity of existing approaches. We believe that our analysis framework will have applications in other settings where simple iterative algorithms are used. |
format | Article |
id | mit-1721.1/115969 |
institution | Massachusetts Institute of Technology |
publishDate | 2018 |
publisher | Proceedings of Machine Learning Research |
citation | Arora, Sanjeev et al. "Simple, efficient, and neural algorithms for sparse coding." Proceedings of Machine Learning Research 40 (2015): 113-149 © 2015 The Authors |
issn | 1938-7228 |
type | Conference paper (http://purl.org/eprint/type/ConferencePaper) |
departments | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science; Massachusetts Institute of Technology. Department of Mathematics |
rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. |
file format | application/pdf |
source url | http://proceedings.mlr.press/v40/ |
title | Simple, efficient, and neural algorithms for sparse coding |
url | http://hdl.handle.net/1721.1/115969 https://orcid.org/0000-0001-7047-0495 |