Learning poisson binomial distributions
We consider a basic problem in unsupervised learning: learning an unknown Poisson Binomial Distribution. A Poisson Binomial Distribution (PBD) over {0,1,...,n} is the distribution of a sum of n independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations.
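The abstract describes learning an unknown PBD to accuracy ε from independent draws. As a minimal sketch of the problem setup only (sampling from a PBD and measuring distance between distributions on {0,...,n}; this is an illustration, not the paper's learning algorithm), assuming total variation distance as the accuracy measure:

```python
import random
from collections import Counter

def pbd_sample(ps, rng=None):
    """One draw from the Poisson Binomial Distribution with parameters ps:
    the sum of n independent Bernoulli(p_i) random variables."""
    rng = rng or random
    return sum(rng.random() < p for p in ps)

def empirical_pbd(ps, m, seed=0):
    """Empirical distribution over {0,...,n} built from m i.i.d. draws."""
    rng = random.Random(seed)
    counts = Counter(pbd_sample(ps, rng) for _ in range(m))
    return {k: c / m for k, c in counts.items()}

def total_variation(p, q, n):
    """Total variation distance between two distributions on {0,...,n}."""
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in range(n + 1))
```

For example, `empirical_pbd([0.5] * 4, 2000)` approximates the Binomial(4, 1/2) distribution; the paper's contribution is an estimator whose sample complexity is independent of n, unlike the naive empirical histogram sketched here.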
Main Authors: | Daskalakis, Constantinos; Diakonikolas, Ilias; Servedio, Rocco A. |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | en_US |
Published: | Association for Computing Machinery (ACM), 2012 |
Online Access: | http://hdl.handle.net/1721.1/72345 https://orcid.org/0000-0002-5451-0490 |
_version_ | 1811071455485493248 |
---|---|
author | Daskalakis, Constantinos Diakonikolas, Ilias Servedio, Rocco A. |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
author_facet | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory Daskalakis, Constantinos Diakonikolas, Ilias Servedio, Rocco A. |
author_sort | Daskalakis, Constantinos |
collection | MIT |
description | We consider a basic problem in unsupervised learning: learning an unknown Poisson Binomial Distribution. A Poisson Binomial Distribution (PBD) over {0,1,...,n} is the distribution of a sum of n independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 and are a natural n-parameter generalization of the familiar Binomial Distribution. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal.
We essentially settle the complexity of the learning problem for this basic class of distributions. As our main result we give a highly efficient algorithm which learns to ε-accuracy using O(1/ε³) samples, independent of n. The running time of the algorithm is quasilinear in the size of its input data, i.e., Õ(log(n)/ε³) bit-operations (observe that each draw from the distribution is a log(n)-bit string). This is nearly optimal, since any algorithm must use Ω(1/ε²) samples. We also give positive and negative results for some extensions of this learning problem. |
first_indexed | 2024-09-23T08:51:16Z |
format | Article |
id | mit-1721.1/72345 |
institution | Massachusetts Institute of Technology |
language | en_US |
last_indexed | 2024-09-23T08:51:16Z |
publishDate | 2012 |
publisher | Association for Computing Machinery (ACM) |
record_format | dspace |
spelling | mit-1721.1/72345 2022-09-26T08:45:57Z Learning poisson binomial distributions Daskalakis, Constantinos Diakonikolas, Ilias Servedio, Rocco A. Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory Daskalakis, Constantinos We consider a basic problem in unsupervised learning: learning an unknown Poisson Binomial Distribution. A Poisson Binomial Distribution (PBD) over {0,1,...,n} is the distribution of a sum of n independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 and are a natural n-parameter generalization of the familiar Binomial Distribution. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal. We essentially settle the complexity of the learning problem for this basic class of distributions. As our main result we give a highly efficient algorithm which learns to ε-accuracy using O(1/ε³) samples, independent of n. The running time of the algorithm is quasilinear in the size of its input data, i.e., Õ(log(n)/ε³) bit-operations (observe that each draw from the distribution is a log(n)-bit string). This is nearly optimal, since any algorithm must use Ω(1/ε²) samples. We also give positive and negative results for some extensions of this learning problem. National Science Foundation (U.S.). Career Award (CCF-0953960) National Science Foundation (U.S.). Career Award (CCF-1101491) Alfred P. Sloan Foundation. Fellowship 2012-08-27T16:22:20Z 2012-08-27T16:22:20Z 2012-05 Article http://purl.org/eprint/type/ConferencePaper 978-1-4503-1245-5 http://hdl.handle.net/1721.1/72345 Constantinos Daskalakis, Ilias Diakonikolas, and Rocco A. Servedio. 2012. Learning poisson binomial distributions. In Proceedings of the 44th symposium on Theory of Computing (STOC '12). 
ACM, New York, NY, USA, 709-728. https://orcid.org/0000-0002-5451-0490 en_US http://dx.doi.org/10.1145/2213977.2214042 Proceedings of the 44th symposium on Theory of Computing (STOC '12) Creative Commons Attribution-Noncommercial-Share Alike 3.0 http://creativecommons.org/licenses/by-nc-sa/3.0/ application/pdf Association for Computing Machinery (ACM) arXiv |
spellingShingle | Daskalakis, Constantinos Diakonikolas, Ilias Servedio, Rocco A. Learning poisson binomial distributions |
title | Learning poisson binomial distributions |
title_full | Learning poisson binomial distributions |
title_fullStr | Learning poisson binomial distributions |
title_full_unstemmed | Learning poisson binomial distributions |
title_short | Learning poisson binomial distributions |
title_sort | learning poisson binomial distributions |
url | http://hdl.handle.net/1721.1/72345 https://orcid.org/0000-0002-5451-0490 |
work_keys_str_mv | AT daskalakisconstantinos learningpoissonbinomialdistributions AT diakonikolasilias learningpoissonbinomialdistributions AT servedioroccoa learningpoissonbinomialdistributions |