Rare Probability Estimation under Regularly Varying Heavy Tails


Bibliographic Details
Main Authors: Ohannessian, Mesrob I., Dahleh, Munther A.
Other Authors: Massachusetts Institute of Technology. Institute for Data, Systems, and Society
Format: Article
Language: en_US
Published: Journal of Machine Learning Research 2015
Online Access: http://hdl.handle.net/1721.1/99945
https://orcid.org/0000-0002-1470-2148
Description: This paper studies the problem of estimating the probability of symbols that have occurred very rarely, in samples drawn independently from an unknown, possibly infinite, discrete distribution. In particular, we study the multiplicative consistency of estimators, defined as the ratio of the estimate to the true quantity converging to one. We first show that the classical Good-Turing estimator is not universally consistent in this sense, despite enjoying favorable additive properties. We then use Karamata's theory of regular variation to prove that regularly varying heavy tails are sufficient for consistency. At the core of this result is a multiplicative concentration that we establish both by extending the McAllester-Ortiz additive concentration for the missing mass to all rare probabilities and by exploiting regular variation. We also derive a family of estimators which, in addition to being consistent, address some of the shortcomings of the Good-Turing estimator. For example, they perform smoothing implicitly and have the absolute discounting structure of many heuristic algorithms. This also establishes a discrete parallel to extreme value theory, and many of the techniques therein can be adapted to the framework that we set forth.
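For background on the estimator the abstract centers on: the classical Good-Turing estimate of the total probability mass of symbols seen exactly r times is (r + 1) * N_{r+1} / n, where N_k is the number of distinct symbols appearing exactly k times in a sample of size n; r = 0 recovers the missing-mass estimate N_1 / n whose concentration the paper extends. The sketch below is illustrative background, not code from the paper.

```python
from collections import Counter

def good_turing_rare_mass(sample, r):
    """Good-Turing estimate of the total probability mass of all
    symbols observed exactly r times: (r + 1) * N_{r+1} / n.
    r = 0 gives the classical missing-mass estimate N_1 / n."""
    n = len(sample)
    counts = Counter(sample)                 # symbol -> count
    freq_of_freqs = Counter(counts.values()) # count k -> N_k
    return (r + 1) * freq_of_freqs.get(r + 1, 0) / n

sample = ["a", "a", "a", "b", "b", "c", "d"]
# N_1 = 2 (symbols c and d), n = 7, so the estimated missing mass is 2/7.
print(good_turing_rare_mass(sample, 0))
```

The paper's negative result is that this ratio-to-truth need not converge to one for arbitrary distributions; regularly varying heavy tails restore that multiplicative consistency.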
Other Affiliation: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Funding: National Science Foundation (U.S.) (Grant 6922470); United States. Office of Naval Research (Grant 6918937)
Type: Conference Paper
ISSN: 1938-7228
Citation: Ohannessian, Mesrob I., and Munther A. Dahleh. "Rare Probability Estimation under Regularly Varying Heavy Tails." Journal of Machine Learning Research: Workshop and Conference Proceedings 23 (2012), 21.1-21.24.
Proceedings: http://jmlr.org/proceedings/papers/v23/ohannessian12.html
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)