Bounds between contraction coefficients

Bibliographic Details
Main Authors: Makur, Anuran; Zheng, Lizhong
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2017
Online Access: http://hdl.handle.net/1721.1/112984
https://orcid.org/0000-0002-2978-8116
https://orcid.org/0000-0002-6108-0222
Description
Summary: In this paper, we delineate how the contraction coefficient of the strong data processing inequality for KL divergence can be used to learn likelihood models. We then present an alternative formulation that forces the input KL divergence to vanish, and achieves a contraction coefficient equivalent to the squared maximal correlation using a linear algebraic solution. To analyze the performance loss in using this simple but suboptimal procedure, we bound these coefficients in the discrete and finite regime, and prove their equivalence in the Gaussian regime.
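As context for the quantities compared in the abstract, the following is a sketch of the standard definitions; the notation (a channel W with input distribution P_X, and joint law of (X, Y)) is supplied here for illustration and does not appear in the record:

\[
\eta_{\mathrm{KL}}(P_X, W) \;=\; \sup_{Q_X:\, 0 < D(Q_X \,\|\, P_X) < \infty} \frac{D(Q_X W \,\|\, P_X W)}{D(Q_X \,\|\, P_X)},
\qquad
\rho(X;Y) \;=\; \sup_{\substack{\mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0 \\ \mathbb{E}[f(X)^2] = \mathbb{E}[g(Y)^2] = 1}} \mathbb{E}\!\left[f(X)\, g(Y)\right].
\]

In general, \(\rho(X;Y)^2 \le \eta_{\mathrm{KL}}(P_X, W)\); the "alternative formulation" mentioned in the abstract restricts attention to perturbations whose input KL divergence vanishes, a local regime in which the contraction ratio approaches the squared maximal correlation.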