Guessing with Distributed Encoders


Bibliographic Details
Main Authors: Annina Bracher, Amos Lapidoth, Christoph Pfister
Format: Article
Language: English
Published: MDPI AG 2019-03-01
Series: Entropy
Online Access: http://www.mdpi.com/1099-4300/21/3/298
Description
Summary: Two correlated sources emit a pair of sequences, each of which is observed by a different encoder. Each encoder produces a rate-limited description of the sequence it observes, and the two descriptions are presented to a guessing device that repeatedly produces sequence pairs until correct. The number of guesses until correct is random, and it is required that it have a moment (of some prespecified order) that tends to one as the length of the sequences tends to infinity. The description rate pairs that allow this are characterized in terms of the Rényi entropy and the Arimoto–Rényi conditional entropy of the joint law of the sources. This solves the guessing analog of the Slepian–Wolf distributed source-coding problem. The achievability is based on random binning, which is analyzed using a technique by Rosenthal.
ISSN: 1099-4300
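
The rate region in the summary above is stated in terms of the Rényi entropy and the Arimoto–Rényi conditional entropy. As a rough illustration of these two quantities (not of the paper's coding scheme), here is a minimal Python sketch; the doubly symmetric binary source and the choice of order α = 1/2 are illustrative assumptions, not taken from the article.

```python
import math

def renyi_entropy(p, alpha):
    # Rényi entropy of order alpha (in bits), for alpha > 0, alpha != 1:
    #   H_alpha(X) = (1 / (1 - alpha)) * log2( sum_x p(x)^alpha )
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

def arimoto_renyi_cond_entropy(pxy, alpha):
    # Arimoto-Rényi conditional entropy of order alpha (in bits), for a
    # joint pmf given as a dict {(x, y): prob}:
    #   H_alpha(X|Y) = (alpha / (1 - alpha)) *
    #                  log2( sum_y ( sum_x p(x, y)^alpha )^(1 / alpha) )
    ys = {y for (_, y) in pxy}
    inner = sum(
        sum(p ** alpha for (x, yy), p in pxy.items() if yy == y) ** (1 / alpha)
        for y in ys
    )
    return (alpha / (1 - alpha)) * math.log2(inner)

# Illustrative source: a doubly symmetric binary pair with crossover eps = 0.1.
eps = 0.1
joint = {(0, 0): (1 - eps) / 2, (0, 1): eps / 2,
         (1, 0): eps / 2, (1, 1): (1 - eps) / 2}
px = [0.5, 0.5]

# In Arikan's single-source guessing result, a constraint on the rho-th moment
# of the number of guesses brings in the order alpha = 1 / (1 + rho);
# rho = 1 gives alpha = 1/2.
alpha = 0.5
print(renyi_entropy(px, alpha))                  # 1.0 bit (uniform marginal)
print(arimoto_renyi_cond_entropy(joint, alpha))  # ~0.678 bits
```

As a sanity check, conditioning cannot increase the entropy here: for this correlated pair, H_α(X|Y) ≈ 0.678 bits is below the marginal H_α(X) = 1 bit, which is what makes a lower description rate sufficient when the other sequence's description is available.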