Message-Passing Algorithms and Improved LP Decoding


Bibliographic Details
Main Authors: Arora, Sanjeev, Daskalakis, Constantinos, Steurer, David
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE) 2021
Online Access: https://hdl.handle.net/1721.1/134246
Description
Summary: Linear programming (LP) decoding for low-density parity-check codes (and related domains such as compressed sensing) has received increased attention over recent years because of its practical performance, which comes close to that of iterative decoding algorithms, and its amenability to finite-blocklength analysis. Several works, starting with the work of Feldman, showed how to analyze LP decoding using properties of expander graphs. This line of analysis works only for low error rates, about a couple of orders of magnitude lower than the empirically observed performance. It is possible to do better for the case of random noise, as shown by Daskalakis and by Koetter and Vontobel. Building on the work of Koetter and Vontobel, we obtain a novel understanding of LP decoding, which allows us to establish a 0.05 fraction of correctable errors for rate-1/2 codes; this comes very close to the performance of iterative decoders and is significantly higher than the best previously noted correctable bit error rate for LP decoding. Our analysis exploits an explicit connection between LP decoding and message-passing algorithms and, unlike other techniques, works directly with the primal linear program. An interesting byproduct of our method is a notion of a locally optimal solution that we show to always be globally optimal (i.e., it is the nearest codeword). Such a solution can in fact be found in near-linear time by a reweighted version of the min-sum algorithm, obviating the need for LP. Our analysis implies, in particular, that this reweighted version of the min-sum decoder corrects up to a 0.05 fraction of errors. © 1963-2012 IEEE.
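
For context, the decoder discussed in the summary passes messages between the variable and check nodes of the code's Tanner graph. The sketch below implements the standard (unweighted) min-sum update rules for a binary LDPC code specified by a parity-check matrix; the paper's reweighted variant modifies the check-to-variable messages with specific weights that are not reproduced here. The function and variable names (min_sum_decode, H, llr) are illustrative, and the sketch assumes every parity check involves at least two variables.

import numpy as np

def min_sum_decode(H, llr, max_iters=50):
    # H: (m, n) binary parity-check matrix (0/1 numpy array).
    # llr: length-n channel log-likelihood ratios; positive values favor bit 0.
    # Returns a hard-decision estimate of the codeword (length-n 0/1 array).
    m, n = H.shape
    check_nbrs = [np.flatnonzero(H[j]) for j in range(m)]   # variables in check j
    var_nbrs = [np.flatnonzero(H[:, i]) for i in range(n)]  # checks touching variable i
    v_to_c = {(j, i): llr[i] for j in range(m) for i in check_nbrs[j]}
    c_to_v = {(j, i): 0.0 for j in range(m) for i in check_nbrs[j]}

    x_hat = (llr < 0).astype(int)
    for _ in range(max_iters):
        # Check-to-variable update: product of signs times minimum magnitude
        # over the other incoming messages on the same check.
        for j in range(m):
            msgs = np.array([v_to_c[(j, i)] for i in check_nbrs[j]])
            for k, i in enumerate(check_nbrs[j]):
                others = np.delete(msgs, k)
                sign = 1.0 if np.prod(np.sign(others)) >= 0 else -1.0
                c_to_v[(j, i)] = sign * np.min(np.abs(others))
        # Variable-to-check update: channel LLR plus the other incoming check messages.
        for i in range(n):
            for j in var_nbrs[i]:
                v_to_c[(j, i)] = llr[i] + sum(c_to_v[(jj, i)] for jj in var_nbrs[i] if jj != j)
        # Tentative decision; stop early once all parity checks are satisfied.
        marg = np.array([llr[i] + sum(c_to_v[(j, i)] for j in var_nbrs[i]) for i in range(n)])
        x_hat = (marg < 0).astype(int)
        if not np.any((H @ x_hat) % 2):
            break
    return x_hat

A toy usage, assuming a small parity-check matrix H and channel LLRs llr (e.g., from a binary symmetric channel), is simply x_hat = min_sum_decode(H, llr); the decoder returns once the syndrome is zero or the iteration limit is reached.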