List decoding

From Wikipedia, the free encyclopedia

In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that, instead of outputting a single possible message, the decoding algorithm outputs a list of possibilities, one of which is correct. This allows for handling a greater number of errors than unique decoding permits.

The unique decoding model in coding theory, which requires the decoder to output a single valid codeword for any received word, cannot tolerate a large fraction of errors. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming). Since the mid 1990s, significant algorithmic progress by the coding theory community has bridged this gap. Much of this progress is based on a relaxed error-correction model called list decoding, wherein the decoder outputs a list of codewords for worst-case pathological error patterns, with the guarantee that the actual transmitted codeword is included in the list. For typical error patterns, which is almost always the case, the decoder outputs a single unique codeword (however, this is not known to be true for all codes). The improvement is significant: the error-correction performance doubles, because the decoder is no longer confined by the half-the-minimum-distance barrier. The model is also appealing because having a list of codewords is certainly better than simply giving up. The notion of list decoding has many interesting applications in complexity theory.

The way the channel noise is modeled plays a crucial role in that it governs the rate at which reliable communication is possible. There are two main schools of thought in modeling the channel behavior:

  • Probabilistic noise model studied by Shannon, in which the channel noise is modeled precisely in the sense that the probabilistic behavior of the channel is well known and the probability of occurrence of too many or too few errors is low.
  • Worst-case or adversarial noise model considered by Hamming in which the channel acts as an adversary that arbitrarily corrupts the codeword subject to a bound on the total number of errors.

The highlight of list-decoding is that even under adversarial noise conditions, it is possible to achieve the information-theoretic optimal trade-off between rate and fraction of errors that can be corrected. Hence, in a sense this is like improving the error-correction performance to that possible in case of a weaker, stochastic noise model.

Mathematical formulation

Let $C$ be an $(n, k, d)_q$ error-correcting code; in other words, $C$ is a code of length $n$, dimension $k$ and minimum distance $d$ over an alphabet $\Sigma$ of size $q$. The list-decoding problem can now be formulated as follows:

Input: Received word $y \in \Sigma^n$, error bound $e$.

Output: A list of all codewords $c \in C$ whose Hamming distance from $y$ is at most $e$.
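As a concrete illustration, the problem statement above can be mirrored by a brute-force search over an explicitly listed code. This is only a minimal sketch for intuition, not an efficient algorithm; the toy code used below is hypothetical:

```python
def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def list_decode(code, received, e):
    """Return every codeword within Hamming distance e of the received
    word -- a literal transcription of the list-decoding problem.
    `code` is an explicit list of codewords given as tuples."""
    return [c for c in code if hamming_distance(c, received) <= e]

# A toy length-4 code over the binary alphabet:
code = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 0, 1, 1)]
print(list_decode(code, (0, 0, 0, 1), 1))  # [(0, 0, 0, 0), (0, 0, 1, 1)]
```

Note that with error bound $e = 1$ the received word here lies within distance 1 of two codewords, so no unique answer exists and the decoder must output a list.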

Motivation for list decoding

Given a received word $y$, which is a noisy version of some transmitted codeword $c$, the decoder tries to output the transmitted codeword by placing its bet on a codeword that is "nearest" to the received word. The Hamming distance between two codewords is used as the metric in finding the nearest codeword. If $d$ is the minimum Hamming distance of a code $C$, then there exist two codewords $c_1$ and $c_2$ that differ in exactly $d$ positions. Now, in the case where the received word $y$ is equidistant from the codewords $c_1$ and $c_2$, unambiguous decoding becomes impossible, as the decoder cannot decide which of $c_1$ and $c_2$ to output as the original transmitted codeword. As a result, half the minimum distance acts as a combinatorial barrier beyond which unambiguous error correction is impossible, if we only insist on unique decoding. However, received words such as $y$ considered above occur only in the worst case, and if one looks at the way Hamming balls are packed in high-dimensional space, even for error patterns beyond half the minimum distance, there is often only a single codeword within distance $e$ from the received word. This claim has been shown to hold with high probability for a random code picked from a natural ensemble, and more so for the case of Reed–Solomon codes, which are well studied and quite ubiquitous in real-world applications. In fact, Shannon's proof of the capacity theorem for $q$-ary symmetric channels can be viewed in light of the above claim for random codes.

Under the mandate of list-decoding, for worst-case errors, the decoder is allowed to output a small list of codewords. With some context specific or side information, it may be possible to prune the list and recover the original transmitted codeword. Hence, in general, this seems to be a stronger error-recovery model than unique decoding.

List-decoding potential

For a polynomial-time list-decoding algorithm to exist, we need the combinatorial guarantee that any Hamming ball of radius $pn$ around a received word $y$ (where $p$ is the fraction of errors in terms of the block length $n$) has a small number of codewords. This is because the list size itself is clearly a lower bound on the running time of the algorithm. Hence, we require the list size to be a polynomial in the block length $n$ of the code. A combinatorial consequence of this requirement is that it imposes an upper bound on the rate of a code. List decoding promises to meet this upper bound. It has been shown non-constructively that codes of rate $R$ exist that can be list decoded up to a fraction of errors approaching $1 - R$. The quantity $1 - R$ is referred to in the literature as the list-decoding capacity. This is a substantial gain compared to the unique decoding model, as we now have the potential to correct twice as many errors. Naturally, we need at least a fraction $R$ of the transmitted symbols to be correct in order to recover the message. This is an information-theoretic lower bound on the number of correct symbols required to perform decoding, and with list decoding, we can potentially achieve this information-theoretic limit. However, to realize this potential, we need explicit codes (codes that can be constructed in polynomial time) and efficient algorithms to perform encoding and decoding.

(p,L)-list-decodability

For any error fraction $0 \le p \le 1$ and an integer $L \ge 1$, a code $C \subseteq \Sigma^n$ is said to be list decodable up to a fraction $p$ of errors with list size at most $L$, or $(p, L)$-list-decodable, if for every $y \in \Sigma^n$, the number of codewords $c \in C$ within Hamming distance $pn$ from $y$ is at most $L$.
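For very small codes, this definition can be verified directly by checking every possible received word. A minimal brute-force sketch (exponential in the block length, for illustration only; the helper below is ours, not a standard library routine):

```python
from itertools import product

def is_list_decodable(code, alphabet, n, p, L):
    """Brute-force check of (p, L)-list-decodability: every word y in
    alphabet^n must have at most L codewords within Hamming distance
    p*n of it. Exponential in n; a definition checker, not a decoder."""
    radius = p * n
    for y in product(alphabet, repeat=n):
        matches = sum(
            1 for c in code
            if sum(a != b for a, b in zip(c, y)) <= radius
        )
        if matches > L:
            return False
    return True

# The binary repetition code of length 3 is (1/3, 1)-list-decodable:
rep3 = [(0, 0, 0), (1, 1, 1)]
print(is_list_decodable(rep3, [0, 1], 3, 1/3, 1))  # True
```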

Combinatorics of list decoding

The relation between the list decodability of a code and other fundamental parameters, such as minimum distance and rate, has been fairly well studied. It has been shown that every code can be list decoded using small lists beyond half the minimum distance, up to a bound called the Johnson radius. This is quite significant because it proves the existence of $(p, L)$-list-decodable codes of good rate with a list-decoding radius much larger than $d/2$. In other words, the Johnson bound rules out the possibility of having a large number of codewords in a Hamming ball of radius slightly greater than $d/2$, which means that it is possible to correct far more errors with list decoding.
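Numerically, the gain over unique decoding is easy to see. The sketch below uses the large-alphabet form of the Johnson radius, $J(\delta) = 1 - \sqrt{1 - \delta}$ for relative distance $\delta$, and compares it with the unique-decoding radius $\delta/2$ (the function name is ours):

```python
import math

def johnson_radius(delta):
    """Large-alphabet Johnson radius J(delta) = 1 - sqrt(1 - delta)
    for a code of relative distance delta."""
    return 1 - math.sqrt(1 - delta)

# Compare with the unique-decoding radius delta / 2:
for delta in (0.25, 0.5, 0.75):
    print(delta, round(delta / 2, 3), round(johnson_radius(delta), 3))
```

For every $0 < \delta < 1$ the Johnson radius strictly exceeds $\delta/2$, which is exactly the advantage of list decoding described above.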

List-decoding capacity

Theorem (List-Decoding Capacity). Let $q \ge 2$, $0 \le p \le 1 - 1/q$ and $\epsilon > 0$. The following two statements hold for large enough block length $n$.
i) If $R \le 1 - H_q(p) - \epsilon$, then there exists a $(p, O(1/\epsilon))$-list-decodable code of rate $R$.
ii) If $R \ge 1 - H_q(p) + \epsilon$, then every $(p, L)$-list-decodable code of rate $R$ has $L = q^{\Omega(n)}$.
Here
$H_q(x) = x \log_q(q - 1) - x \log_q x - (1 - x) \log_q(1 - x)$
is the $q$-ary entropy function, defined for $x \in (0, 1)$ and extended by continuity to $[0, 1]$.

What this means is that for rates approaching the channel capacity, there exist list-decodable codes with polynomial-sized lists, enabling efficient decoding algorithms, whereas for rates exceeding the channel capacity, the list size becomes exponential, which rules out the existence of efficient list-decoding algorithms.
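These quantities are easy to compute. A small sketch of the $q$-ary entropy function and the resulting capacity $1 - H_q(p)$ (function names are illustrative):

```python
import math

def h_q(x, q):
    """q-ary entropy function H_q(x), extended by continuity at 0 and 1."""
    if x == 0:
        return 0.0
    if x == 1:
        return math.log(q - 1, q)
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def list_decoding_capacity(p, q):
    """Best achievable rate, 1 - H_q(p), when list decoding a fraction p
    of adversarial errors over an alphabet of size q."""
    return 1 - h_q(p, q)

# Binary alphabet, 10% adversarial errors:
print(round(list_decoding_capacity(0.1, 2), 3))  # 0.531
```

For $q = 2$, $1 - H_2(p)$ is precisely the Shannon capacity of the binary symmetric channel with crossover probability $p$.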

The proof for list-decoding capacity is significant in that it exactly matches the capacity of a $q$-ary symmetric channel. In fact, the term "list-decoding capacity" should actually be read as the capacity of an adversarial channel under list decoding. Also, the proof for list-decoding capacity is an important result that pinpoints the optimal trade-off between the rate of a code and the fraction of errors that can be corrected under list decoding.

Sketch of proof

The idea behind the proof is similar to that of Shannon's proof for the capacity of the binary symmetric channel: a random code is picked and shown to be $(p, L)$-list-decodable with high probability as long as the rate $R \le 1 - H_q(p) - 1/L$. For rates exceeding this quantity, it can be shown that the list size $L$ becomes super-polynomially large.

A "bad" event is defined as one in which, given a received wordandmessagesit so happens that,for everywhereis the fraction of errors that we wish to correct andis the Hamming ball of radiuswith the received wordas the center.

Now, the probability that a codeword $C(m_i)$ associated with a fixed message $m_i$ lies in the Hamming ball $B(y, pn)$ is given by

$\Pr[C(m_i) \in B(y, pn)] = \frac{\operatorname{Vol}_q(y, pn)}{q^n} \le q^{-n(1 - H_q(p))},$

where the quantity $\operatorname{Vol}_q(y, pn)$ is the volume of a Hamming ball of radius $pn$ with the received word $y$ as the center. The inequality in the above relation follows from the upper bound on the volume of a Hamming ball. The quantity $q^{H_q(p)n}$ gives a very good estimate on the volume of a Hamming ball of radius $pn$ centered on any word in $[q]^n$; put another way, the volume of a Hamming ball is translation invariant. To continue with the proof sketch, we invoke the union bound in probability theory, which tells us that the probability of a bad event happening for a given $(y, m_0, \ldots, m_L)$ is upper bounded by $q^{-n(L+1)(1 - H_q(p))}$.
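The volume bound used above can be checked exactly for moderate parameters. A minimal sketch computing the volume by direct summation and comparing it against $q^{H_q(p)n}$ (the bound holds for $p \le 1 - 1/q$):

```python
import math

def hamming_ball_volume(n, r, q):
    """Exact number of words within Hamming distance r of any fixed
    center in an alphabet of size q (translation invariant)."""
    return sum(math.comb(n, i) * (q - 1) ** i for i in range(r + 1))

def h_q(x, q):
    """q-ary entropy function (with continuity at x = 0)."""
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

# Check Vol_q(y, pn) <= q^(H_q(p) n) for p <= 1 - 1/q:
n, q, p = 100, 2, 0.3
vol = hamming_ball_volume(n, int(p * n), q)
print(vol <= q ** (h_q(p, q) * n))  # True
```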

With the above in mind, the probability of "any" bad event happening can be shown to be less than $q^{-\Omega(n)}$ whenever $R \le 1 - H_q(p) - 1/L$. To show this, we take a union bound over all possible received words $y \in [q]^n$ and every possible subset of $L + 1$ messages.

Now turning to the proof of part (ii), we need to show that there are super-polynomially many codewords around every $y$ when the rate exceeds the list-decoding capacity. We need to show that $|C \cap B(y, pn)|$ is super-polynomially large if the rate $R \ge 1 - H_q(p) + \epsilon$. Fix a codeword $c \in C$. Now, for every $y$ picked at random, we have

$\Pr[c \in B(y, pn)] = \Pr[y \in B(c, pn)]$

since Hamming balls are translation invariant. From the definition of the volume of a Hamming ball and the fact that $y$ is chosen uniformly at random from $[q]^n$, we also have

$\Pr[c \in B(y, pn)] = \frac{\operatorname{Vol}_q(y, pn)}{q^n} \ge q^{-n(1 - H_q(p)) - o(n)}.$

Let us now define an indicator variable $X_c$ such that

$X_c = \begin{cases} 1, & c \in B(y, pn) \\ 0, & \text{otherwise.} \end{cases}$

Taking the expectation of the number of codewords in the Hamming ball, we have

$E\big[\,|C \cap B(y, pn)|\,\big] = \sum_{c \in C} E[X_c] = \sum_{c \in C} \Pr[c \in B(y, pn)] \ge q^{Rn} \cdot q^{-n(1 - H_q(p)) - o(n)} = q^{n(R - 1 + H_q(p)) - o(n)} \ge q^{\Omega(n)}$

whenever $R \ge 1 - H_q(p) + \epsilon$.

Therefore, by the probabilistic method, we have shown that if the rate exceeds the list-decoding capacity, then the list size becomes super-polynomially large. This completes the proof sketch for the list-decoding capacity.

List-decodability of Reed–Solomon codes

In 2023, building upon three seminal works,[1][2][3] coding theorists showed that, with high probability, Reed–Solomon codes defined over random evaluation points are list decodable up to the list-decoding capacity over linear-sized alphabets.

List-decoding algorithms

In the period from 1995 to 2007, the coding theory community developed progressively more efficient list-decoding algorithms. Algorithms for Reed–Solomon codes exist that can decode up to the Johnson radius, which is $1 - \sqrt{1 - \delta}$, where $\delta$ is the normalised distance or relative distance. For Reed–Solomon codes, $\delta = 1 - R$, which means a fraction $1 - \sqrt{R}$ of errors can be corrected. Some of the most prominent list-decoding algorithms are the following:

  • Sudan '95 – The first known non-trivial list-decoding algorithm for Reed–Solomon codes, developed by Madhu Sudan, that achieved efficient list decoding up to a fraction $1 - \sqrt{2R}$ of errors.
  • Guruswami–Sudan '98 – An improvement on the above algorithm for list decoding Reed–Solomon codes up to a fraction $1 - \sqrt{R}$ of errors, by Madhu Sudan and his then doctoral student Venkatesan Guruswami.
  • Parvaresh–Vardy '05 – In a breakthrough paper, Farzad Parvaresh and Alexander Vardy presented codes that can be list decoded beyond the $1 - \sqrt{R}$ radius for low rates $R$. Their codes are variants of Reed–Solomon codes obtained by evaluating $m \ge 2$ correlated polynomials instead of just the single polynomial $f(X)$ as in the case of usual Reed–Solomon codes.
  • Guruswami–Rudra '06 – In yet another breakthrough, Venkatesan Guruswami and Atri Rudra gave explicit codes that achieve list-decoding capacity, that is, they can be list decoded up to the radius $1 - R - \epsilon$ for any $\epsilon > 0$. In other words, this is error correction with optimal redundancy. This answered a question that had been open for about 50 years. The work was invited to the Research Highlights section of the Communications of the ACM (which is "devoted to the most important research results published in Computer Science in recent years") and was mentioned in an article titled "Coding and Computing Join Forces" in the Sep 21, 2007 issue of Science. The codes that they give are called folded Reed–Solomon codes, which are nothing but plain Reed–Solomon codes viewed as codes over a larger alphabet by careful bundling of codeword symbols.
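The decoding radii quoted above are easy to compare numerically as a function of the rate $R$ (a sketch; the function names are ours):

```python
import math

def unique_radius(R):
    """Half the minimum distance; for Reed-Solomon codes delta = 1 - R."""
    return (1 - R) / 2

def sudan_radius(R):
    """Sudan '95 decoding radius 1 - sqrt(2R)."""
    return 1 - math.sqrt(2 * R)

def guruswami_sudan_radius(R):
    """Guruswami-Sudan '98 radius 1 - sqrt(R), the Johnson radius for RS."""
    return 1 - math.sqrt(R)

def capacity_radius(R):
    """List-decoding capacity 1 - R, achieved by folded RS codes."""
    return 1 - R

for R in (0.1, 0.25, 0.5):
    print(R, round(unique_radius(R), 3), round(sudan_radius(R), 3),
          round(guruswami_sudan_radius(R), 3), round(capacity_radius(R), 3))
```

Note that $1 - \sqrt{2R}$ improves on half the minimum distance only for low rates (below $3 - 2\sqrt{2} \approx 0.17$), while $1 - \sqrt{R}$ beats it for every rate $0 < R < 1$.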

Because of their ubiquity and the nice algebraic properties they possess, list-decoding algorithms for Reed–Solomon codes were a main focus of researchers. The list-decoding problem for Reed–Solomon codes can be formulated as follows:

Input: For an $[n, k+1]_q$ Reed–Solomon code, we are given the pairs $(\alpha_i, y_i)$ for $1 \le i \le n$, where $y_i$ is the $i$th symbol of the received word, the $\alpha_i$'s are distinct points in the finite field $F_q$, and $e = n - t$ is an error parameter.

Output: The goal is to find all the polynomials $P(X) \in F_q[X]$ of degree at most $k$, which is the message length, such that $P(\alpha_i) = y_i$ for at least $t$ values of $i$. Here, we would like to have $t$ as small as possible so that a greater number of errors can be tolerated.

With the above formulation, the general structure of list-decoding algorithms for Reed-Solomon codes is as follows:

Step 1: (Interpolation) Find a non-zero bivariate polynomial $Q(X, Y)$ such that $Q(\alpha_i, y_i) = 0$ for $1 \le i \le n$.

Step 2: (Root finding/Factorization) Output all degree-$k$ polynomials $p(X)$ such that $Y - p(X)$ is a factor of $Q(X, Y)$, i.e. $Q(X, p(X)) = 0$. For each of these polynomials, check if $p(\alpha_i) = y_i$ for at least $t$ values of $i$. If so, include such a polynomial $p(X)$ in the output list.

Given the fact that bivariate polynomials can be factored efficiently, the above algorithm runs in polynomial time.
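The two-step structure can be illustrated end to end on a toy example. The sketch below works over the prime field GF(11), finds an interpolating $Q(X, Y)$ by Gaussian elimination, and replaces the efficient factorization step with a brute-force search over low-degree candidates, checking $Q(x, f(x)) = 0$ at every field point (sufficient here since $\deg Q(X, f(X)) < 11$). All names and parameter choices are illustrative, not the optimal ones from the Guruswami–Sudan analysis:

```python
from itertools import product

def poly_eval(coeffs, x, p):
    """Evaluate a polynomial (coefficients listed low degree first) mod p."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

def nullspace_vector(rows, ncols, p):
    """Gaussian elimination mod a prime p; return a nonzero vector v
    with A v = 0 (assumes more columns than the rank, so one exists)."""
    rows = [r[:] for r in rows]
    pivots = {}  # pivot column -> row index in reduced form
    r = 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], p - 2, p)  # inverse via Fermat's little theorem
        rows[r] = [(v * inv) % p for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[r])]
        pivots[col] = r
        r += 1
    free = next(c for c in range(ncols) if c not in pivots)
    v = [0] * ncols
    v[free] = 1
    for col, row in pivots.items():
        v[col] = (-rows[row][free]) % p
    return v

def toy_list_decode(points, k, t, dx, dy, p):
    """Step 1: interpolate a nonzero Q with deg_X <= dx, deg_Y <= dy
    vanishing at all points. Step 2: brute-force root finding over all
    degree-<=k candidates f, keeping those agreeing with >= t points."""
    monomials = [(i, j) for i in range(dx + 1) for j in range(dy + 1)]
    rows = [[pow(a, i, p) * pow(b, j, p) % p for (i, j) in monomials]
            for (a, b) in points]
    q_coeffs = nullspace_vector(rows, len(monomials), p)

    def Q(x, y):
        return sum(c * pow(x, i, p) * pow(y, j, p)
                   for c, (i, j) in zip(q_coeffs, monomials)) % p

    output = []
    for coeffs in product(range(p), repeat=k + 1):
        if all(Q(x, poly_eval(coeffs, x, p)) == 0 for x in range(p)):
            agree = sum(poly_eval(coeffs, a, p) == b for a, b in points)
            if agree >= t:
                output.append(coeffs)
    return output

# Transmit f(X) = 1 + 2X over GF(11) at alphas 0..9 and corrupt 3 symbols.
p, k = 11, 1
alphas = list(range(10))
received = [poly_eval((1, 2), a, p) for a in alphas]
received[0], received[1], received[2] = 5, 0, 9
result = toy_list_decode(list(zip(alphas, received)), k, t=7, dx=3, dy=2, p=p)
print(result)  # [(1, 2)]
```

Real implementations replace the brute-force second step with an efficient bivariate root-finding or factorization routine, which is what makes the overall algorithm run in polynomial time.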

Applications in complexity theory and cryptography

Algorithms developed for list decoding of several interesting code families have found interesting applications in computational complexity and the field of cryptography.

References

  1. ^ Brakensiek, Joshua; Gopi, Sivakanth; Makam, Visu (2023-06-02). "Generic Reed-Solomon Codes Achieve List-Decoding Capacity". Proceedings of the 55th Annual ACM Symposium on Theory of Computing. STOC 2023. New York, NY, USA: Association for Computing Machinery. pp. 1488–1501. arXiv:2206.05256. doi:10.1145/3564246.3585128. ISBN 978-1-4503-9913-5.
  2. ^ Guo, Zeyu; Zhang, Zihan (2023-11-06). "Randomly Punctured Reed-Solomon Codes Achieve the List Decoding Capacity over Polynomial-Size Alphabets". 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS). FOCS 2023, Santa Cruz, CA, USA. IEEE. pp. 164–176. arXiv:2304.01403. doi:10.1109/FOCS57990.2023.00019. ISBN 979-8-3503-1894-4.
  3. ^ Alrabiah, Omar; Guruswami, Venkatesan; Li, Ray (2023). "Randomly punctured Reed–Solomon codes achieve list-decoding capacity over linear-sized fields". arXiv:2304.09445 [cs.IT].