
Shannon's main results on source coding and channel coding were published in 1948, and within a few years a provably optimal source coding scheme (Huffman codes) was discovered. The search for capacity-achieving channel codes, on the other hand, took several decades, until the discovery of polar codes by Erdal Arikan in 2008. Arikan proved that polar codes achieve the capacity of binary-input output-symmetric channels. Previously known codes such as LDPC codes and turbo codes are only known empirically to come close to the capacity of such channels.

The main idea behind polar codes is to take many independent copies of a channel and synthesize from them a new set of channels, almost all of which are either nearly perfect or nearly useless.
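Polarization is easiest to see on the binary erasure channel, where the one-step recursion for the erasure probability is exact: combining two independent copies of BEC(e) yields a worse channel BEC(2e - e^2) and a better one BEC(e^2). The sketch below (our own illustration; the function names are ours, and for channels like the BSC these recursions only bound the relevant channel parameters) iterates this transform and counts how many synthesized channels end up near the extremes.

```python
def polarize(eps, levels):
    """Apply `levels` rounds of the polar transform to BEC(eps).

    Each round replaces every channel with erasure rate e by the pair
    (2e - e^2, e^2): one worse channel and one better channel.
    """
    channels = [eps]
    for _ in range(levels):
        channels = [z for e in channels for z in (2 * e - e * e, e * e)]
    return channels

chans = polarize(0.5, 10)                   # 2**10 = 1024 synthesized channels
good = sum(1 for e in chans if e < 0.01)    # erasure rate near 0: almost perfect
bad = sum(1 for e in chans if e > 0.99)     # erasure rate near 1: almost useless
```

Note that the average erasure rate is preserved at every step, since (2e - e^2 + e^2)/2 = e; as the number of levels grows, the fraction of near-perfect channels approaches the capacity 1 - eps.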

As an example, let us use the binary symmetric channel, which is characterized by the crossover probability p. If p = 0, then we have a perfect channel, and the optimal (capacity-achieving) strategy is to transmit an uncoded sequence of uniform i.i.d. bits. If p = 0.5, the channel outputs are independent of the channel inputs, resulting in zero capacity. This is a useless channel, and so an optimal strategy is to not use the channel at all.
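These two extreme cases follow from the BSC capacity formula C = 1 - H_b(p), where H_b is the binary entropy function. A minimal sketch (the function names are ours):

```python
import math

def h_b(p):
    """Binary entropy function H_b(p) in bits, with 0*log2(0) taken as 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel: C = 1 - H_b(p)."""
    return 1.0 - h_b(p)

perfect = bsc_capacity(0.0)   # 1.0: one bit per channel use, uncoded
useless = bsc_capacity(0.5)   # 0.0: output independent of input
```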

"Mediocre" channels are more subtle. If p is approximately equal to 0.11, the capacity of the BSC is 1 - H_b(p) ≈ 0.5. According to the channel coding theorem, arbitrarily low error probability can be reached at any rate below capacity using a sufficiently large blocklength. However, the theorem does not tell us how large the blocklength should be, nor does it provide a practical way of encoding and decoding the message (see Module 4 exercise).
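The value p ≈ 0.11 can be checked numerically: since H_b is increasing on (0, 0.5), bisection finds the crossover probability at which the capacity 1 - H_b(p) equals exactly 1/2. A small sketch (the bisection bounds and iteration count are our own choices):

```python
import math

def h_b(p):
    """Binary entropy function H_b(p) in bits (assumes 0 < p < 1)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Bisect on (0, 0.5), where H_b is strictly increasing,
# for the p satisfying H_b(p) = 0.5, i.e. capacity 1 - H_b(p) = 1/2.
lo, hi = 1e-12, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if h_b(mid) < 0.5:
        lo = mid
    else:
        hi = mid
crossover = (lo + hi) / 2   # roughly 0.110, matching the text
```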