Binary symmetric channel

A binary symmetric channel (or BSCp) is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver will receive a bit. The bit will be "flipped" with a "crossover probability" of p, and otherwise is received correctly. This model can be applied to a variety of communication channels, such as telephone lines or disk drive storage.

The noisy-channel coding theorem applies to BSCp, saying that information can be transmitted at any rate up to the channel capacity with arbitrarily low error. The channel capacity is $$1 - \operatorname H_\text{b}(p)$$ bits, where $$\operatorname H_\text{b}$$ is the binary entropy function. Codes including Forney's code have been designed to transmit information efficiently across the channel.

Definition
A binary symmetric channel with crossover probability $$p$$, denoted by BSCp, is a channel with binary input and binary output and probability of error $$p$$. That is, if $$X$$ is the transmitted random variable and $$Y$$ the received variable, then the channel is characterized by the conditional probabilities:


 * $$\begin{align}

\operatorname {Pr} [ Y = 0 | X = 0 ] &= 1 - p \\ \operatorname {Pr} [ Y = 0 | X = 1 ] &= p \\ \operatorname {Pr} [ Y = 1 | X = 0 ] &= p \\ \operatorname {Pr} [ Y = 1 | X = 1 ] &= 1 - p \end{align}$$

It is assumed that $$0 \le p \le 1/2$$. If $$p > 1/2$$, then the receiver can swap the output (interpret 1 when it sees 0, and vice versa) and obtain an equivalent channel with crossover probability $$1 - p \le 1/2$$.
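The behaviour just defined is easy to simulate. The following Python sketch (the helper name `bsc` is ours, not standard) flips each transmitted bit independently with probability $$p$$ and checks that the empirical flip rate is close to the crossover probability:

```python
import random

def bsc(bits, p, rng):
    # Flip each bit independently with crossover probability p.
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(0)
n, p = 100_000, 0.1
received = bsc([0] * n, p, rng)
flip_rate = sum(received) / n   # empirical estimate of the crossover probability
```

With 100,000 transmitted zeros and $$p = 0.1$$, roughly a tenth of the received bits come out as ones.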

Capacity


The channel capacity of the binary symmetric channel, in bits, is:
 * $$\ C_{\text{BSC}} = 1 - \operatorname H_\text{b}(p), $$

where $$\operatorname H_\text{b}(p)$$ is the binary entropy function, defined by:
 * $$\operatorname H_\text{b}(x)=x\log_2\frac{1}{x}+(1-x)\log_2\frac{1}{1-x}$$
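As a numerical illustration, here is a minimal Python sketch of these two formulas (the function names are ours):

```python
from math import log2

def binary_entropy(x):
    # H_b(x) = x log2(1/x) + (1 - x) log2(1/(1 - x)), with H_b(0) = H_b(1) = 0.
    if x in (0.0, 1.0):
        return 0.0
    return x * log2(1 / x) + (1 - x) * log2(1 / (1 - x))

def bsc_capacity(p):
    # C = 1 - H_b(p) bits per channel use.
    return 1 - binary_entropy(p)

capacity = bsc_capacity(0.1)   # about 0.53 bits per use
```

A noiseless channel ($$p = 0$$) has capacity 1 bit per use, while $$p = 1/2$$ gives capacity 0: the output is then independent of the input.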


Proof
 * The capacity is defined as the maximum mutual information between input and output for all possible input distributions $$p_X(x)$$:


 * $$ C = \max_{p_X(x)} \left \{\, I(X;Y)\, \right \} $$

The mutual information can be reformulated as


 * $$\begin{align}

I(X;Y) &= H(Y) - H(Y|X) \\ &= H(Y) - \sum_{x \in \{0,1\} }{p_X(x) H(Y|X=x)} \\ &= H(Y) - \sum_{x \in \{0,1\} }{p_X(x)} \operatorname H_\text{b}(p) \\ &= H(Y) - \operatorname H_\text{b}(p), \end{align}$$ where the first and second steps follow from the definitions of mutual information and conditional entropy, respectively. The entropy of the output for a given, fixed input symbol, $$H(Y|X=x)$$, equals the binary entropy function $$\operatorname H_\text{b}(p)$$, which gives the third line; the sum over $$p_X(x)$$ then collapses to 1, giving the last.

In the last line, only the first term $$H(Y)$$ depends on the input distribution $$p_X(x)$$. The entropy of a binary variable is at most 1 bit, and equality is attained if its probability distribution is uniform. It therefore suffices to exhibit an input distribution that yields a uniform probability distribution for the output $$Y$$. For this, note that it is a property of any binary symmetric channel that a uniform probability distribution of the input results in a uniform probability distribution of the output. Hence the value $$H(Y)$$ will be 1 when we choose a uniform distribution for $$p_X(x)$$. We conclude that the channel capacity for our binary symmetric channel is $$C_{\text{BSC}}=1-\operatorname H_\text{b}(p)$$.
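The maximization step can be checked numerically. A small Python sketch, assuming the decomposition $$I(X;Y) = H(Y) - \operatorname H_\text{b}(p)$$ derived above, scans input distributions with $$P(X=1) = \pi$$ and confirms the uniform one wins:

```python
from math import log2

def H(dist):
    # Shannon entropy (in bits) of a probability distribution.
    return -sum(q * log2(q) for q in dist if q > 0)

def mutual_information(pi, p):
    # I(X;Y) = H(Y) - H_b(p) for a BSC_p with P(X = 1) = pi.
    py1 = pi * (1 - p) + (1 - pi) * p   # P(Y = 1)
    return H([py1, 1 - py1]) - H([p, 1 - p])

p = 0.1
grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda pi: mutual_information(pi, p))
# best == 0.5: the uniform input attains the maximum, 1 - H_b(p).
```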

Noisy-channel coding theorem
Shannon's noisy-channel coding theorem gives a result about the rate of information that can be transmitted through a communication channel with arbitrarily low error. We study the particular case of $$\text{BSC}_p$$.

The noise $$e$$ that characterizes $$\text{BSC}_{p}$$ is a random variable consisting of n independent random bits (n is defined below) where each random bit is a $$1$$ with probability $$p$$ and a $$0$$ with probability $$1-p$$. We indicate this by writing "$$e \in \text{BSC}_{p}$$".

The theorem implies that when a message is picked from $$\{0,1\}^k$$, encoded with a random encoding function $$E$$, and sent across a noisy $$\text{BSC}_{p}$$, the original message can be recovered by decoding with very high probability, provided $$k$$ (and hence the rate of the code) is bounded by the quantity stated in the theorem. The decoding error probability is exponentially small.

Proof
The theorem can be proved directly with a probabilistic method. Consider an encoding function $$E: \{0,1\}^k \to \{0,1\}^n$$ that is selected at random. This means that for each message $$m \in \{0,1\}^k$$, the value $$E(m) \in \{0,1\}^n$$ is selected at random (with equal probabilities). For a given encoding function $$E$$, the decoding function $$D:\{0,1\}^n \to \{0,1\}^k$$ is specified as follows: given any received codeword $$y \in \{0,1\}^n$$, we find the message $$m\in\{0,1\}^{k}$$ such that the Hamming distance $$\Delta(y, E(m))$$ is as small as possible (with ties broken arbitrarily). ($$D$$ is called a maximum likelihood decoding function.)
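The decoder $$D$$ described above can be sketched directly in Python; the tiny repetition codebook below is purely illustrative and not part of the construction:

```python
def hamming_distance(a, b):
    # Number of positions in which two words differ.
    return sum(x != y for x, y in zip(a, b))

def ml_decode(y, codebook):
    # Minimum-distance (maximum-likelihood for p < 1/2) decoding: return the
    # message whose codeword is nearest to y; min() breaks ties arbitrarily
    # by scan order.
    return min(codebook, key=lambda m: hamming_distance(y, codebook[m]))

# Toy codebook: 3-bit repetition code (k = 1, n = 3), chosen for illustration.
codebook = {0: (0, 0, 0), 1: (1, 1, 1)}
decoded = ml_decode((0, 1, 0), codebook)   # a single flip is corrected
```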

The proof continues by showing that at least one choice $$(E,D)$$ satisfies the conclusion of the theorem, by averaging over the random choice of $$E$$. Suppose $$p$$ and $$\epsilon$$ are fixed. First we show that, for a fixed $$m \in \{0,1\}^{k}$$ and $$E$$ chosen randomly, the probability of failure over $$\text{BSC}_p$$ noise is exponentially small in $$n$$. At this point, the proof works for a fixed message $$m$$. Next we extend this result to work for all messages $$m$$ simultaneously. We achieve this by eliminating half of the codewords from the code, arguing that the bound on the decoding error probability holds for at least half of the codewords. This latter method is called expurgation, and it gives the total process the name random coding with expurgation.


Continuation of proof (sketch)
 * Fix $$p$$ and $$\epsilon$$. Given a fixed message $$m \in \{0,1\}^{k}$$, we need to estimate the expected probability (over the random choice of $$E$$) that the received word $$E(m) + e$$ does not decode back to $$m$$. That is, we need to estimate:


 * $$\mathbb{E}_{E} \left [\Pr_{e \in \text{BSC}_p}[D(E(m) + e) \neq m] \right ].$$

Let $$y$$ be the received codeword. In order for the decoded codeword $$D(y)$$ not to be equal to the message $$m$$, one of the following events must occur:


 * $$y$$ does not lie within the Hamming ball of radius $$(p+\epsilon)n$$ centered at $$E(m)$$. This condition is mainly used to make the calculations easier.
 * There is another message $$m' \in \{0,1\}^k$$ such that $$\Delta(y, E(m')) \leqslant \Delta(y, E(m))$$. In other words, the errors due to noise take the transmitted codeword closer to another encoded message.

We can apply the Chernoff bound to bound the probability that the first event occurs; we get:


 * $$\Pr_{e \in \text{BSC}_p} [\Delta(y, E(m)) > (p+\epsilon)n] \leqslant 2^{-{\epsilon^2}n}.$$

This is exponentially small for large $$n$$ (recall that $$\epsilon$$ is fixed).

For the second event, we note that the probability that $$E(m') \in B(y,(p+\epsilon)n)$$ is $$\text{Vol}(B(y,(p+\epsilon)n))/2^n$$, where $$B(x, r)$$ is the Hamming ball of radius $$r$$ centered at vector $$x$$ and $$\text{Vol}(B(x, r))$$ is its volume. Using the entropy approximation for the number of vectors in a Hamming ball, we have $$\text{Vol}(B(y,(p+\epsilon)n)) \approx 2^{H(p+\epsilon)n}$$. Hence the above probability amounts to $$2^{H(p+\epsilon)n}/2^n = 2^{H(p+\epsilon)n-n}$$. By the union bound over all $$2^k$$ messages $$m'$$, the probability that such an $$m' \in \{0,1\}^k$$ exists is at most $$2^{k+H(p+\epsilon)n-n}$$, which is $$2^{-\Omega(n)}$$ by the choice of $$k$$.
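The volume estimate used here can be checked numerically; a short Python sketch (the parameters $$n = 200$$, $$p = 0.1$$ are arbitrary):

```python
from math import comb, log2

def ball_volume(n, r):
    # Number of binary strings within Hamming distance r of a fixed center.
    return sum(comb(n, i) for i in range(r + 1))

def Hb(x):
    # Binary entropy function in bits.
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

n, p = 200, 0.1
r = round(p * n)
exponent = log2(ball_volume(n, r))
# exponent is close to (and never exceeds) H_b(p) * n, matching Vol ~ 2^{H(p)n}.
```

The exact volume never exceeds $$2^{\operatorname H_\text{b}(p) n}$$ for $$p \le 1/2$$, and its logarithm matches $$\operatorname H_\text{b}(p) n$$ up to lower-order terms.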


Continuation of proof (detailed)
 * From the above analysis, we calculate the probability of the event that the transmitted codeword, corrupted by the channel noise, does not decode to the original message. We introduce some notation: let $$p(y|E(m))$$ denote the probability of receiving word $$y$$ given that codeword $$E(m)$$ was sent, and let $$B_0$$ denote $$B(E(m),(p+\epsilon)n).$$


 * $$\begin{align}

\Pr_{e \in \text{BSC}_p}[D(E(m) + e) \neq m] &= \sum_{y \in \{0,1\}^{n}} p(y|E(m))\cdot 1_{D(y)\neq m} \\ &\leqslant \sum_{y \notin B_0} p(y|E(m)) \cdot 1_{D(y)\neq m} + \sum_{y \in B_0} p(y|E(m))\cdot 1_{D(y)\neq m} \\ &\leqslant 2^{-{\epsilon^2}n} + \sum_{y \in B_0} p(y|E(m)) \cdot 1_{D(y)\neq m} \end{align}$$

We get the last inequality by our analysis using the Chernoff bound above. Now taking expectation on both sides we have,


 * $$\begin{align}

\mathbb{E}_E \left [\Pr_{e \in \text{BSC}_p} [D(E(m) + e) \neq m] \right ] &\leqslant 2^{-{\epsilon^2}n} + \sum_{y \in B_0} p(y|E(m)) \mathbb{E}[1_{D(y)\neq m}] \\ &\leqslant 2^{-{\epsilon^2}n} + \sum_{y \in B_0} \mathbb{E}[1_{D(y)\neq m}] && \sum_{y \in B_0} p(y|E(m)) \leqslant 1 \\ &\leqslant 2^{-{\epsilon^2}n} + 2^{k +H(p + \epsilon)n-n} && \mathbb{E}[1_{D(y)\neq m}] \leqslant 2^{k +H(p + \epsilon)n-n} \text{ (see above)} \\ &\leqslant 2^{-\delta n} \end{align}$$

by appropriately choosing the value of $$\delta$$. Since the above bound holds for each message, we have


 * $$\mathbb{E}_m \left [\mathbb{E}_E \left [\Pr_{e \in \text{BSC}_p} \left [D(E(m) + e) \neq m \right ] \right ] \right ] \leqslant 2^{-\delta n}.$$

Now we can change the order of summation in the expectation with respect to the message and the choice of the encoding function $$E$$. Hence:


 * $$\mathbb{E}_E \left [\mathbb{E}_m \left [\Pr_{e \in \text{BSC}_p} \left [D(E(m) + e) \neq m \right ] \right ] \right ] \leqslant 2^{-\delta n}.$$

Hence in conclusion, by the probabilistic method, there exist some encoding function $$E^{*}$$ and a corresponding decoding function $$D^{*}$$ such that


 * $$\mathbb{E}_m \left [\Pr_{e \in \text{BSC}_p} \left [D^{*}(E^{*}(m) + e)\neq m \right ] \right ] \leqslant 2^{-\delta n}.$$

At this point, the bound holds on average over the messages $$m$$. But we need the bound to hold for all messages $$m$$ simultaneously. For that, let us sort the $$2^k$$ messages by their decoding error probabilities. By Markov's inequality, each of the best $$2^{k-1}$$ messages has decoding error probability at most $$2\cdot 2^{-\delta n}$$. Thus, to ensure that the bound holds for every message, we trim off the worst $$2^{k-1}$$ messages from the sorted order. This gives another encoding function $$E'$$ with a corresponding decoding function $$D'$$ whose decoding error probability is at most $$2^{-\delta n + 1}$$, at essentially the same rate. Taking $$\delta'$$ equal to $$\delta - \tfrac{1}{n}$$ bounds the decoding error probability by $$2^{-\delta'n}$$. This expurgation process completes the proof.
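The Markov-inequality step behind expurgation can be illustrated numerically; in the sketch below the per-message error probabilities are synthetic, chosen only so that their average is about $$\delta$$:

```python
import random

rng = random.Random(1)
delta = 0.01
# Synthetic per-message decoding error probabilities (illustrative only).
errs = [rng.uniform(0, 2 * delta) for _ in range(1024)]
avg = sum(errs) / len(errs)

# Expurgation: sort the messages by error probability and keep the better half.
kept = sorted(errs)[: len(errs) // 2]

# Markov's inequality: at most half the messages can have error >= 2 * avg,
# so every kept message has error probability below twice the average.
```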

Converse of Shannon's capacity theorem
The converse of the capacity theorem states that $$1 - H(p)$$ is the best rate achievable over a binary symmetric channel.

The intuition behind the proof is that the number of likely error patterns grows rapidly as the rate grows beyond the channel capacity. The idea is that the sender generates messages of dimension $$k$$, while the channel $$\text{BSC}_p$$ introduces transmission errors. For a code of block length $$n$$, the number of typical error patterns is about $$2^{H(p + \epsilon)n}$$. The maximum number of messages is $$2^{k}$$, while the output of the channel has only $$2^{n}$$ possible values. If $$2^{k}2^{H(p + \epsilon)n} \ge 2^{n}$$, that is, if $$k \geq (1 - H(p + \epsilon))n$$, then the sets of likely outputs for distinct messages must overlap and two messages are likely to be confused; this is the case we want to avoid in order to keep the decoding error probability exponentially small.
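The counting argument can be sanity-checked with concrete numbers; a sketch with arbitrarily chosen $$n$$, $$p$$, and $$\epsilon$$:

```python
from math import log2

def Hb(x):
    # Binary entropy function in bits.
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

# 2^k messages, each with ~2^{H_b(p+eps) n} typical error patterns, must fit
# into 2^n channel outputs, so the rate k/n can be at most ~ 1 - H_b(p + eps).
n, p, eps = 1000, 0.1, 0.01
max_rate = 1 - Hb(p + eps)

# A rate slightly above this bound makes the message "clouds" overflow:
k_too_big = int((max_rate + 0.05) * n)
overflow = k_too_big + Hb(p + eps) * n > n
```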

Codes
In recent years, a great deal of work has been done to design explicit error-correcting codes that achieve the capacities of several standard communication channels. The motivation behind designing such codes is to relate the rate of the code to the fraction of errors it can correct.

The approach behind the design of codes that meet the channel capacity of the $$\text{BSC}$$ or the binary erasure channel $$\text{BEC}$$ has been to correct a small number of errors with high probability while achieving the highest possible rate. Shannon's theorem gives the best rate achievable over a $$\text{BSC}_{p}$$, but it does not exhibit any explicit codes that achieve that rate. Indeed, such codes are typically constructed to correct only a small fraction of errors with high probability, while achieving a very good rate. The first such code was due to George D. Forney in 1966; it is a concatenated code, built by composing two different kinds of codes.

Forney's code
Forney constructed a concatenated code $$C^{*} = C_\text{out} \circ C_\text{in}$$ to achieve the capacity of the noisy-channel coding theorem for $$\text{BSC}_p$$. In his code,

 * The outer code $$C_\text{out}$$ is a code of block length $$N$$ and rate $$1-\frac{\epsilon}{2}$$ over the field $$F_{2^k}$$, with $$k = O(\log N)$$. Additionally, we have a decoding algorithm $$D_\text{out}$$ for $$C_\text{out}$$ that can correct up to a $$\gamma$$ fraction of worst-case errors and runs in $$t_\text{out}(N)$$ time.
 * The inner code $$C_\text{in}$$ is a code of block length $$n$$, dimension $$k$$, and rate $$1 - H(p) - \frac{\epsilon}{2}$$. Additionally, we have a decoding algorithm $$D_\text{in}$$ for $$C_\text{in}$$ with decoding error probability at most $$\frac{\gamma}{2}$$ over $$\text{BSC}_p$$ that runs in $$t_\text{in}(k)$$ time.

For the outer code $$C_\text{out}$$, a Reed-Solomon code would have been the first code to come to mind. However, such a construction cannot be carried out in polynomial time here, which is why a binary linear code is used for $$C_\text{out}$$.

For the inner code $$C_\text{in}$$, we find a linear code by exhaustively searching over linear codes of block length $$n$$ and dimension $$k$$ whose rate meets the capacity of $$\text{BSC}_p$$, as guaranteed by the noisy-channel coding theorem.

The rate of the concatenated code is $$R(C^{*}) = R(C_\text{in}) \times R(C_\text{out}) = (1-\frac{\epsilon}{2}) ( 1 - H(p) - \frac{\epsilon}{2} ) \geq 1 - H(p)-\epsilon$$, which almost meets the $$\text{BSC}_p$$ capacity. We further note that the encoding and decoding of $$C^{*}$$ can be done in polynomial time with respect to $$N$$: encoding $$C^{*}$$ takes time $$O(N^{2})+O(Nk^{2}) = O(N^{2})$$, and the decoding algorithm described below takes time $$Nt_\text{in}(k) + t_\text{out}(N) = N^{O(1)}$$ as long as $$t_\text{out}(N) = N^{O(1)}$$ and $$t_\text{in}(k) = 2^{O(k)}$$.
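The rate bound is a one-line computation; a quick numerical check with an arbitrary choice of $$p$$ and $$\epsilon$$:

```python
from math import log2

def Hb(x):
    # Binary entropy function in bits.
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

p, eps = 0.1, 0.05
r_out = 1 - eps / 2            # rate of the outer code
r_in = 1 - Hb(p) - eps / 2     # rate of the inner code
rate = r_out * r_in            # rate of the concatenated code C*
# rate >= 1 - H(p) - eps, i.e. C* operates within eps of the BSC_p capacity.
```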

Decoding error probability
A natural decoding algorithm for $$C^{*}$$ is to:


 * Compute $$y_{i}^{\prime} = D_\text{in}(y_i)$$ for $$i \in \{1, \ldots, N\}$$
 * Execute $$D_\text{out}$$ on $$y^{\prime} = (y_1^{\prime} \ldots y_N^{\prime})$$

Note that each block of $$C_\text{in}$$ is treated as a symbol of $$C_\text{out}$$. Since the probability of error at any index $$i$$ for $$D_\text{in}$$ is at most $$\tfrac{\gamma}{2}$$ and the errors in $$\text{BSC}_p$$ are independent, the expected number of inner-decoding errors is at most $$\tfrac{\gamma N}{2}$$ by linearity of expectation. Applying the Chernoff bound, the probability of more than $$\gamma N$$ errors occurring is at most $$e^{-\frac{\gamma N}{6}}$$. Since the outer code $$C_\text{out}$$ can correct up to $$\gamma N$$ errors, this is the decoding error probability of $$C^{*}$$. Expressed asymptotically, this gives an error probability of $$2^{-\Omega(\gamma N)}$$. Thus the achieved decoding error probability of $$C^{*}$$ is exponentially small, as in the noisy-channel coding theorem.
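The argument above can be illustrated by simulation: each of the $$N$$ inner blocks fails independently with probability $$\gamma/2$$, and we count how often more than $$\gamma N$$ of them fail (the parameters below are illustrative):

```python
import random

rng = random.Random(0)
N, gamma = 1000, 0.1
trials = 1000

# Each of the N inner blocks is decoded wrongly, independently, with
# probability gamma / 2; the outer decoder fails only when more than
# gamma * N inner blocks are wrong.
outer_failures = 0
for _ in range(trials):
    wrong = sum(rng.random() < gamma / 2 for _ in range(N))
    if wrong > gamma * N:
        outer_failures += 1
# The Chernoff bound makes exceeding twice the expected number of inner
# failures exponentially unlikely in N, so outer_failures stays at zero here.
```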

We have given a general technique to construct $$C^{*}$$. For more detailed descriptions of $$C_\text{in}$$ and $$C_\text{out}$$, please see the references. Recently a few other codes have also been constructed to achieve the capacity; LDPC codes have been considered for this purpose because of their faster decoding time.

Applications
The binary symmetric channel can model a disk drive used for memory storage: the channel input represents a bit being written to the disk and the output corresponds to the bit later being read. Errors could arise from the magnetization flipping, background noise, or the writing head making a mistake. Other settings the binary symmetric channel can model include telephone or radio communication lines, and cell division, in which daughter cells carry DNA information from their parent cell.

This channel is often used by theorists because it is one of the simplest noisy channels to analyze. Many problems in communication theory can be reduced to a BSC. Conversely, being able to transmit effectively over the BSC can give rise to solutions for more complicated channels.