Correlation attack

Correlation attacks are a class of cryptographic known-plaintext attacks for breaking stream ciphers whose keystreams are generated by combining the output of several linear-feedback shift registers (LFSRs) using a Boolean function. Correlation attacks exploit a statistical weakness that arises from the specific Boolean function chosen for the keystream. While some Boolean functions are vulnerable to correlation attacks, stream ciphers generated using such functions are not inherently insecure.

Explanation
Correlation attacks become possible when a significant correlation exists between the output state of an individual LFSR in the keystream generator and the output of the Boolean function that combines the output states of all the LFSRs. These attacks are employed in combination with partial knowledge of the keystream, which is derived from partial knowledge of the plaintext; the two are compared using an XOR logic gate. Such a correlation allows an attacker to brute-force the key of the individual LFSR and the rest of the system separately. For instance, in a keystream generator where four 8-bit LFSRs are combined to produce the keystream, if one of the registers is correlated with the Boolean function's output, it can be brute-forced first, followed by the remaining three LFSRs together. The total attack complexity thus becomes $$2^8 + 2^{24}$$.

Compared to the cost of launching a brute-force attack on the entire system, with complexity $$2^{32}$$, this represents an attack effort saving factor of just under 256. If a second register is correlated with the function, the process may be repeated, decreasing the attack complexity to $$2^8 + 2^8 + 2^{16}$$ for an effort saving factor of just under 65,028.
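As a quick sanity check, the complexity figures above can be verified with a few lines of Python (the register sizes follow the four-8-bit-LFSR example):

```python
# Verify the attack-effort arithmetic for four 8-bit LFSRs (32-bit total key).
brute_force = 2 ** 32                       # try every combined key at once

one_correlated = 2 ** 8 + 2 ** 24           # break one register, then the rest
two_correlated = 2 ** 8 + 2 ** 8 + 2 ** 16  # break two registers, then the rest

print(brute_force / one_correlated)   # saving factor: just under 256
print(brute_force / two_correlated)   # saving factor: just under 65,028
```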

Geffe generator
One example is the Geffe generator, which consists of three LFSRs: LFSR-1, LFSR-2, and LFSR-3. Let these registers be denoted as: $$x_1$$, $$x_2$$, and $$x_3$$, respectively. Then, the Boolean function combining the three registers to provide the generator output is given by $$F(x_1, x_2, x_3) = (x_1 \wedge x_2) \oplus (\neg x_1 \wedge x_3)$$ (i.e. ($$x_1$$ AND $$x_2$$) XOR (NOT $$x_1$$ AND $$x_3$$)). There are $$2^3 = 8$$ possible values for the outputs of the three registers, and the value of this combining function for each of them is shown in the table below:

x1  x2  x3  F(x1, x2, x3)
0   0   0   0
0   0   1   1
0   1   0   0
0   1   1   1
1   0   0   0
1   0   1   0
1   1   0   1
1   1   1   1
Consider the output of the third register, $$x_3$$. The table above shows that in 6 of the 8 possible input combinations, the value of $$x_3$$ equals the value of the generator output, $$F(x_1,x_2,x_3)$$; that is, $$x_3 = F(x_1,x_2,x_3)$$ in 75% of all possible cases. Thus LFSR-3 is 'correlated' with the generator. This is a weakness that may be exploited as follows:
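The agreement counts above can be reproduced by exhaustively enumerating the combining function:

```python
# Enumerate the Geffe combiner F(x1,x2,x3) = (x1 AND x2) XOR (NOT x1 AND x3)
# over all 8 inputs and count how often each input agrees with the output.
from itertools import product

def F(x1, x2, x3):
    return (x1 & x2) ^ ((1 - x1) & x3)

agree = [0, 0, 0]
for x1, x2, x3 in product((0, 1), repeat=3):
    f = F(x1, x2, x3)
    for i, x in enumerate((x1, x2, x3)):
        agree[i] += (x == f)

print(agree)  # [4, 6, 6]: x1 agrees 50% of the time, x2 and x3 agree 75%
```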

Suppose we intercept the ciphertext $$c_1, c_2, c_3, \ldots, c_n$$ of a plaintext $$p_1, p_2, p_3, \ldots$$ which has been encrypted by a stream cipher using a Geffe generator as its keystream generator, i.e. $$c_i = p_i \oplus F(x_{1i}, x_{2i}, x_{3i})$$ for $$i = 1, 2, 3, \ldots, n$$, where $$x_{1i}$$ is the output of LFSR-1 at time $$i$$, etc. Suppose further that we know or can guess part of the plaintext, e.g. $$p_1, p_2, p_3, \ldots, p_{32}$$, the first 32 bits of the plaintext (corresponding to 4 ASCII characters of text). This is not improbable: if the plaintext is an XML file beginning with an XML declaration, for instance, its first 4 ASCII characters must be "<?xm". Similarly, many file formats and network protocols have very standard headers or footers. Given the intercepted $$c_1, c_2, c_3, \ldots, c_{32}$$ and our known/guessed $$p_1, p_2, p_3, \ldots, p_{32}$$, we may easily find $$F(x_{1i}, x_{2i}, x_{3i})$$ for $$i = 1, 2, 3, \ldots, 32$$ by XOR-ing the two together, recovering 32 consecutive bits of the generator output.
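Because XOR is its own inverse, recovering keystream bits from ciphertext and known plaintext is a one-liner. A minimal sketch with made-up bit sequences:

```python
# Recover keystream bits by XOR-ing ciphertext with known/guessed plaintext.
# The bit values here are arbitrary illustrative data.
known_plaintext = [0, 1, 1, 0, 1, 0, 0, 1]
keystream       = [1, 1, 0, 0, 1, 1, 0, 0]   # hypothetical generator output
ciphertext      = [p ^ k for p, k in zip(known_plaintext, keystream)]

# The attacker sees only ciphertext and (part of) the plaintext:
recovered = [c ^ p for c, p in zip(ciphertext, known_plaintext)]
print(recovered == keystream)  # True: c_i XOR p_i gives back the keystream
```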

This enables a brute-force search of the space of possible keys (initial values) for LFSR-3 (assuming we know the tapped bits of LFSR-3, an assumption in line with Kerckhoffs's principle). For any given key in the key space, we may quickly generate the first 32 bits of LFSR-3's output and compare them to our recovered 32 bits of the entire generator's output. Because we have established earlier that there is a 75% correlation between the output of LFSR-3 and the generator, a correctly guessed key for LFSR-3 should produce output matching approximately 24 of the first 32 bits of the generator output; an incorrectly guessed key should match only roughly half, or 16, of them. Thus we may recover the key for LFSR-3 independently of the keys of LFSR-1 and LFSR-2. At this stage we have reduced the problem of brute-forcing a system of 3 LFSRs to the problem of brute-forcing a single LFSR and then a system of 2 LFSRs. The amount of effort saved here depends on the lengths of the LFSRs. For realistic values, it is a very substantial saving and can make brute-force attacks very practical.
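The whole attack on LFSR-3 can be sketched end to end in a few dozen lines. The register lengths (3, 4, and 5 bits), tap positions, and secret keys below are illustrative choices for this sketch, not parameters of any real cipher; the stream length is one full combined period so the score gap is exact.

```python
# A toy end-to-end correlation attack on LFSR-3 of a Geffe generator.
from itertools import product

def lfsr(key, taps, n):
    """Yield n output bits from a Fibonacci LFSR seeded with the key bits."""
    state = list(key)
    for _ in range(n):
        yield state[0]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]

def geffe(k1, k2, k3, n):
    """Combine three LFSRs with F(x1,x2,x3) = (x1 AND x2) XOR (NOT x1 AND x3)."""
    return [(x1 & x2) ^ ((1 ^ x1) & x3)
            for x1, x2, x3 in zip(lfsr(k1, (0, 1), n),
                                  lfsr(k2, (0, 3), n),
                                  lfsr(k3, (0, 2), n))]

N = 3255  # one full combined period (7 * 15 * 31) of the three registers
secret3 = (1, 1, 0, 0, 1)
stream = geffe((1, 0, 1), (0, 1, 1, 1), secret3, N)  # "recovered" keystream

# Brute-force LFSR-3 alone: score every nonzero initial state by how often
# its output agrees with the generator output; only the correct key scores
# well above 50%.
scores = {k: sum(a == b for a, b in zip(lfsr(k, (0, 2), N), stream))
          for k in product((0, 1), repeat=5) if any(k)}
best = max(scores, key=scores.get)
print(best, scores[best] / N)  # the correct key, agreeing ~71% of the time
```

Note that the agreement rate here is a little below the idealized 75%, because a maximal-length LFSR outputs slightly more 1s than 0s over a full period (LFSR-1 outputs 1 in 4 of every 7 positions), which shifts the exact count; the correct key still stands far above every wrong guess.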

Observe in the table above that $$x_2$$ also agrees with the generator output 6 times out of 8, again a 75% correlation between $$x_2$$ and the generator output. We may therefore mount a brute-force attack against LFSR-2 independently of the keys of LFSR-1 and LFSR-3, leaving only LFSR-1 unbroken. Thus, we are able to break the Geffe generator with only as much effort as is required to brute-force 3 entirely independent LFSRs. This means the Geffe generator is a very weak generator and should never be used to generate stream cipher keystreams.

Note from the table above that $$x_1$$ agrees with the generator output 4 times out of 8—a 50% correlation. We cannot use this to brute force LFSR-1 independently of the others: the correct key will yield output that agrees with the generator output 50% of the time, but on average so will an incorrect key. This represents the ideal situation from a security perspective—the combining function $$F(x_1,x_2,x_3)$$ should be chosen so the correlation between each variable and the combining function's output is as close as possible to 50%. In practice, it may be difficult to find a function that achieves this without sacrificing other design criteria, e.g., period length, so a compromise may be necessary.

Clarifying the statistical nature of the attack
While the above example illustrates the relatively simple concepts behind correlation attacks well, it perhaps oversimplifies precisely how the brute forcing of individual LFSRs proceeds. Incorrectly guessed keys generate LFSR output that agrees with the generator output roughly 50% of the time because, given two random bit sequences of a given length, the probability of agreement between the sequences at any particular bit is 0.5. However, specific individual incorrect keys may well generate LFSR output that agrees with the generator output more or less often than exactly 50% of the time. This is particularly salient in the case of LFSRs whose correlation with the generator is not especially strong; for small enough correlations, it is certainly not outside the realm of possibility that an incorrectly guessed key will also lead to LFSR output that agrees with the desired number of bits of the generator output. It may then not be possible to identify the key to that LFSR uniquely, though it may still be possible to narrow it down to a small set of candidates, which is still a significant breach of the cipher's security.

Given more known plaintext, the situation changes substantially. With a megabyte of known plaintext, for example, an incorrect key may generate LFSR output that agrees with somewhat more than 512 kilobytes of the generator output, but is not likely to generate output that agrees with as much as 768 kilobytes of the generator output, as a correctly guessed key would. As a rule, the weaker the correlation between an individual register and the generator output, the more known plaintext is required to find that register's key with a high degree of confidence. Estimates of the length of known plaintext required for a given correlation can be calculated using the binomial distribution.
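A rough estimate of the required known-plaintext length can be sketched with the normal approximation to the binomial distribution; the separation threshold `z` (how many standard deviations apart the two agreement distributions should sit) is an assumption of this sketch:

```python
# Estimate how many known-plaintext bits are needed to separate the agreement
# count of the correct register key (per-bit probability p) from that of
# incorrect keys (per-bit probability 0.5), via a normal approximation.
from math import sqrt

def bits_needed(p, z=4.0):
    """Bits n such that the two agreement distributions sit ~z sigmas apart."""
    gap = p - 0.5
    sigma = sqrt(p * (1 - p)) + sqrt(0.25)  # per-sqrt(n) std devs of each count
    return int((z * sigma / gap) ** 2) + 1

for p in (0.75, 0.6, 0.55):
    print(p, bits_needed(p))  # weaker correlation -> more plaintext required
```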

Definition
The correlations exploited in the example attack on the Geffe generator are examples of what are called first order correlations: they are correlations between the value of the generator output and an individual LFSR. It is possible to define higher-order correlations in addition to these. For instance, even if a given Boolean function has no strong correlations with any of the individual registers it combines, a significant correlation may exist between the generator output and some Boolean function of two of the registers, e.g., $$x_1 \oplus x_2$$. This would be an example of a second order correlation. Third order correlations and higher can be defined in this way.
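A deliberately degenerate stand-in function (not a recommended design, and not from the text above) makes the distinction concrete: `G(x1,x2,x3) = x1 XOR x2` has no first order correlation with any single input, yet correlates perfectly with the XOR of a pair of inputs.

```python
# First-order vs. second-order correlation for a degenerate combiner.
from itertools import product

def G(x1, x2, x3):
    return x1 ^ x2  # ignores x3 entirely; an extreme illustrative example

inputs = list(product((0, 1), repeat=3))
outs = [G(*v) for v in inputs]

first_order = [sum(v[i] == o for v, o in zip(inputs, outs)) for i in range(3)]
pair = sum((v[0] ^ v[1]) == o for v, o in zip(inputs, outs))
print(first_order, pair)  # [4, 4, 4] and 8: 50% each alone, 100% for x1^x2
```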

Higher-order correlation attacks can be more powerful than first-order correlation attacks; however, this effect is subject to a law of diminishing returns. Consider a keystream generator consisting of eight 8-bit LFSRs combined by a single Boolean function. An attack exploiting an $$m$$-th order correlation costs roughly $$2^{8m} + 2^{8(8-m)}$$ operations: the leftmost term of the sum represents the size of the key space for the correlated registers, and the rightmost term represents the size of the key space for the remaining registers.
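The costs just described can be tabulated with a short loop:

```python
# Attack costs for eight 8-bit LFSRs: exploiting an m-th order correlation
# splits the 2**64 brute force into two smaller independent searches.
costs = {m: 2 ** (8 * m) + 2 ** (8 * (8 - m)) for m in range(1, 5)}
for m, cost in costs.items():
    print(m, cost, 2 ** 64 // cost)  # order, cost, saving vs. full brute force
```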

While higher-order correlations lead to more powerful attacks, they are also more difficult to find, as the space of available Boolean functions to correlate against the generator output increases as the number of arguments to the function does.

Terminology
A Boolean function $$F(x_1, \ldots, x_n)$$ of $$n$$ variables is said to be "$$m$$-th order correlation immune", or to have "$$m$$-th order correlation immunity" for some integer $$m$$, if no significant correlation exists between the function's output and any Boolean function of $$m$$ of its inputs. For example, a Boolean function that has no first-order or second-order correlations, but which does have a third-order correlation, exhibits second-order correlation immunity. Obviously, higher correlation immunity makes a function more suitable for use in a keystream generator (although this is not the only thing that needs to be considered).

Siegenthaler showed that the correlation immunity $$m$$ of a Boolean function of algebraic degree $$d$$ in $$n$$ variables satisfies $$m + d \leq n$$; for a given set of input variables, this means that a high algebraic degree will restrict the maximum possible correlation immunity. Furthermore, if the function is balanced then $$m \leq n - 1$$.
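For small functions, correlation immunity can be computed exhaustively: a function is $$m$$-th order immune exactly when every parity (XOR) of at most $$m$$ inputs is uncorrelated with the output, i.e. the corresponding Walsh coefficient vanishes. A sketch:

```python
# Compute the correlation-immunity order of a small Boolean function by
# checking Walsh coefficients at all input masks of increasing weight.
from itertools import product

def walsh(f, n, mask):
    """Walsh coefficient of f at the given input mask."""
    total = 0
    for x in product((0, 1), repeat=n):
        parity = 0
        for i in range(n):
            if mask >> i & 1:
                parity ^= x[i]
        total += (-1) ** (f(*x) ^ parity)
    return total

def immunity_order(f, n):
    order = 0
    for w in range(1, n + 1):
        masks = [m for m in range(1, 2 ** n) if bin(m).count("1") == w]
        if any(walsh(f, n, m) != 0 for m in masks):
            return order
        order = w
    return order

xor3 = lambda x1, x2, x3: x1 ^ x2 ^ x3                    # degree 1, balanced
geffe_f = lambda x1, x2, x3: (x1 & x2) ^ ((1 ^ x1) & x3)  # the Geffe combiner
print(immunity_order(xor3, 3), immunity_order(geffe_f, 3))  # 2 and 0
```

The XOR of all three inputs saturates Siegenthaler's inequality ($$m + d = 2 + 1 = 3 = n$$) and, as stated above, no function of $$n$$ variables can be $$n$$-th order immune; the Geffe combiner's first-order correlations give it immunity order 0.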

It follows that it is impossible for a function of $n$ variables to be $n$th order correlation immune. This also follows from the fact that any such function can be written using a Reed-Muller basis as a combination of XORs of the input functions.

Cipher design implications
Given the potentially extreme severity of a correlation attack's impact on a stream cipher's security, it is essential to test a candidate Boolean combining function for correlation immunity before deciding to use it in a stream cipher. However, it is important to note that high correlation immunity is a necessary, but not sufficient, condition for a Boolean function to be appropriate for use in a keystream generator. There are other issues to consider, for example, whether or not the function is balanced: whether it outputs as many, or roughly as many, 1s as it does 0s when all possible inputs are considered.
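Balance is trivial to check by enumeration; the Geffe combiner, despite its correlation flaws, happens to be balanced:

```python
# Balance check: a combining function should output 1 for exactly half of
# all possible inputs. Here we test the Geffe combiner over its 8 inputs.
from itertools import product

F = lambda x1, x2, x3: (x1 & x2) ^ ((1 ^ x1) & x3)
ones = sum(F(*x) for x in product((0, 1), repeat=3))
print(ones, "of", 2 ** 3, "inputs give output 1")  # 4 of 8: balanced
```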

Research has been conducted into methods for easily generating Boolean functions of a given size which are guaranteed to have at least some particular order of correlation immunity. This research has uncovered links between correlation immune Boolean functions and error correcting codes.