User:Boutet.n

This improvement comes from the observation that more than 50% of the cryptanalysis time is devoted to detecting false alarms. Checkpoints make it possible to detect many false alarms without having to reconstruct the chain from its starting point to its endpoint.

=Concept=

==Checkpoints==
The idea is to define a set of positions $$\alpha_i$$ in the chains to be checkpoints. When a chain reaches such a position, we compute the value of a given function $$G$$ for the corresponding value in the chain. The calculated value $$G(X_{j,\alpha_i})$$ is called a checkpoint and is stored along with the endpoint of the chain.

==Use of the checkpoints==
During the online phase, when we generate the values $$Y_i$$, we also calculate the value of $$G(Y_{\alpha_{i+s-t}})$$.

When we reach an endpoint, we first compare the checkpoint values. If they differ for at least one checkpoint, we know that it is a false alarm. If not, we have to regenerate the whole chain as in Hellman's Time-Memory Trade-Off method.

The function $$G$$ must be easy to compute and its output cheap to store. One simple choice is a function $$G(X)$$ that outputs the least significant bit of $$X$$. With such a function, we detect 50% of the false alarms passing through one checkpoint, 75% of those passing through two checkpoints, and so on.
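As an illustration, here is a minimal Python sketch of such a least-significant-bit checkpoint function and of how a single mismatch reveals a false alarm. The helper names and the use of random 32-bit chain values are assumptions made for this demo, not part of the original method.

```python
# Sketch of a least-significant-bit checkpoint function G and its
# false-alarm filtering power. Assumption: chain values are integers.
import random

def G(x):
    """Checkpoint function: least significant bit of the chain value."""
    return x & 1

def is_false_alarm(stored_checkpoints, online_checkpoints):
    """A mismatch at any checkpoint proves the match is a false alarm."""
    return any(s != o for s, o in zip(stored_checkpoints, online_checkpoints))

# Empirically, one checkpoint filters about 50% of false alarms, two
# checkpoints about 75%, matching the figures above.
random.seed(0)
trials = 10_000
for k in (1, 2):
    detected = sum(
        is_false_alarm(
            [G(random.getrandbits(32)) for _ in range(k)],
            [G(random.getrandbits(32)) for _ in range(k)],
        )
        for _ in range(trials)
    )
    print(f"{k} checkpoint(s): {detected / trials:.0%} of false alarms detected")
```

A false alarm slips past all checkpoints only when every stored bit happens to match, which occurs with probability $$2^{-k}$$ for $$k$$ one-bit checkpoints on an unrelated chain.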

=Theoretical analysis=

The time gain has been computed and is given in the article in reference.

We can also compute the trade-off of the checkpoint method. Let the memory cost of the rainbow method be $$M$$ and that of the checkpoint method be $$M'$$, and let $$T$$ and $$T'$$ be the corresponding cryptanalysis times. We write $$M'=\sigma_M M$$ and $$T'=\sigma_T T$$. The extra memory cost of the checkpoint method over the classic rainbow method is then given by $$\frac{M'}{M}-1=\sigma_M-1$$. In the same way, the time gain is given by $$1-\frac{T'}{T}=1-\sigma_T$$.

Given that $$T \propto \frac{N^2}{M^2}$$, spending the extra memory on additional chains yields a time gain of $$1-\sigma_T=1-\frac{1}{\sigma_M^2}$$. So instead of storing additional chains, we can use the memory to store checkpoints.

The numerical results are impressive: an additional $$0.89\%$$ of memory saves $$10.99\%$$ of the time, whereas the gain from storing extra chains in the same amount of memory is only $$1.76\%$$.
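The $$1.76\%$$ figure for extra chains can be reproduced from the formula above; a minimal sketch, assuming $$\sigma_M = 1.0089$$ (i.e. the same $$0.89\%$$ of extra memory):

```python
# Numeric check of the trade-off formula: with sigma_M = 1.0089, the
# time gain from spending the extra memory on more chains is only ~1.76%,
# far below the 10.99% obtained with checkpoints.
def time_gain_from_extra_chains(sigma_m):
    """Time gain 1 - 1/sigma_M^2, from T proportional to N^2 / M^2."""
    return 1 - 1 / sigma_m**2

print(f"{time_gain_from_extra_chains(1.0089):.2%}")  # → 1.76%
```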

=Some more improvements=

==Storing the chain endpoints==
During the generation of the lookup table, the function $$f$$ is composed of two parts: the encryption of the plaintext with the key, and the reduction of the ciphertext to the size of a potential key. As the result of the reduction is smaller than the ciphertext, it is cheaper to store the potential keys than the ciphertexts.
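This composition can be sketched as follows, assuming a toy 24-bit key space and a stand-in cipher `E`; in a real table, `E` would be the actual block cipher under attack, and all names here are illustrative.

```python
# Sketch of the chain-step function f = reduction ∘ encryption.
# Assumption: 24-bit keys, 32-bit ciphertexts, toy stand-in cipher E.
KEY_BITS = 24
PLAINTEXT = 0x123456  # fixed plaintext encrypted at every step

def E(key, plaintext):
    """Toy stand-in cipher producing a 32-bit ciphertext (not secure)."""
    return (key * 2654435761 + plaintext) & 0xFFFFFFFF

def reduce_ct(ciphertext, round_index=0):
    """Reduce the 32-bit ciphertext back into the 24-bit key space;
    the round index varies the reduction per column, as in rainbow chains."""
    return (ciphertext ^ round_index) & ((1 << KEY_BITS) - 1)

def f(key, round_index=0):
    """One chain step: encrypt, then reduce to a potential key."""
    return reduce_ct(E(key, PLAINTEXT), round_index)
```

Storing the 24-bit reduced key instead of the 32-bit ciphertext saves 8 bits per endpoint in this toy setting, which is exactly the point made above.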

As the endpoints have to be sorted, successive ones often share the same prefix. Another optimization is to remove a fixed-length prefix and replace it with an index table that gives the prefixes.
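A minimal sketch of this prefix truncation, assuming integer endpoints; the function names, bit sizes, and index layout are illustrative choices, not taken from the article.

```python
# Sketch of prefix truncation for sorted endpoints: store only the
# low-order suffix of each endpoint, plus an index table mapping each
# shared prefix to the position of its first suffix.

def compress_endpoints(sorted_endpoints, prefix_bits, total_bits):
    """Split each endpoint into (prefix, suffix); keep the suffixes and
    record where each distinct prefix starts."""
    suffix_bits = total_bits - prefix_bits
    mask = (1 << suffix_bits) - 1
    index, suffixes = {}, []
    for ep in sorted_endpoints:
        index.setdefault(ep >> suffix_bits, len(suffixes))
        suffixes.append(ep & mask)
    return index, suffixes

def decompress_endpoints(index, suffixes, suffix_bits):
    """Rebuild the sorted endpoint list from the index table."""
    items = sorted(index.items(), key=lambda kv: kv[1])
    starts = [start for _, start in items] + [len(suffixes)]
    out = []
    for (prefix, start), end in zip(items, starts[1:]):
        out.extend((prefix << suffix_bits) | s for s in suffixes[start:end])
    return out
```

With 16-bit endpoints and an 8-bit prefix, for example, the table stores 8-bit suffixes plus one index entry per distinct prefix, which is smaller whenever many endpoints share a prefix.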

==Storing the chain starting points==
As we are free to choose the starting points, it is possible to choose successive keys as starting points. Every starting point then has the same prefix, which needs to be stored only once. The table thus only needs to store the last bits of each starting point.
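The saving can be sketched in a few lines, with illustrative sizes (a shared prefix and 16 stored bits are assumptions for the demo):

```python
# Sketch: choosing successive keys as starting points means they all
# share the same high-order prefix, stored once for the whole table;
# only the low-order bits go into the table.
SUFFIX_BITS = 16
shared_prefix = 0x5A                      # stored once, not per chain
num_chains = 4
start_points = [(shared_prefix << SUFFIX_BITS) + i for i in range(num_chains)]
stored = [sp & ((1 << SUFFIX_BITS) - 1) for sp in start_points]  # last bits only
recovered = [(shared_prefix << SUFFIX_BITS) | s for s in stored]
assert recovered == start_points
```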

=See also=
 * Home: Time-Memory Trade-Off
 * Back: Rainbow chains
 * Next: Fingerprints

=External links=
 * LINK TO Art 3