Chvátal–Sankoff constants

In mathematics, the Chvátal–Sankoff constants are mathematical constants that describe the lengths of longest common subsequences of random strings. Although the existence of these constants has been proven, their exact values are unknown. They are named after Václav Chvátal and David Sankoff, who began investigating them in the mid-1970s.

There is one Chvátal–Sankoff constant $$\gamma_k$$ for each positive integer k, where k is the number of characters in the alphabet from which the random strings are drawn. These constants decrease in inverse proportion to the square root of k. However, some authors write "the Chvátal–Sankoff constant" to refer to $$\gamma_2$$, the constant defined in this way for the binary alphabet.

Background
A common subsequence of two strings S and T is a string whose characters appear in the same order (not necessarily consecutively) both in S and in T. The problem of computing a longest common subsequence has been well studied in computer science. It can be solved in polynomial time by dynamic programming; this basic algorithm has additional speedups for small alphabets (the Method of Four Russians), for strings with few differences, for strings with few matching pairs of characters, etc. This problem and its generalizations to more complex forms of edit distance have important applications in areas that include bioinformatics (in the comparison of DNA and protein sequences and the reconstruction of evolutionary trees), geology (in stratigraphy), and computer science (in data comparison and revision control).
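The basic dynamic-programming algorithm mentioned above can be sketched as follows; this is a minimal illustration in Python, and the function name `lcs_length` is our own choice rather than anything from the literature.

```python
def lcs_length(s, t):
    """Length of a longest common subsequence of s and t.

    Standard dynamic programming: the value for prefixes s[:i] and t[:j]
    is built row by row, in O(len(s) * len(t)) time and O(len(t)) space.
    """
    n = len(t)
    prev = [0] * (n + 1)  # DP row for the previous prefix of s
    for i in range(1, len(s) + 1):
        curr = [0] * (n + 1)  # DP row for the current prefix of s
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                curr[j] = prev[j - 1] + 1  # match: extend a common subsequence
            else:
                curr[j] = max(prev[j], curr[j - 1])  # skip a character of s or t
        prev = curr
    return prev[n]
```

For example, `lcs_length("ABCBDAB", "BDCABA")` returns 4 (one longest common subsequence is "BCBA").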

One motivation for studying the longest common subsequences of random strings, given already by Chvátal and Sankoff, is to calibrate the computations of longest common subsequences on strings that are not random. If such a computation returns a subsequence that is significantly longer than what would be obtained at random, one might infer from this result that the match is meaningful or significant.

Definition and existence
The Chvátal–Sankoff constants describe the behavior of the following random process. Given parameters n and k, choose two length-n strings S and T from the same k-symbol alphabet, with each character of each string chosen uniformly at random, independently of all the other characters. Compute a longest common subsequence of these two strings, and let $$\lambda_{n,k}$$ be the random variable whose value is the length of this subsequence. Then the expected value of $$\lambda_{n,k}$$ is (up to lower-order terms) proportional to n, and the kth Chvátal–Sankoff constant $$\gamma_k$$ is the constant of proportionality.

More precisely, the expected value $$\operatorname{E}[\lambda_{n,k}]$$ is superadditive: for all m and n, $$\operatorname{E}[\lambda_{m+n,k}]\ge \operatorname{E}[\lambda_{m,k}]+\operatorname{E}[\lambda_{n,k}]$$. This is because, if strings of length m + n are broken into substrings of lengths m and n, and the longest common subsequences of those substrings are found, they can be concatenated together to get a common subsequence of the whole strings. It follows from a lemma of Michael Fekete that the limit
 * $$\gamma_k = \lim_{n\to\infty} \frac{\operatorname{E}[\lambda_{n,k}]}{n}$$

exists, and equals the supremum of the values $$\operatorname{E}[\lambda_{n,k}]/n$$. These limiting values $$\gamma_k$$ are the Chvátal–Sankoff constants.
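The random process defined above is straightforward to simulate. The following Monte Carlo sketch (names such as `estimate_ratio` are illustrative, not standard) estimates $$\operatorname{E}[\lambda_{n,k}]/n$$ for given n and k:

```python
import random

def lcs_length(s, t):
    # Quadratic-time dynamic program for the LCS length.
    n = len(t)
    prev = [0] * (n + 1)
    for a in s:
        curr = [0] * (n + 1)
        for j, b in enumerate(t, start=1):
            curr[j] = prev[j - 1] + 1 if a == b else max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]

def estimate_ratio(n, k, trials, seed=0):
    """Monte Carlo estimate of E[lambda_{n,k}] / n.

    Draws pairs of uniformly random length-n strings over a k-symbol
    alphabet and averages the LCS length, divided by n.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = [rng.randrange(k) for _ in range(n)]
        t = [rng.randrange(k) for _ in range(n)]
        total += lcs_length(s, t)
    return total / (trials * n)
```

For k = 2 and moderate n, such estimates come out near 0.8, consistent with the numerical value $$\gamma_2 \approx 0.811$$ discussed below; since $$\gamma_k$$ is the supremum of the ratios, finite-n averages lie below the limit.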

Bounds
The exact values of the Chvátal–Sankoff constants remain unknown, but rigorous upper and lower bounds have been proven.

Because $$\gamma_k$$ is the supremum of the values $$\operatorname{E}[\lambda_{n,k}]/n$$, each of which depends only on a finite probability distribution, one way to prove rigorous lower bounds on $$\gamma_k$$ is to compute the exact values of $$\operatorname{E}[\lambda_{n,k}]$$; however, this method scales exponentially in n, so it can only be carried out for small values of n, leading to weak lower bounds. In his Ph.D. thesis, Vlado Dančík pioneered an alternative approach in which a deterministic finite automaton reads symbols of two input strings and produces a (long but not necessarily optimal) common subsequence of these inputs. The behavior of this automaton on random inputs can be analyzed as a Markov chain, whose steady state determines the rate at which it finds elements of the common subsequence for large values of n. This rate is necessarily a lower bound on the Chvátal–Sankoff constant. Using Dančík's method with an automaton whose state space buffers the most recent h characters from its two input strings, together with additional techniques for avoiding the expensive steady-state Markov chain analysis, George Lueker performed a computerized analysis with h = 15 that proved $$\gamma_2\ge 0.788071$$. Similar methods generalize to non-binary alphabets and yield lower bounds for larger values of k.
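The exact-enumeration method mentioned above can be sketched directly; since $$\gamma_k$$ is the supremum of the ratios $$\operatorname{E}[\lambda_{n,k}]/n$$, each exact value is itself a rigorous (if weak) lower bound. This is an illustrative sketch, not any published author's code, and its exponential cost is visible in the double loop over all $$k^{2n}$$ string pairs:

```python
from itertools import product

def lcs_length(s, t):
    # Quadratic-time dynamic program for the LCS length.
    n = len(t)
    prev = [0] * (n + 1)
    for a in s:
        curr = [0] * (n + 1)
        for j, b in enumerate(t, start=1):
            curr[j] = prev[j - 1] + 1 if a == b else max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]

def exact_ratio(n, k):
    """Exact E[lambda_{n,k}] / n by enumerating all k**(2*n) string pairs.

    Each value is a rigorous lower bound on gamma_k, but the running time
    grows as k**(2*n), so only tiny n are feasible.
    """
    total = sum(lcs_length(s, t)
                for s in product(range(k), repeat=n)
                for t in product(range(k), repeat=n))
    return total / (k ** (2 * n) * n)
```

For instance, `exact_ratio(1, 2)` is 0.5 (two random bits agree with probability 1/2) and `exact_ratio(2, 2)` is 0.5625, already showing how slowly these bounds approach $$\gamma_2\ge 0.788071$$.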

Lueker also used automata-theoretic methods to prove upper bounds on the Chvátal–Sankoff constants, again extending earlier results by computerized calculations. The upper bound he obtained was $$\gamma_2\le 0.826280$$. This result disproved a conjecture of J. Michael Steele that $$\gamma_2 = 2/(1+\sqrt 2)$$, because this value is greater than the upper bound. Non-rigorous numerical evidence suggests that $$\gamma_2$$ is approximately $$0.811$$, closer to the upper bound than the lower bound.

In the limit as k goes to infinity, the constants $$\gamma_k$$ decrease in inverse proportion to the square root of k. More precisely,
 * $$\lim_{k\to\infty} \gamma_k \sqrt k = 2.$$

Distribution of LCS lengths
There has also been research into the distribution of values of the longest common subsequence, generalizing the study of the expectation of this value. For instance, the standard deviation of the length of the longest common subsequence of random strings of length n is known to be proportional to the square root of n.
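The $$\sqrt n$$ scaling of the standard deviation can be observed empirically. The following sketch (our own illustration, with illustrative names) estimates the standard deviation at two string lengths; under the stated scaling, quadrupling n should roughly double the result, though Monte Carlo estimates are noisy:

```python
import random
import statistics

def lcs_length(s, t):
    # Quadratic-time dynamic program for the LCS length.
    n = len(t)
    prev = [0] * (n + 1)
    for a in s:
        curr = [0] * (n + 1)
        for j, b in enumerate(t, start=1):
            curr[j] = prev[j - 1] + 1 if a == b else max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]

def lcs_std(n, k, trials, seed=0):
    """Empirical standard deviation of the LCS length of random string pairs."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(trials):
        s = [rng.randrange(k) for _ in range(n)]
        t = [rng.randrange(k) for _ in range(n)]
        lengths.append(lcs_length(s, t))
    return statistics.pstdev(lengths)
```

Comparing, say, `lcs_std(50, 2, 200)` with `lcs_std(200, 2, 200)` gives a ratio in the vicinity of 2, consistent with standard deviation proportional to $$\sqrt n$$.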

One complication in performing this sort of analysis is that the random variables describing whether the characters at different pairs of positions match each other are not independent of each other. A more mathematically tractable simplification of the longest common subsequence problem replaces the equality test between symbols by independent random variables: each pair of positions is allowed to match with probability 1/k, and forbidden from matching with probability (k − 1)/k, independently of all other pairs. For this simplified model, the distribution of the longest common subsequence length has been shown to be controlled by the Tracy–Widom distribution.
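The simplified model described above can be simulated with the same dynamic-programming recurrence as ordinary LCS, replacing the character-equality test by an independent coin flip. This is a sketch under that reading of the model, with illustrative names:

```python
import random

def simplified_lcs(n, k, rng):
    """Longest chain of matches in the simplified model.

    Whether positions i (in the first string) and j (in the second) may be
    matched is an independent Bernoulli(1/k) coin flip rather than a test
    that two symbols are equal.  The usual LCS recurrence still applies.
    """
    dp = [0] * (n + 1)  # one DP row, updated in place
    for i in range(n):
        prev_diag = dp[0]       # value for (i-1, j-1)
        for j in range(1, n + 1):
            tmp = dp[j]         # value for (i-1, j)
            if rng.random() < 1.0 / k:       # positions i and j may match
                dp[j] = prev_diag + 1
            else:
                dp[j] = max(dp[j], dp[j - 1])
            prev_diag = tmp
    return dp[n]
```

In this model the Tracy–Widom result concerns the distribution of the returned length after centering and scaling; the sketch only samples the raw lengths. As a sanity check, with k = 1 every pair of positions matches and the result is exactly n.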