Two-way string-matching algorithm

In computer science, the two-way string-matching algorithm is a string-searching algorithm, discovered by Maxime Crochemore and Dominique Perrin in 1991. It takes a pattern of size m, called a “needle”, preprocesses it in linear time O(m), producing information that can then be used to search for the needle in any “haystack” string, taking only linear time O(n) with n being the haystack's length.

The two-way algorithm can be viewed as a combination of the forward-going Knuth–Morris–Pratt algorithm (KMP) and the backward-running Boyer–Moore string-search algorithm (BM). Like those two, the 2-way algorithm preprocesses the pattern to find partially repeating periods and computes “shifts” based on them, indicating how far the search window may jump forward in the haystack after a mismatch.

Unlike BM and KMP, it uses only O(log m) additional space to store information about those partial repeats: the search pattern is split into two parts (its critical factorization), represented only by the position of that split. Being a number less than m, it can be represented in ⌈log₂ m⌉ bits. This is sometimes treated as "close enough to O(1) in practice", as the needle's size is limited by the size of addressable memory; the overhead is a number that can be stored in a single register, and treating it as O(1) is like treating the size of a loop counter as O(1) rather than log of the number of iterations. The actual matching operation performs at most 2n − m comparisons.

Breslauer later published two improved variants performing fewer comparisons, at the cost of storing additional data about the preprocessed needle:
 * The first one performs at most n + ⌊(n − m)/2⌋ comparisons, ⌈(n − m)/2⌉ fewer than the original. It must however store $\lceil \log_\varphi m \rceil$ additional offsets in the needle, using $O(\log^2 m)$ space.
 * The second adapts it to only store a constant number of such offsets, denoted c, but must perform n + ⌊(1/2 + ε)(n − m)⌋ comparisons, with $\varepsilon = \tfrac{1}{2}(F_{c+2} - 1)^{-1} = O(\varphi^{-c})$ (where $F_k$ is the $k$-th Fibonacci number) going to zero exponentially quickly as c increases.
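The trade-off in the second variant can be tabulated. A small sketch, assuming the usual Fibonacci convention $F_1 = F_2 = 1$ (the function names here are illustrative, not from any library):

```python
def fib(k):
    # k-th Fibonacci number, with fib(1) = fib(2) = 1.
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def epsilon(c):
    # Extra comparison fraction for c stored offsets:
    # epsilon = (1/2) * (F_{c+2} - 1)^(-1).
    return 0.5 / (fib(c + 2) - 1)

# epsilon shrinks roughly by a factor of 1/phi per extra offset:
# c = 1, 2, 3, 4  ->  0.5, 0.25, 0.125, ~0.0714
```

With c = 1 the bound degenerates to n + ⌊n − m⌋ = 2n − m, the original algorithm's comparison count, and each additional stored offset cuts the excess roughly by the golden ratio.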

The algorithm is considered fairly efficient in practice, being cache-friendly and using several operations that can be implemented in well-optimized subroutines. It is used by the C standard libraries glibc, newlib, and musl, to implement the memmem and strstr family of substring functions. As with most advanced string-search algorithms, the naïve implementation may be more efficient on small-enough instances; this is especially so if the needle isn't searched in multiple haystacks, which would amortize the preprocessing cost.

Critical factorization
Before we define critical factorization, we should define:
 * Factorization: a string is considered factorized when it is split into two parts. Suppose a string $x$ is split into two parts $(u, v)$, then $(u, v)$ is called a factorization of $x$.
 * Period: a period $p$ for a string $x$ is defined as a value such that for any integer $0 < i \le len(x) - p$, $x[i] = x[i + p]$. In other words, "$p$ is a period of $x$ if two letters of $x$ at distance $p$ always coincide". The minimum period of $x$ is a positive integer denoted as $p(x)$.
 * A repetition $w$ in $(u, v)$ is a string such that:
 * $w$ is a suffix of $u$ or $u$ is a suffix of $w$;
 * $w$ is a prefix of $v$ or $v$ is a prefix of $w$;
 * In other words, $w$ occurs on both sides of the cut with a possible overflow on either side. Each factorization trivially has at least one repetition, the string $vu$.
 * A local period is the length of a repetition in $(u, v)$. The smallest local period in $(u, v)$ is denoted as $r(u, v)$. For any factorization, $0 < r(u, v) ≤ len(x)$.
 * A critical factorization is a factorization $(u, v)$ of $x$ such that $r(u, v) = p(x)$. For a needle of length $m$ in an ordered alphabet, it can be computed in $2m$ comparisons, by computing the lexicographically larger of two ordered maximal suffixes, defined for order ≤ and ≥.
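These definitions can be made concrete with a brute-force check (illustrative only; the real algorithm never enumerates periods like this):

```python
def periods(x):
    # All periods p of x: letters at distance p always coincide.
    return [p for p in range(1, len(x) + 1)
            if all(x[i] == x[i + p] for i in range(len(x) - p))]

def local_period(x, split):
    # Smallest local period r(u, v) for u = x[:split], v = x[split:]:
    # the length of the shortest repetition straddling the cut, where
    # characters falling outside x (overflow) are unconstrained.
    n = len(x)
    for r in range(1, n + 1):
        if all(x[i] == x[i + r]
               for i in range(max(0, split - r), min(n - r, split))):
            return r
```

For example, "abaabaa" has minimal period $p(x) = 3$. The factorization ("ab", "aabaa") is critical, since its local period is also 3, while ("a", "baabaa") is not: the repetition "ba" straddles that cut, giving local period 2 < 3.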

The algorithm
The algorithm starts by computing a critical factorization of the needle as its preprocessing step. This step produces the index (starting point) of the periodic right half, and the period of this stretch. The suffix computation here follows the authors' formulation. It can alternatively be done with Duval's algorithm, which is simpler and still linear time, but slower in practice.

''Shorthand for inversion.''
function cmp(a, b)
    if a > b return 1
    if a = b return 0
    if a < b return -1

function maxsuf(n, rev)
    l ← len(n)
    p ← 1      ''currently known period.''
    k ← 1      ''index for period testing, 0 < k <= p.''
    j ← 0      ''index for maxsuf testing. greater than maxs.''
    i ← -1     ''the proposed starting index of maxsuf.''
    while j + k < l
        cmpv ← cmp(n[j + k], n[i + k])
        if rev
            cmpv ← -cmpv    ''invert the comparison.''
        if cmpv < 0
            ''Suffix (j+k) is smaller. Period is the entire prefix so far.''
            j ← j + k
            k ← 1
            p ← j - i
        else if cmpv = 0
            ''They are the same - we should go on.''
            if k = p
                ''We are done checking this stretch of p. Reset k.''
                j ← j + p
                k ← 1
            else
                k ← k + 1
        else
            ''Suffix is larger. Start over from here.''
            i ← j
            j ← j + 1
            p ← 1
            k ← 1
    return [i, p]

function crit_fact(n)
    [idx1, per1] ← maxsuf(n, false)
    [idx2, per2] ← maxsuf(n, true)
    if idx1 > idx2
        return [idx1, per1]
    else
        return [idx2, per2]
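As a sanity check, the preprocessing can be transcribed directly into Python (a sketch mirroring the pseudocode; the names maxsuf and crit_fact are the pseudocode's, not a library API). Note that the returned index i is the position just before the maximal suffix, so the suffix itself starts at i + 1:

```python
def cmp(a, b):
    # Three-way comparison, as in the pseudocode.
    return (a > b) - (a < b)

def maxsuf(n, rev):
    # Returns (i, p): the maximal suffix of n (under the normal order,
    # or the reversed order if rev) starts at i + 1; p is its period.
    l = len(n)
    p = 1      # currently known period
    k = 1      # index for period testing, 0 < k <= p
    j = 0      # index of the candidate suffix being tested
    i = -1     # position just before the proposed maximal suffix
    while j + k < l:
        cmpv = cmp(n[j + k], n[i + k])
        if rev:
            cmpv = -cmpv
        if cmpv < 0:
            # Candidate is smaller; period is the entire prefix so far.
            j += k
            k = 1
            p = j - i
        elif cmpv == 0:
            if k == p:
                # Done checking this stretch of p; reset k.
                j += p
                k = 1
            else:
                k += 1
        else:
            # Candidate suffix is larger; start over from here.
            i = j
            j += 1
            p = 1
            k = 1
    return i, p

def crit_fact(n):
    # Split position (after index l) and period of the right half.
    idx1, per1 = maxsuf(n, False)
    idx2, per2 = maxsuf(n, True)
    return (idx1, per1) if idx1 > idx2 else (idx2, per2)
```

For example, crit_fact("abaabaa") gives (1, 3): splitting after two characters ("ab" | "aabaa") is a critical factorization, with local period 3 equal to the string's minimal period.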

Matching proceeds by first comparing the right-hand side of the needle, and then the left-hand side if the right matches. Linear-time skipping is done using the period.

function match(n, h)
    nl ← len(n)
    hl ← len(h)
    [l, p] ← crit_fact(n)
    P ← {}                 ''set of matches.''
    ''Match the suffix.''
    ''Use a library function like memcmp, or write your own loop.''
    if n[0] ... n[l] = n[p] ... n[p + l]
        ''The needle is periodic with period p. Remember in s how many''
        ''leading characters are already known to match after a shift.''
        pos ← 0
        s ← 0
        while pos + nl ≤ hl
            i ← max(l + 1, s)
            while i < nl and n[i] = h[pos + i]
                i ← i + 1
            if i < nl
                pos ← pos + (i - l)
                s ← 0
            else
                ''The right half matched; check the left half down to s.''
                j ← l
                while j ≥ s and n[j] = h[pos + j]
                    j ← j - 1
                if j < s
                    add pos to P
                pos ← pos + p
                s ← nl - p
    else
        ''The needle is aperiodic: any mismatch allows a shift by at''
        ''least i - l, and a full match by a period lower bound q.''
        q ← max(l + 1, nl - l - 1) + 1
        pos ← 0
        while pos + nl ≤ hl
            i ← l + 1
            while i < nl and n[i] = h[pos + i]
                i ← i + 1
            if i < nl
                pos ← pos + (i - l)
            else
                j ← l
                while j ≥ 0 and n[j] = h[pos + j]
                    j ← j - 1
                if j < 0
                    add pos to P
                pos ← pos + q
    return P