Holland's schema theorem

Holland's schema theorem, also called the fundamental theorem of genetic algorithms, is an inequality that results from coarse-graining an equation for evolutionary dynamics. The Schema Theorem says that short, low-order schemata with above-average fitness increase exponentially in frequency in successive generations. The theorem was proposed by John Holland in the 1970s. It was initially widely taken to be the foundation for explanations of the power of genetic algorithms. However, this interpretation of its implications has been criticized in several publications, where the Schema Theorem is shown to be a special case of the Price equation, with the schema indicator function as the macroscopic measurement.

A schema is a template that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, and hence form a topological space.

Description
Consider binary strings of length 6. The schema 1*10*1 describes the set of all strings of length 6 with 1's at positions 1, 3 and 6 and a 0 at position 4. The * is a wildcard symbol, which means that positions 2 and 5 can have a value of either 1 or 0. The order of a schema $$o(H)$$ is defined as the number of fixed positions in the template, while the defining length $$\delta(H)$$ is the distance between the first and last fixed positions. The order of 1*10*1 is 4 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function. Using the established methods and genetic operators of genetic algorithms, the schema theorem states that short, low-order schemata with above-average fitness increase exponentially in successive generations. Expressed as an equation:
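The schema quantities above are easy to compute directly. The following sketch (function names are my own, for illustration) checks membership and computes the order and defining length of the example schema:

```python
# Illustrative schema utilities for binary strings ('*' is the wildcard).
def matches(schema, s):
    """True if string s belongs to the schema."""
    return all(c == '*' or c == b for c, b in zip(schema, s))

def order(schema):
    """o(H): number of fixed (non-wildcard) positions."""
    return sum(c != '*' for c in schema)

def defining_length(schema):
    """delta(H): distance between the first and last fixed positions."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0]

H = "1*10*1"
print(order(H))              # 4
print(defining_length(H))    # 5
print(matches(H, "101001"))  # True: positions 2 and 5 may take any value
```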


$$\operatorname{E}(m(H,t+1)) \geq {m(H,t) f(H) \over a_t}[1-p].$$

Here $$m(H,t)$$ is the number of strings belonging to schema $$H$$ at generation $$t$$, $$f(H)$$ is the observed average fitness of schema $$H$$ and $$a_t$$ is the observed average fitness at generation $$t$$. The probability of disruption $$p$$ is the probability that crossover or mutation will destroy the schema $$H$$. Under the assumption that $$p_m \ll 1$$, it can be expressed as:
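As a small numeric illustration of the inequality, the sketch below (a hypothetical helper, not part of any library) evaluates the lower bound on the expected count, with the disruption probability $$p$$ taken as given:

```python
# Lower bound on E[m(H, t+1)] from the schema theorem.
def schema_bound(m, f_H, a_t, p):
    """m(H,t) * f(H)/a_t * (1 - p)."""
    return m * (f_H / a_t) * (1 - p)

# Example: 20 members of H, schema fitness 1.5x the population average,
# and a 10% chance of disruption -> the expected count still grows.
print(schema_bound(20, 1.5, 1.0, 0.1))  # 27.0
```

Whenever the schema's relative fitness advantage outweighs the disruption term, the expected count at least this bound grows from one generation to the next.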


$$p = {\delta(H) \over l-1}p_c + o(H) p_m$$

where $$o(H)$$ is the order of the schema, $$l$$ is the length of the code, $$p_m$$ is the probability of mutation and $$p_c$$ is the probability of crossover. So a schema with a shorter defining length $$\delta(H)$$ is less likely to be disrupted. An often misunderstood point is why the Schema Theorem is an inequality rather than an equality. The answer is in fact simple: the Theorem neglects the small, yet non-zero, probability that a string belonging to the schema $$H$$ will be created "from scratch" by mutation of a single string (or recombination of two strings) that did not belong to $$H$$ in the previous generation. Moreover, the expression for $$p$$ is clearly pessimistic: depending on the mating partner, recombination may not disrupt the schema even when a cross point is selected between the first and the last fixed position of $$H$$.
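The disruption probability can be sketched directly from the formula (parameter names are mine; the crossover and mutation rates below are arbitrary example values):

```python
# p = delta(H)/(l-1) * p_c + o(H) * p_m, valid under the assumption p_m << 1.
def disruption_probability(delta, o, l, p_c, p_m):
    """Probability that single-point crossover or per-bit mutation breaks H."""
    return (delta / (l - 1)) * p_c + o * p_m

# For the schema 1*10*1 (delta(H)=5, o(H)=4) on strings of length l=6,
# with crossover rate 0.7 and mutation rate 0.001 per bit:
p = disruption_probability(5, 4, 6, 0.7, 0.001)
print(round(p, 3))  # 0.704
```

Note how the crossover term dominates here: because the defining length spans nearly the whole string, almost every cross point falls between the first and last fixed positions.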

Limitation
The schema theorem holds under the assumption of a genetic algorithm that maintains an infinitely large population, but does not always carry over to (finite) practice: due to sampling error in the initial population, genetic algorithms may converge on schemata that have no selective advantage. This happens in particular in multimodal optimization, where a function can have multiple peaks: the population may drift to prefer one of the peaks, ignoring the others.

The reason that the Schema Theorem cannot explain the power of genetic algorithms is that it holds for all problem instances, and cannot distinguish between problems in which genetic algorithms perform poorly, and problems for which genetic algorithms perform well.