Alias method

In computing, the alias method is a family of efficient algorithms for sampling from a discrete probability distribution, published in 1974 by Alastair J. Walker. That is, it returns integer values $1 ≤ i ≤ n$ according to some arbitrary discrete probability distribution $p_{i}$. The algorithms typically use $O(n \log n)$ or $O(n)$ preprocessing time, after which random values can be drawn from the distribution in $O(1)$ time.

Operation
Internally, the algorithm consults two tables, a probability table $U_{i}$ and an alias table $K_{i}$ (for $1 ≤ i ≤ n$). To generate a random outcome, a fair die is rolled to determine an index $i$ into the two tables. A biased coin is then flipped, choosing a result of $i$ with probability $U_{i}$, or $K_{i}$ otherwise (probability $1 − U_{i}$).

More concretely, the algorithm operates as follows:


 * 1) Generate a uniform random variate $0 ≤ x < 1$.
 * 2) Let $i = ⌊nx⌋ + 1$ and $y = nx + 1 − i$.  (This makes $i$ uniformly distributed on $\{1, 2, ..., n\}$ and $y$ uniformly distributed on $[0, 1)$.)
 * 3) If $y < U_{i}$, return $i$.  This is the biased coin flip.
 * 4) Otherwise, return $K_{i}$.
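The four steps above can be sketched in Python. This is an illustrative sketch, not a reference implementation: the tables `U` and `K` are assumed to be 0-indexed lists, so the article's 1-based index $i$ corresponds to `i - 1` here.

```python
import random

def alias_sample(U, K, rng=random.random):
    """Draw one outcome from alias tables U (probabilities) and K (aliases).

    Uses 0-based indexing; the article's 1-based i corresponds to i - 1 here.
    """
    n = len(U)
    x = rng()          # step 1: uniform variate in [0, 1)
    i = int(n * x)     # step 2: fair die roll, an index into the tables
    y = n * x - i      # step 2: leftover fraction, uniform on [0, 1)
    # steps 3 and 4: biased coin flip between i and its alias K[i]
    return i if y < U[i] else K[i]
```

Note that a single uniform variate `x` supplies both the die roll (its integer part after scaling) and the biased coin flip (its fractional part).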

An alternative formulation of the probability table, proposed by Marsaglia et al. as the square histogram method, avoids the computation of $y$ by instead checking the condition $x < V_{i} = (U_{i} + i − 1)/n$ in the third step.
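The square histogram variant can be sketched the same way. With 0-based indexing the precomputed threshold becomes `V[i] = (U[i] + i) / n`, the analogue of the article's $V_{i}$ (the function name is illustrative):

```python
import random

def square_histogram_sample(V, K, rng=random.random):
    """Square histogram lookup: compare x directly against V[i].

    V[i] = (U[i] + i) / n with 0-based i, so the fractional part y
    never needs to be computed separately.
    """
    n = len(V)
    x = rng()
    i = int(n * x)
    return i if x < V[i] else K[i]
```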

Table generation
The distribution may be padded with additional probabilities $p_{i} = 0$ to increase $n$ to a convenient value, such as a power of two.

To generate the two tables, first initialize $U_{i} = np_{i}$. While doing this, divide the table entries into three categories:
 * The "overfull" group, where $U_{i} > 1$,
 * The "underfull" group, where $U_{i} < 1$ and $K_{i}$ has not been initialized, and
 * The "exactly full" group, where $U_{i} = 1$ or $K_{i}$ has been initialized.

If $U_{i} = 1$, the corresponding value $K_{i}$ will never be consulted and is unimportant, but a value of $K_{i} = i$ is sensible. This also avoids problems if the probabilities are represented as fixed-point numbers which cannot represent $U_{i} = 1$ exactly.

As long as not all table entries are exactly full, repeat the following steps:
 * 1) Arbitrarily choose an overfull entry $U_{i} > 1$ and an underfull entry $U_{j} < 1$.  (If one of these exists, the other must, as well.)
 * 2) Allocate the unused space in entry $j$ to outcome $i$, by setting $K_{j} ← i$.
 * 3) Remove the allocated space from entry $i$ by changing $U_{i} ← U_{i} − (1 − U_{j}) = U_{i} + U_{j} − 1$.
 * 4) Entry $j$ is now exactly full.
 * 5) Assign entry $i$ to the appropriate category based on the new value of $U_{i}$.
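A minimal sketch of this construction in Python, assuming 0-based indices and probabilities that sum to 1 (it follows the steps above literally and is not numerically hardened against the rounding issues discussed below):

```python
def build_alias_tables(p):
    """Build alias tables (U, K) from a list of probabilities p.

    Uses 0-based indices; K[i] = i is the safe default for entries
    that end up exactly full.
    """
    n = len(p)
    U = [n * pi for pi in p]
    K = list(range(n))
    overfull = [i for i in range(n) if U[i] > 1]
    underfull = [i for i in range(n) if U[i] < 1]
    while overfull and underfull:
        i = overfull.pop()          # step 1: any overfull entry
        j = underfull.pop()         #         and any underfull entry
        K[j] = i                    # step 2: give j's unused space to i
        U[i] += U[j] - 1            # step 3: remove that space from i
        # step 4: entry j is now exactly full and is never revisited
        if U[i] > 1:                # step 5: re-categorize entry i
            overfull.append(i)
        elif U[i] < 1:
            underfull.append(i)
    return U, K
```

For example, `build_alias_tables([0.25, 0.75])` yields `U = [0.5, 1.0]` and `K = [1, 1]`: index 0 is returned only when the die rolls 0 and the coin succeeds, with total probability $(1/2)(1/2) = 1/4$ as required.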

Each iteration moves at least one entry to the "exactly full" category (and the last moves two), so the procedure is guaranteed to terminate after at most $n − 1$ iterations. Each iteration can be done in $O(1)$ time, so the table can be set up in $O(n)$ time.

Vose points out that floating-point rounding errors may cause the guarantee referred to in step 1 to be violated. If one category empties before the other, the remaining entries may have $U_{i}$ set to 1 with negligible error. The solution accounting for floating point is sometimes called the Walker-Vose method or the Vose alias method.

Because of the arbitrary choice in step 1, the alias structure is not unique.

As the lookup procedure is slightly faster if $y < U_{i}$ (because $K_{i}$ does not need to be consulted), one goal during table generation is to maximize the sum of the $U_{i}$. Doing this optimally turns out to be NP-hard, but a greedy algorithm comes reasonably close: rob from the richest and give to the poorest. That is, at each step choose the largest $U_{i}$ and the smallest $U_{j}$. Because this requires sorting the $U_{i}$, it requires $O(n \log n)$ time.
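The greedy pairing can be sketched with two priority queues, one for the overfull entries and one for the underfull ones (an illustrative sketch under the same 0-based-indexing assumption as before, not Walker's original formulation):

```python
import heapq

def build_alias_tables_greedy(p):
    """Alias tables built greedily: always pair the largest overfull
    entry ("richest") with the smallest underfull entry ("poorest")."""
    n = len(p)
    U = [n * pi for pi in p]
    K = list(range(n))
    rich = [(-u, i) for i, u in enumerate(U) if u > 1]  # max-heap via negation
    poor = [(u, i) for i, u in enumerate(U) if u < 1]   # min-heap
    heapq.heapify(rich)
    heapq.heapify(poor)
    while rich and poor:
        neg_u, i = heapq.heappop(rich)  # richest overfull entry
        _, j = heapq.heappop(poor)      # poorest underfull entry
        K[j] = i
        U[i] = -neg_u + U[j] - 1
        if U[i] > 1:
            heapq.heappush(rich, (-U[i], i))
        elif U[i] < 1:
            heapq.heappush(poor, (U[i], i))
    return U, K
```

The heap operations dominate, giving the $O(n \log n)$ cost noted above.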

Efficiency
Although the alias method is very efficient if generating a uniform deviate is itself fast, there are cases where it is far from optimal in terms of random bit usage. This is because it uses a full-precision random variate $x$ each time, even when only a few random bits are needed.

One case arises when the probabilities are particularly well balanced, so many $U_{i} = 1$. For these values of $i$, $K_{i}$ is not needed and generating $y$ is a waste of time. For example, if $p_{1} = p_{2} = 1/2$, then a 32-bit random variate $x$ could be used to generate 32 outputs, but the alias method will only generate one.

Another case arises when the probabilities are strongly unbalanced, so many $U_{i} ≈ 0$. For example, if $p_{1} = 0.999$ and $p_{2} = 0.001$, then the great majority of the time, only a few random bits are required to determine that case 1 applies. In such cases, the table method described by Marsaglia et al. is more efficient. If we make many choices with the same probability we can on average require much less than one unbiased random bit. Using arithmetic coding techniques we can approach the limit given by the binary entropy function.

Literature

 * Donald Knuth, The Art of Computer Programming, Vol 2: Seminumerical Algorithms, section 3.4.1.

Implementations

 * http://www.keithschwarz.com/darts-dice-coins/ Keith Schwarz: Detailed explanation, numerically stable version of Vose's algorithm, and link to Java implementation
 * https://jugit.fz-juelich.de/mlz/ransampl Joachim Wuttke: Implementation as a small C library.
 * https://gist.github.com/0b5786e9bfc73e75eb8180b5400cd1f8 Liam Huang's Implementation in C++
 * https://github.com/joseftw/jos.weightedresult/blob/develop/src/JOS.WeightedResult/AliasMethodVose.cs C# implementation of Vose's algorithm.
 * https://github.com/cdanek/KaimiraWeightedList C# implementation of Vose's algorithm without floating point instability.