Word RAM

In theoretical computer science, the word RAM (word random-access machine) model is a model of computation in which a random-access machine performs arithmetic and bitwise operations on machine words of $w$ bits. Michael Fredman and Dan Willard introduced it in 1990 to model the capabilities of programming languages like C.

Model
The word RAM model is an abstract machine similar to a random-access machine, but with a finite word length. It operates on words of up to $w$ bits, meaning it can store integers of value up to $$2^w-1$$. The model assumes that the word size matches the problem size, that is, for a problem of size $n$, $$w \ge \log n$$; this makes the word RAM a transdichotomous model. Both arithmetic operations and bitwise operations, including logical shifts, take constant time (the precise instruction set assumed by an algorithm or proof using the model may vary).
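As an illustration of the constant-time word operations the model permits, the classic SWAR ("SIMD within a register") population count sums the set bits of a word with a fixed number of arithmetic and bitwise instructions, regardless of which bits are set. This is a sketch, not part of the formal model; the choice $w = 64$ and the function name are ours, and the masking simulates fixed-width words on top of Python's arbitrary-precision integers:

```python
MASK64 = (1 << 64) - 1  # simulate w = 64 bit words in Python


def popcount64(x: int) -> int:
    """Count the set bits of a 64-bit word in O(1) word operations (SWAR)."""
    x &= MASK64
    x = x - ((x >> 1) & 0x5555555555555555)                          # 2-bit field sums
    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333)   # 4-bit field sums
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F                          # 8-bit field sums
    return ((x * 0x0101010101010101) & MASK64) >> 56                 # fold bytes into top byte
```

Tricks of this kind, applied to many elements packed into one word, are the basic tool behind the sorting and predecessor results below.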

Algorithms and data structures
In the word RAM model, integer sorting can be done fairly efficiently. Yijie Han and Mikkel Thorup gave a randomized algorithm that sorts $n$ integers in expected time (in Big O notation) $$O(n \sqrt{\log \log n})$$, and Han also gave a deterministic variant with running time $$O(n \log \log n)$$.
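The Han and Han–Thorup algorithms are intricate; a much simpler word RAM sorting method in the same spirit is least-significant-digit radix sort, which beats comparison sorting by exploiting word-sized integers as array indices. The sketch below (ours, not the cited algorithms) sorts $n$ integers of $w$ bits in $O(n \cdot w / b)$ time for digit size $b$:

```python
def radix_sort(a, w=64, digit_bits=8):
    """LSD radix sort of w-bit non-negative integers.

    Runs w / digit_bits passes; each pass distributes the n keys into
    2**digit_bits buckets by one digit, using the digit as an array index
    (a constant-time word RAM operation).
    """
    mask = (1 << digit_bits) - 1
    for shift in range(0, w, digit_bits):
        buckets = [[] for _ in range(1 << digit_bits)]
        for x in a:
            buckets[(x >> shift) & mask].append(x)
        a = [x for bucket in buckets for x in bucket]  # stable concatenation
    return a
```

With $b = \Theta(\log n)$ this already gives $O(n \cdot w / \log n)$ time, which is $O(n)$ whenever $w = O(\log n)$; the cited algorithms improve on this for larger word sizes.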

The dynamic predecessor problem is also commonly analyzed in the word RAM model, and was the original motivation for the model. Dan Willard used y-fast tries to solve it in $$O(\log w)$$ time or, equivalently, $$O(\log\log U)$$ time, where $U$ is an upper bound on the values stored (a value below $U$ fits in $w = \log U$ bits). Michael Fredman and Willard also solved the problem using fusion trees in $$O(\log_w n)$$ time. Using exponential search trees, a query can be performed in $$O(\sqrt{\log n / \log\log n})$$ time.

Additional results in the word RAM model are listed in the article on range searching.

Lower bounds applicable to word RAM algorithms are often proved in the cell-probe model.