Oblivious RAM

An Oblivious RAM (ORAM) simulator is a compiler that transforms an algorithm so that the resulting algorithm preserves the input-output behavior of the original, while the distribution of the transformed algorithm's memory access patterns is independent of the memory access pattern of the original algorithm.

The use of ORAMs is motivated by the fact that an adversary can obtain nontrivial information about the execution of a program and the data that the program is using just by observing the pattern in which the program accesses various memory locations during its execution. An adversary can get this information even if the data in memory is encrypted.

This definition is suited for settings like protected programs running on unprotected shared memory, or clients that run programs on their own systems while accessing data previously stored on a remote server. The concept was formulated by Oded Goldreich and Rafail Ostrovsky in 1996.

Definition
A Turing machine (TM), a mathematical abstraction of a real computer (program), is said to be oblivious if, for any two inputs of the same length, the motions of the tape heads remain the same. Pippenger and Fischer proved that every TM with running time $$T(n)$$ can be made oblivious and that the running time of the oblivious TM is $$O(T(n)\log T(n))$$. A more realistic model of computation is the RAM model. In the RAM model of computation, there is a CPU that can execute the basic mathematical, logical, and control instructions. The CPU is also associated with a few registers and a physical random access memory, where it stores the operands of its instructions. The CPU also has instructions to read the contents of a memory cell and write a specific value to a memory cell. The definition of ORAMs captures a similar notion of obliviousness for memory accesses in the RAM model.

Informally, an ORAM is an algorithm at the interface of a protected CPU and the physical RAM such that it acts like a RAM to the CPU by querying the physical RAM for the CPU while hiding information about the actual memory access pattern of the CPU from the physical RAM. In other words, the distributions of the memory accesses of two programs that make the same number of memory accesses to the RAM are indistinguishable from each other. This description still makes sense if the CPU is replaced by a client with a small storage and the physical RAM is replaced with a remote server with a large storage capacity, where the data of the client resides.
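To make this interface concrete, the following minimal Python sketch (all names are illustrative, not from the literature) models a physical RAM that records the access trace an adversary would observe, together with the wrapper interface an ORAM would implement on top of it:

```python
# Illustrative sketch: the CPU sees an ordinary read/write memory, while the
# physical RAM sees only the accesses the ORAM simulator chooses to make.
class PhysicalRAM:
    def __init__(self, size):
        self.cells = [None] * size
        self.trace = []                      # what an adversary observes
    def read(self, addr):
        self.trace.append(('read', addr))
        return self.cells[addr]
    def write(self, addr, value):
        self.trace.append(('write', addr))
        self.cells[addr] = value

class ORAMInterface:
    """Acts like a RAM to the CPU while querying the physical RAM on its behalf."""
    def __init__(self, ram):
        self.ram = ram
    def read(self, addr):
        raise NotImplementedError            # concrete schemes fill this in
    def write(self, addr, value):
        raise NotImplementedError
```

The obliviousness requirement is then a constraint on the distribution of `trace`, not on the values stored in `cells`.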

The following is a formal definition of ORAMs. Let $$\Pi$$ denote a program requiring memory of size $$n$$ when executing on an input $$x$$. Suppose that $$\Pi$$ has instructions for basic mathematical and control operations in addition to two special instructions $$\mathsf{read}(l)$$ and $$\mathsf{write}(l,v)$$, where $$\mathsf{read}(l)$$ reads the value at location $$l$$ and $$\mathsf{write}(l,v)$$ writes the value $$v$$ to $$l$$. The sequence of memory cells accessed by a program $$\Pi$$ during its execution is called its memory access pattern and is denoted by $$\tilde{\Pi}(n,x)$$.

A polynomial-time algorithm $$C$$ is an Oblivious RAM (ORAM) compiler with computational overhead $$c(\cdot)$$ and memory overhead $$m(\cdot)$$ if, given $$n\in \mathbb{N}$$ and a deterministic RAM program $$\Pi$$ with memory size $$n$$, $$C$$ outputs a program $$\Pi'$$ with memory size $$m(n)\cdot n$$ such that for any input $$x$$, the running time of $$\Pi'(n, x)$$ is bounded by $$c(n)\cdot T$$, where $$T$$ is the running time of $$\Pi(n, x)$$, and there exists a negligible function $$\mu$$ such that the following properties hold:
 * Correctness: For any $$n \in \mathbb{N}$$ and any string $$x \in \{0, 1\}^*$$, with probability at least $$1- \mu(n)$$, $$\Pi(n, x) = \Pi'(n, x)$$.
 * Obliviousness: For any two programs $$\Pi_1, \Pi_2$$, any $$n \in \mathbb{N}$$ and any two inputs, $$x_1, x_2 \in \{0, 1\}^*$$ if $$|\tilde{\Pi}_1(n, x_1)| = |\tilde{\Pi}_2(n, x_2)|$$, then $${\tilde{\Pi}_1}'(n, x_1)$$ is $$\mu$$-close to $${\tilde{\Pi}_2}'(n, x_2)$$ in statistical distance, where $${\Pi_1}' = C(n, \Pi_1)$$ and $${\Pi_2}' = C(n, \Pi_2)$$.

Note that the above definition uses the notion of statistical security. One can also have a similar definition for the notion of computational security.

History of ORAMs
ORAMs were introduced by Goldreich and Ostrovsky, whose key motivation was to provide software protection from an adversary who can observe a program's memory access pattern (but not the contents of the memory).

The main result in this work is that there exists an ORAM compiler that uses $$O(n)$$ server space and incurs a running time overhead of $${O(\log^3 n)}$$ when making a program that uses $$n$$ memory cells oblivious. There are several attributes that need to be considered when comparing various ORAM constructions. The most important parameters of an ORAM construction's performance are the client-side space overhead, server-side space overhead, and the time overhead required to make one memory access. Based on these attributes, the construction of Asharov et al., called "OptORAMa", is the first optimal ORAM construction. It achieves $$O(1)$$ client storage, $$O(n)$$ server storage, and $$O(\log n)$$ access overhead, matching the known lower bounds.

Another important attribute of an ORAM construction is whether its access overhead is amortized or worst-case. Several earlier ORAM constructions have good amortized access overhead guarantees but $$\Omega(n)$$ worst-case access overheads, while later constructions achieve polylogarithmic worst-case computational overheads. Some of the early constructions were in the random oracle model, where the client assumes access to an oracle that behaves like a random function and returns consistent answers for repeated queries. Assuming the existence of one-way functions, access to the oracle can be replaced by a pseudorandom function whose seed is a secret key stored by the client. Later works aimed at removing this assumption completely while still achieving an access overhead of $$O(\log^3 n)$$.

While most of the earlier works focus on proving security computationally, more recent works use the stronger statistical notion of security.

One of the few known lower bounds on the access overhead of ORAMs is due to Goldreich et al., who show an $$\Omega(\log{n})$$ lower bound for ORAM access overhead, where $$n$$ is the data size. Another lower bound is by Larsen and Nielsen. There is also a conditional lower bound on the access overhead of ORAMs, due to Boyle et al., that relates this quantity to the size of sorting networks.

Trivial construction
A trivial ORAM simulator construction, for each read or write operation, reads from and writes to every single element in the array, performing the meaningful action only for the address specified in that operation. The trivial solution thus scans through the entire memory for each operation. This scheme incurs a time overhead of $$\Omega(n)$$ per memory operation, where $n$ is the size of the memory.
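A minimal Python sketch of this trivial construction (class and method names are illustrative); every logical operation touches all $n$ cells in the same fixed order, so the physical access pattern carries no information about the requested address:

```python
# Trivial linear-scan ORAM: every logical read/write reads and then writes
# back all n cells in the fixed order 0, 1, ..., n-1, so the physical access
# pattern is independent of the requested address.
class TrivialORAM:
    def __init__(self, n):
        self.mem = [None] * n            # physical memory (encrypted in practice)

    def access(self, op, addr, value=None):
        """op is 'read' or 'write'; returns the value read, if any."""
        result = None
        for i in range(len(self.mem)):
            cell = self.mem[i]           # physical read of cell i
            if i == addr:
                if op == 'read':
                    result = cell
                else:                    # 'write'
                    cell = value
            self.mem[i] = cell           # physical write-back of cell i
        return result
```

Each `access` call performs exactly $2n$ physical operations, which is the $$\Omega(n)$$ per-operation overhead mentioned above.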

A simple ORAM scheme
A simple version of a statistically secure ORAM compiler constructed by Chung and Pass is described below, along with an overview of a proof of its correctness. The compiler, on input $n$ and a program $$\Pi$$ with memory requirement $n$, outputs an equivalent oblivious program $$\Pi'$$.

If the input program $$\Pi$$ uses $r$ registers, the output program $$\Pi'$$ will need $$r+n/{\alpha}+\text{poly}\log{n}$$ registers, where $$\alpha>1$$ is a parameter of the construction. $$\Pi'$$ uses $$O(n \text{ poly} \log n)$$ memory and its (worst-case) access overhead is $$O(\text{poly}\log n)$$.

The ORAM compiler is very simple to describe. Suppose that the original program $$\Pi$$ has instructions for basic mathematical and control operations in addition to two special instructions $$\mathsf{read}(l)$$ and $$\mathsf{write}(l,v)$$, where $$\mathsf{read}(l)$$ reads the value at location $l$ and $$\mathsf{write}(l,v)$$ writes the value $v$ to $l$. The ORAM compiler, when constructing $$\Pi'$$, simply replaces each $$\mathsf{read}$$ and $$\mathsf{write}$$ instruction with the oblivious subroutines $$\mathsf{oread}$$ and $$\mathsf{owrite}$$ and keeps the rest of the program the same. This construction works even for memory requests arriving in an online fashion.



Memory organization of the oblivious program
The program $$\Pi'$$ stores a complete binary tree $T$ of depth $$d=\log (n/\alpha)$$ in its memory. Each node in $T$ is represented by a binary string of length at most $d$. The root is the empty string, denoted by $$\lambda$$. The left and right children of a node represented by the string $$\gamma$$ are $$\gamma 0$$ and $$\gamma 1$$, respectively. The program $$\Pi'$$ thinks of the memory of $$\Pi$$ as being partitioned into blocks, where each block is a contiguous sequence of memory cells of size $$\alpha$$. Thus, there are at most $$\lceil n /\alpha \rceil$$ blocks in total. In other words, the memory cell $r$ corresponds to block $$b=\lfloor r/\alpha \rfloor$$.



At any point in time, there is an association between the blocks and the leaves in $T$. To keep track of this association, $$\Pi'$$ also stores a data structure called a position map, denoted by $$Pos$$, using $$O(n/\alpha)$$ registers. This data structure stores, for each block $b$, the leaf of $T$ associated with $b$ in $$Pos(b)$$.

Each node in $T$ contains an array with at most $K$ triples. Each triple is of the form $$(b,Pos(b),v)$$, where $b$ is a block identifier and $v$ is the contents of the block. Here, $K$ is a security parameter and is $$O(\text{poly} \log n)$$.
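The layout above can be sketched in a few lines of Python (parameter values and helper names are illustrative); it builds the node labels of $T$, the empty buckets, and the position map:

```python
import math

# Illustrative sketch of the memory layout: a complete binary tree of depth
# d = log(n/alpha) whose nodes are labelled by binary strings, buckets of
# capacity K, and the position map Pos.
n, alpha, K = 64, 4, 8                     # memory size, block size, bucket capacity
num_blocks = n // alpha                    # assume alpha divides n and n/alpha is a power of 2
d = int(math.log2(num_blocks))             # depth of the tree T

labels = [""]                              # the root is the empty string
for depth in range(1, d + 1):
    labels += [format(i, f"0{depth}b") for i in range(2 ** depth)]

bucket = {lab: [] for lab in labels}       # each node holds at most K triples (b, Pos(b), v)
Pos = [None] * num_blocks                  # position map: block -> associated leaf (None = ⊥)

def block_of(r):
    """Memory cell r corresponds to block floor(r / alpha)."""
    return r // alpha
```

With $n = 64$ and $$\alpha = 4$$ there are 16 blocks, the tree has depth 4, and it contains $$2^{d+1}-1 = 31$$ nodes in total.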

Description of the oblivious program
The program $$\Pi'$$ starts by initializing its memory as well as its registers to $$\perp$$. Describing the subroutines $$\mathsf{owrite}$$ and $$\mathsf{oread}$$ is enough to complete the description of $$\Pi'$$. The $$\mathsf{owrite}$$ sub-routine is given below. The inputs to the sub-routine are a memory location $$l \in [n]$$ and the value $v$ to be stored at location $l$. It has three main phases, namely FETCH, PUT_BACK, and FLUSH.

input: a location $l$, a value $v$

Procedure FETCH: // Search for the required block.
  $$b\leftarrow \lfloor l/ \alpha \rfloor$$ // $b$ is the block containing $l$.
  $$i\leftarrow l \bmod \alpha$$ // $i$ is $l$'s component in block $b$.
  $$pos\leftarrow Pos(b)$$
  if $$pos =\perp$$ then $$pos\leftarrow_R [n/ \alpha]$$ // Set $pos$ to a uniformly random leaf in $T$.
  flag $$\leftarrow 0$$
  for each node $N$ on the path from the root to $pos$ do
    if $N$ has a triple of the form $$(b,pos,x)$$ then
      Remove $$(b,pos,x)$$ from $N$, store $x$ in a register, and write back the updated $N$ to $T$.
      flag $$\leftarrow 1$$
    else
      Write back $N$ to $T$.

Procedure PUT_BACK: // Add back the updated block at the root.
  $$pos'\leftarrow_R [n/ \alpha]$$ // Set $pos'$ to a uniformly random leaf in $T$.
  if flag $$=1$$ then
    Set $x'$ to be the same as $x$, except with $v$ at the $i$-th position.
  else
    Set $x'$ to be a block with $v$ at the $i$-th position and $$\perp$$'s everywhere else.
  if there is space left in the root then
    Add the triple $$(b,pos',x')$$ to the root of $T$.
  else
    Abort outputting overflow.

Procedure FLUSH: // Push the blocks present on a random path as far down as possible.
  $$pos^*\leftarrow_R [n/ \alpha]$$ // Set $$pos^*$$ to a uniformly random leaf in $T$.
  for each triple $$(b'',pos'',v'')$$ in the nodes on the path from the root to $$pos^*$$ do
    Push down this triple to the node $N$ that corresponds to the longest common prefix of $$pos''$$ and $$pos^*$$.
    if at any point some bucket is about to overflow then Abort outputting overflow.

The task of the FETCH phase is to look for the location $l$ in the tree $T$. Suppose $pos$ is the leaf associated with the block containing location $l$. For each node $N$ in $T$ on the path from root to $pos$, this procedure goes over all triples in $N$ and looks for the triple corresponding to the block containing $l$. If it finds that triple in $N$, it removes the triple from $N$ and writes back the updated state of $N$. Otherwise, it simply writes back the whole node $N$.

In the next phase, it updates the block containing $l$ with the new value $v$, associates that block with a freshly sampled uniformly random leaf of the tree, and writes back the updated triple to the root of $T$.

The last phase, which is called FLUSH, is an additional operation to release the memory cells in the root and other higher internal nodes. Specifically, the algorithm chooses a uniformly random leaf $$pos^*$$ and then tries to push down every node as much as possible along the path from root to $$pos^*$$. It aborts outputting an overflow if at any point some bucket is about to overflow its capacity.

The $$\mathsf{oread}$$ sub-routine is similar to $$\mathsf{owrite}$$. Its input is just a memory location $$l \in [n]$$, and it proceeds almost identically. In the FETCH phase, if it does not find a triple corresponding to location $l$, it returns $$\perp$$ as the value at location $l$. In the PUT_BACK phase, it writes back the same block that it read to the root, after associating it with a freshly sampled uniformly random leaf.
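Putting the three phases together, the following Python sketch (a simplified, illustrative implementation, not the authors' code; all class and variable names are ours) realizes the read and write subroutines with the FETCH, PUT_BACK, and FLUSH logic described above:

```python
import math
import random

# Simplified tree-based ORAM in the style described above. A shared helper
# runs the FETCH, PUT_BACK, and FLUSH phases for both subroutines.
class SimpleORAM:
    def __init__(self, n, alpha=4, K=16):
        self.n, self.alpha, self.K = n, alpha, K
        self.leaves = n // alpha                 # assume alpha | n, n/alpha a power of 2
        self.d = int(math.log2(self.leaves))     # tree depth
        self.Pos = [None] * self.leaves          # position map (None = ⊥)
        self.bucket = {}                         # node label -> list of triples (b, pos, x)

    def _path(self, leaf):
        """Node labels from the root (empty string) down to the given leaf."""
        lab = format(leaf, f"0{self.d}b")
        return [lab[:k] for k in range(self.d + 1)]

    def _access(self, op, l, v=None):
        b, i = l // self.alpha, l % self.alpha
        # FETCH: walk the path to Pos(b); remove the block's triple if present.
        pos = self.Pos[b]
        if pos is None:
            pos = random.randrange(self.leaves)  # uniformly random leaf
        x, found = None, False
        for node in self._path(pos):
            triples = self.bucket.setdefault(node, [])
            for t in triples:                    # scan the whole bucket
                if t[0] == b:
                    x, found = list(t[2]), True
                    triples.remove(t)
                    break
        # PUT_BACK: update the block, reassign it to a fresh random leaf, and
        # add it back at the root (an empty block if it was never written).
        if not found:
            x = [None] * self.alpha
        result = x[i]
        if op == 'write':
            x[i] = v
        pos2 = random.randrange(self.leaves)
        self.Pos[b] = pos2
        root = self.bucket.setdefault("", [])
        if len(root) >= self.K:
            raise OverflowError("overflow")      # abort outputting overflow
        root.append((b, pos2, tuple(x)))
        # FLUSH: push triples on a random root-to-leaf path down to the node
        # given by the longest common prefix of their leaf and pos*.
        star = format(random.randrange(self.leaves), f"0{self.d}b")
        for node in [star[:k] for k in range(self.d + 1)]:
            for t in list(self.bucket.get(node, [])):
                p_lab = format(t[1], f"0{self.d}b")
                k = 0
                while k < self.d and p_lab[k] == star[k]:
                    k += 1
                dest = star[:k]                  # longest common prefix
                if dest != node:
                    self.bucket[node].remove(t)
                    dlist = self.bucket.setdefault(dest, [])
                    if len(dlist) >= self.K:
                        raise OverflowError("overflow")
                    dlist.append(t)
        return result

    def owrite(self, l, v):
        self._access('write', l, v)

    def oread(self, l):
        return self._access('read', l)           # ⊥ (None) if never written
```

For example, after `oram = SimpleORAM(n=64)` and `oram.owrite(5, 'a')`, the call `oram.oread(5)` returns `'a'`, while reading a never-written location returns `None` (standing in for $$\perp$$).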

Correctness of the simple ORAM scheme
Let $C$ stand for the ORAM compiler that was described above. Given a program $$\Pi$$, let $$\Pi'$$ denote $$C(\Pi)$$. Let $$\Pi(n,x)$$ denote the execution of the program $$\Pi$$ on an input $x$ using $n$ memory cells. Also, let $$\tilde{\Pi}(n,x)$$ denote the memory access pattern of $$\Pi(n,x)$$. Let $$\mu$$ denote a function such that for any $$n \in \mathbb{N}$$, any program $$\Pi$$ and any input $$x \in \{0,1\}^*$$, the probability that $$\Pi'(n,x)$$ outputs an overflow is at most $$\mu(n)$$. The following lemma is easy to see from the description of $C$.


 * Equivalence Lemma: Let $$n \in \mathbb{N}$$ and $$x \in \{0,1\}^*$$. Given a program $$\Pi$$, with probability at least $$1 - \mu(n)$$, the output of $$\Pi'(n,x)$$ is identical to the output of $$\Pi(n,x)$$.

It is easy to see that each $$\mathsf{oread}$$ and $$\mathsf{owrite}$$ operation traverses root-to-leaf paths in $T$ chosen uniformly and independently at random. This fact implies that the distributions of the memory access patterns of any two programs that make the same number of memory accesses are indistinguishable, provided that neither overflows.


 * Obliviousness Lemma: Given two programs $$\Pi_1$$ and $$\Pi_2$$ and two inputs $$x_1,x_2 \in \{0,1\}^*$$ such that $$|\tilde{\Pi}_1(n,x_1)| = |\tilde{\Pi}_2(n,x_2)|$$, with probability at least $$1 - 2\mu(n)$$, the access patterns $$\tilde{\Pi}'_1(n,x_1)$$ and $$\tilde{\Pi}'_2(n,x_2)$$ are identical.

The following lemma completes the proof of correctness of the ORAM scheme.


 * Overflow Lemma: There exists a negligible function $$\mu$$ such that for every program $$\Pi$$, every $n$ and every input $x$, the program $$\Pi'(n,x)$$ outputs overflow with probability at most $$\mu(n)$$.

Computational and memory overheads
During each $$\mathsf{oread}$$ and $$\mathsf{owrite}$$ operation, two random root-to-leaf paths of $T$ are fully explored by $$\Pi'$$. This takes $$O(K\cdot\log (n/\alpha))$$ time. This is the computational overhead, and it is $$O(\text{poly}\log n)$$ since $K$ is $$O(\text{poly}\log n)$$.

The total memory used up by $$\Pi'$$ is equal to the size of $T$. Each triple stored in the tree has $$\alpha + 2$$ words in it and thus there are $$K(\alpha + 2)$$ words per node of the tree. Since the total number of nodes in the tree is $$O(n/\alpha)$$, the total memory size is $$O(nK)$$ words, which is $$O(n\text{ poly}\log n)$$. Hence, the memory overhead of the construction is $$O(\text{poly}\log n)$$.
