
Concrete Implementations of Prefix Sum Algorithms
An implementation of a parallel prefix sum algorithm, like other parallel algorithms, has to take the parallelisation architecture of the platform into account. More specifically, there are algorithms which are adapted for platforms working on shared memory as well as algorithms which are well suited for platforms using distributed memory, relying on message passing as the only form of inter-process communication.

Shared Memory: Two-Level Algorithm
The following algorithm assumes a shared memory machine model; all processing elements (PEs) have access to the same memory. A version of this algorithm is implemented in the Multi-Core Standard Template Library (MCSTL), a parallel implementation of the C++ standard template library which provides parallel versions of various algorithms.

In order to concurrently calculate the prefix sum over $$n$$ data elements with $$p$$ processing elements, the data is divided into $$p+1$$ blocks, each containing $$\frac n {p+1}$$ elements (for simplicity we assume that $$p+1$$ divides $$n$$).

Note that although the algorithm divides the data into $$p+1$$ blocks, only $$p$$ processing elements run in parallel at a time.

In a first sweep, each PE calculates a local prefix sum for its block. The last block does not need to be calculated, since these prefix sums are only used as offsets to the prefix sums of succeeding blocks, and the last block has no successor.

The $$p$$ offsets which are stored in the last position of each block are accumulated in a prefix sum of their own and stored in their succeeding positions. If $$p$$ is small, it is faster to do this sequentially; for a large $$p$$, this step could be done in parallel as well.

A second sweep is performed. This time the first block does not have to be processed, since it does not need to account for the offset of a preceding block. In this sweep, however, the last block is included instead, and the prefix sums of each block are calculated taking the prefix sum block offsets calculated in the previous sweep into account.
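The two sweeps can be sketched as follows. This is a sequential simulation written for illustration, not the MCSTL implementation: each loop over blocks stands in for $$p$$ PEs working in parallel, and the function name and structure are our own.

```python
def two_level_prefix_sum(data, p):
    """Inclusive prefix sum over `data`, simulating p PEs.

    The data is split into p + 1 blocks; as in the text we assume
    for simplicity that p + 1 divides len(data).
    """
    n = len(data)
    assert n % (p + 1) == 0
    b = n // (p + 1)                 # block size
    out = list(data)

    # First sweep (parallel over p PEs): local prefix sums for
    # blocks 0..p-1; the last block is skipped.
    for i in range(p):
        for j in range(i * b + 1, (i + 1) * b):
            out[j] += out[j - 1]

    # Sequential step: prefix sum over the p block totals, giving
    # the offset each succeeding block has to add.
    offsets = [0] * (p + 1)
    for i in range(1, p + 1):
        offsets[i] = offsets[i - 1] + out[i * b - 1]

    # Second sweep (parallel over p PEs): blocks 1..p-1 already hold
    # local prefix sums and only add their offset; the last block
    # computes its prefix sum now, starting from its offset.
    for i in range(1, p):
        for j in range(i * b, (i + 1) * b):
            out[j] += offsets[i]
    start = p * b
    out[start] += offsets[p]
    for j in range(start + 1, n):
        out[j] += out[j - 1]
    return out
```

For example, `two_level_prefix_sum(list(range(1, 13)), 3)` splits twelve elements into four blocks of three and yields the same result as a sequential running sum.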

Distributed Memory: Hypercube Algorithm
The Hypercube Prefix Sum Algorithm is well adapted for distributed memory platforms and works with the exchange of messages between the processing elements. It assumes that $$p=2^d$$ processing elements (PEs) participate in the algorithm, equal to the number of corners in a $$d$$-dimensional hypercube.

Throughout the algorithm, each PE is seen as a corner of a hypothetical hypercube, with knowledge of the total prefix sum $$\sigma$$ of its hypercube as well as the prefix sum $$x$$ of all elements up to itself (according to the ordered indices among the PEs).

 * The algorithm starts by assuming every PE is the single corner of a zero-dimensional hypercube, and therefore $$\sigma$$ and $$x$$ are equal to the local prefix sum of its own elements.
 * The algorithm continues by unifying hypercubes which are adjacent along one dimension. During each unification, $$\sigma$$ is exchanged and aggregated between the two hypercubes, which keeps the invariant that all PEs at corners of this new hypercube store the total prefix sum of the newly unified hypercube in their variable $$\sigma$$. However, only the hypercube containing the PEs with the higher indices also adds the received $$\sigma$$ of the lower-indexed hypercube to their local variable $$x$$, keeping the invariant that $$x$$ only stores the value of the prefix sum of all elements at PEs with indices smaller than or equal to their own index.

In a $$d$$-dimensional hypercube with $$2^d$$ PEs at the corners, the algorithm has to be repeated $$d$$ times for the $$2^d$$ zero-dimensional hypercubes to be unified into one $$d$$-dimensional hypercube.

Assuming a duplex communication model where the $$\sigma$$ of two adjacent PEs in different hypercubes can be exchanged in both directions in one communication step, this means $$d = \log_2 p$$ communication startups.
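The $$d$$ unification rounds can be sketched with a sequential simulation, in which one loop iteration over all PEs stands in for one parallel communication round; the function name and return convention are our own, and the message exchange is modelled by reading the partner's $$\sigma$$ directly.

```python
def hypercube_prefix_sum(vals):
    """Simulate the hypercube algorithm for p = 2**d PEs.

    PE i starts with local value vals[i]; after d rounds, x[i] holds
    the prefix sum up to and including PE i, and every sigma[i] holds
    the total sum of the (then single, d-dimensional) hypercube.
    """
    p = len(vals)
    d = p.bit_length() - 1
    assert p == 1 << d, "p must be a power of two"
    sigma = list(vals)            # total of PE i's current hypercube
    x = list(vals)                # prefix sum up to PE i

    for k in range(d):            # one communication round per dimension
        new_sigma = [0] * p
        for i in range(p):        # all PEs act in parallel in round k
            partner = i ^ (1 << k)          # neighbour along dimension k
            new_sigma[i] = sigma[i] + sigma[partner]
            if i & (1 << k):                # i is in the higher-index half
                x[i] += sigma[partner]      # add the lower hypercube's sigma
        sigma = new_sigma
    return x, sigma[0]
```

For instance, with eight PEs holding the values 1 through 8, the simulation performs $$d = 3$$ rounds and every PE ends up knowing the total $$\sigma = 36$$.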

Large Message Sizes: Pipelined Binary Tree
The Pipelined Binary Tree Algorithm is another algorithm for distributed memory platforms which is particularly well suited for large message sizes.

Like the hypercube algorithm, it assumes a special communication structure. The processing elements (PEs) are hypothetically arranged in a binary tree (e.g. a Fibonacci tree) with infix numeration according to their index among the PEs. Communication on such a tree always occurs between parent and child nodes.

The infix numeration ensures that for any given $$PE_j$$, the indices of all nodes reachable by its left subtree $$[l \dots j-1]$$ are smaller than $$j$$ and the indices $$[j+1 \dots r]$$ of all nodes in the right subtree are greater than $$j$$. The index of the parent is greater than any of the indices in $$PE_j$$'s subtree if $$PE_j$$ is a left child, and smaller if $$PE_j$$ is a right child. This allows for the following reasoning:
 * The local prefix sum $$\oplus[l \dots j-1]$$ of the left subtree has to be aggregated to calculate $$PE_j$$'s local prefix sum $$\oplus[l \dots j]$$.
 * The local prefix sum $$\oplus[j+1 \dots r]$$ of the right subtree has to be aggregated to calculate the local prefix sum of higher-level PEs $$PE_h$$ which are reached on a path containing a left child connection (which means $$h > j$$).
 * The total prefix sum $$\oplus[0 \dots j]$$ of $$PE_j$$ is necessary to calculate the total prefix sums in the right subtree (e.g. $$\oplus[0 \dots j \dots r]$$ for the highest-index node in the subtree).
 * $$PE_j$$ needs to include the total prefix sum $$\oplus[0 \dots l-1]$$ of the first higher-level node which is reached via an upward path including a right child connection in order to calculate its total prefix sum.

Note the distinction between subtree-local and total prefix sums. Points two, three and four may suggest a circular dependency, but this is not the case. Lower-level PEs might require the total prefix sum of higher-level PEs to calculate their total prefix sum, but higher-level PEs only require subtree-local prefix sums to calculate theirs. The root node, as the highest-level node, only requires the local prefix sum of its left subtree to calculate its own prefix sum. In general, each PE on the path from $$PE_0$$ to the root PE only requires the local prefix sum of its left subtree to calculate its own prefix sum, whereas every node on the path from $$PE_{p-1}$$ (the last PE) to $$PE_\text{root}$$ requires the total prefix sum of its parent to calculate its own total prefix sum.

This leads to a two phase algorithm:

Upward phase: Each $$PE_j$$ propagates the subtree-local prefix sum $$\oplus[l \dots j \dots r]$$ to its parent.

Downward phase: Each $$PE_j$$ propagates the exclusive total prefix sum $$\oplus[0 \dots l-1]$$ (exclusive of $$PE_j$$ as well as of the PEs in its left subtree), i.e. the total prefix sum of all lower-index PEs which are not included in the addressed subtree, to the lower-level PEs in its left child subtree, and propagates the inclusive prefix sum $$\oplus[0 \dots j]$$ to its right child subtree.

Note that the algorithm is run in parallel at each PE and the PEs will block upon receive until their children/parents provide them with packets.
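The two phases can be sketched, without pipelining, as a sequential simulation in which each recursive call stands in for one PE's communication with its parent or children; the balanced infix-numbered tree (root at the middle index) and the function names are our own choices for illustration.

```python
def tree_prefix_sum(vals):
    """Simulate the two-phase binary tree algorithm over p PEs."""
    p = len(vals)
    total = [0] * p   # subtree-local sums gathered in the upward phase
    out = [0] * p     # total prefix sums filled in the downward phase

    def upward(l, r):
        # Each PE sends the subtree-local sum of [l..r] to its parent.
        if l > r:
            return 0
        j = (l + r) // 2                  # infix numeration: root in the middle
        total[j] = upward(l, j - 1) + vals[j] + upward(j + 1, r)
        return total[j]

    def downward(l, r, excl):
        # `excl` is the total prefix sum of all PEs left of subtree [l..r],
        # received from the parent.
        if l > r:
            return
        j = (l + r) // 2
        left_sum = total[(l + j - 1) // 2] if l <= j - 1 else 0
        out[j] = excl + left_sum + vals[j]   # total prefix sum up to PE j
        downward(l, j - 1, excl)             # left child: exclusive sum
        downward(j + 1, r, out[j])           # right child: inclusive sum
    upward(0, p - 1)
    downward(0, p - 1, 0)
    return out
```

Note how the downward phase mirrors the reasoning above: the left child receives the same exclusive prefix sum as its parent, while the right child receives the parent's inclusive prefix sum.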

Pipelining
If the message $$m$$ of length $$n$$ can be divided into $$k$$ packets and the operator $$\oplus$$ can be applied to each of the corresponding message packets separately, pipelining is possible.

If the algorithm is used without pipelining, only two levels of the binary tree (the sending PEs and the receiving PEs) are ever at work while all other PEs are waiting. If there are $$p$$ processing elements and a balanced binary tree is used, the tree has $$\log_2 p$$ levels; the length of the path from $$PE_0$$ to $$PE_\text{root}$$ is therefore $$\log_2 p - 1$$, which represents the maximum number of non-parallel communication operations during the upward phase. Likewise, the communication on the downward path is also limited to $$\log_2 p - 1$$ startups. Assuming a communication startup time of $$T_\text{start}$$ and a bytewise transmission time of $$T_\text{byte}$$, upward and downward phase together are limited to $$(2\log_2 p - 2)(T_\text{start} + n \cdot T_\text{byte})$$ in a non-pipelined scenario.

Upon division into $$k$$ packets, each of size $$\frac{n}{k}$$, which are sent separately, the first packet still needs $$(\log_2 p - 1)(T_\text{start} + \frac{n}{k} \cdot T_\text{byte})$$ to be propagated to $$PE_\text{root}$$ as part of a local prefix sum, and this will occur again for the last packet if $$k > \log_2 p$$. However, in between, all the PEs along the path can work in parallel, and every third communication operation (receive left, receive right, send to parent) sends a packet to the next level, so that one phase can be completed in $$2\log_2 p - 1 + 3(k-1)$$ communication operations and both phases together need $$(4\log_2 p - 2 + 6(k-1))(T_\text{start} + \frac{n}{k} \cdot T_\text{byte})$$, which is favourable for large message sizes $$n$$.

The algorithm can further be optimised by making use of full-duplex or telephone model communication and overlapping the upward and the downward phase.