SPIKE algorithm

The SPIKE algorithm is a hybrid parallel solver for banded linear systems developed by Eric Polizzi and Ahmed Sameh.

Overview
The SPIKE algorithm deals with a linear system $AX = F$, where $A$ is a banded $n\times n$ matrix of bandwidth much less than $n$, and $F$ is an $n\times s$ matrix containing $s$ right-hand sides. It is divided into a preprocessing stage and a postprocessing stage.

Preprocessing stage
In the preprocessing stage, the linear system $AX = F$ is partitioned into a block tridiagonal form

$$ \begin{bmatrix} \boldsymbol{A}_1 & \boldsymbol{B}_1\\ \boldsymbol{C}_2 & \boldsymbol{A}_2 & \boldsymbol{B}_2\\ & \ddots & \ddots & \ddots\\ & & \boldsymbol{C}_{p-1} & \boldsymbol{A}_{p-1} & \boldsymbol{B}_{p-1}\\ & & & \boldsymbol{C}_p & \boldsymbol{A}_p \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1\\ \boldsymbol{X}_2\\ \vdots\\ \boldsymbol{X}_{p-1}\\ \boldsymbol{X}_p \end{bmatrix} = \begin{bmatrix} \boldsymbol{F}_1\\ \boldsymbol{F}_2\\ \vdots\\ \boldsymbol{F}_{p-1}\\ \boldsymbol{F}_p \end{bmatrix}. $$

Assume, for the time being, that the diagonal blocks $A_j$ ($j = 1,\ldots,p$ with $p \geq 2$) are nonsingular. Define the block diagonal matrix

$$ \boldsymbol{D}=\operatorname{diag}(\boldsymbol{A}_1,\ldots,\boldsymbol{A}_p)\text{;} $$

then $D$ is also nonsingular. Left-multiplying both sides of the system by $D^{-1}$ gives

$$ \begin{bmatrix} \boldsymbol{I} & \boldsymbol{V}_1\\ \boldsymbol{W}_2 & \boldsymbol{I} & \boldsymbol{V}_2\\ & \ddots & \ddots & \ddots\\ & & \boldsymbol{W}_{p-1} & \boldsymbol{I} & \boldsymbol{V}_{p-1}\\ & & & \boldsymbol{W}_p & \boldsymbol{I} \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1\\ \boldsymbol{X}_2\\ \vdots\\ \boldsymbol{X}_{p-1}\\ \boldsymbol{X}_p \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1\\ \boldsymbol{G}_2\\ \vdots\\ \boldsymbol{G}_{p-1}\\ \boldsymbol{G}_p \end{bmatrix}, $$

which is to be solved in the postprocessing stage. Left-multiplication by $D^{-1}$ is equivalent to solving $p$ systems of the form

$$ \boldsymbol{A}_j \begin{bmatrix} \boldsymbol{V}_j & \boldsymbol{W}_j & \boldsymbol{G}_j \end{bmatrix} = \begin{bmatrix} \boldsymbol{B}_j & \boldsymbol{C}_j & \boldsymbol{F}_j \end{bmatrix} $$

(omitting $W_1$ and $C_1$ for $j=1$, and $V_p$ and $B_p$ for $j=p$), which can be carried out in parallel.
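The per-partition solves above can be sketched in a few lines of NumPy. This is a dense toy illustration (the function name and variable layout are made up here; a real implementation would use a banded factorization of $A_j$):

```python
import numpy as np

def partition_solve(A_j, B_j, C_j, F_j):
    # One multi-right-hand-side solve per partition yields the spikes
    # V_j, W_j and the block G_j at once; the p partitions are
    # independent, so these solves run in parallel.
    rhs = np.hstack([B_j, C_j, F_j])
    sol = np.linalg.solve(A_j, rhs)
    mb, mc = B_j.shape[1], C_j.shape[1]
    return sol[:, :mb], sol[:, mb:mb + mc], sol[:, mb + mc:]  # V_j, W_j, G_j
```

At the first and last partitions the $C_1$ and $B_p$ arguments are simply absent.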

Due to the banded nature of $A$, only a few leftmost columns of each $V_j$ and a few rightmost columns of each $W_j$ can be nonzero. These columns are called the spikes.

Postprocessing stage
Without loss of generality, assume that each spike contains exactly $m$ columns ($m$ is much less than $n$), padding the spike with columns of zeroes if necessary. Partition the spikes in all $V_j$ and $W_j$ into



$$ \begin{bmatrix} \boldsymbol{V}_j^{(t)}\\ \boldsymbol{V}_j'\\ \boldsymbol{V}_j^{(b)} \end{bmatrix} $$ and $$ \begin{bmatrix} \boldsymbol{W}_j^{(t)}\\ \boldsymbol{W}_j'\\ \boldsymbol{W}_j^{(b)}\\ \end{bmatrix} $$

where $V_j^{(t)}$, $V_j^{(b)}$, $W_j^{(t)}$ and $W_j^{(b)}$ are of dimensions $m\times m$. Partition similarly all $X_j$ and $G_j$ into



$$ \begin{bmatrix} \boldsymbol{X}_j^{(t)}\\ \boldsymbol{X}_j'\\ \boldsymbol{X}_j^{(b)} \end{bmatrix} $$ and $$ \begin{bmatrix} \boldsymbol{G}_j^{(t)}\\ \boldsymbol{G}_j'\\ \boldsymbol{G}_j^{(b)}\\ \end{bmatrix}. $$

Notice that the system produced by the preprocessing stage can be reduced to a block pentadiagonal system of much smaller size (recall that $m$ is much less than $n$)



$$ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{(t)}\\ & \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)} & \boldsymbol{0} \\ & & \ddots & \ddots & \ddots & \ddots & \ddots\\ & & & \boldsymbol{0} & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{p-1}^{(t)}\\ & & & & \boldsymbol{W}_{p-1}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)} & \boldsymbol{0}\\ & & & & & \boldsymbol{0} & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & & & & & & \boldsymbol{W}_p^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(t)}\\ \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)}\\ \boldsymbol{X}_2^{(b)}\\ \vdots\\ \boldsymbol{X}_{p-1}^{(t)}\\ \boldsymbol{X}_{p-1}^{(b)}\\ \boldsymbol{X}_p^{(t)}\\ \boldsymbol{X}_p^{(b)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(t)}\\ \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)}\\ \boldsymbol{G}_2^{(b)}\\ \vdots\\ \boldsymbol{G}_{p-1}^{(t)}\\ \boldsymbol{G}_{p-1}^{(b)}\\ \boldsymbol{G}_p^{(t)}\\ \boldsymbol{G}_p^{(b)} \end{bmatrix}\text{,} $$

which we call the reduced system and denote by $\tilde{S}\tilde{X} = \tilde{G}$.

Once all $X_j^{(t)}$ and $X_j^{(b)}$ are found, all $X_j'$ can be recovered with perfect parallelism via



$$ \begin{cases} \boldsymbol{X}_1'=\boldsymbol{G}_1'-\boldsymbol{V}_1'\boldsymbol{X}_2^{(t)}\text{,}\\ \boldsymbol{X}_j'=\boldsymbol{G}_j'-\boldsymbol{V}_j'\boldsymbol{X}_{j+1}^{(t)}-\boldsymbol{W}_j'\boldsymbol{X}_{j-1}^{(b)}\text{,} & j=2,\ldots,p-1\text{,}\\ \boldsymbol{X}_p'=\boldsymbol{G}_p'-\boldsymbol{W}_p'\boldsymbol{X}_{p-1}^{(b)}\text{.} \end{cases} $$

SPIKE as a polyalgorithmic banded linear system solver
Despite being logically divided into two stages, computationally, the SPIKE algorithm comprises three stages:
 * 1) factorizing the diagonal blocks,
 * 2) computing the spikes,
 * 3) solving the reduced system.
Each of these stages can be accomplished in several ways, allowing a multitude of variants. Two notable variants are the recursive SPIKE algorithm for non-diagonally-dominant cases and the truncated SPIKE algorithm for diagonally-dominant cases. Depending on the variant, a system can be solved either exactly or approximately. In the latter case, SPIKE is used as a preconditioner for iterative schemes like Krylov subspace methods and iterative refinement.
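The three stages can be illustrated end to end with a compact, dense NumPy sketch under simplifying assumptions ($n$ divisible by $p$, each block of order at least $2m$, nonsingular diagonal blocks). It is a toy illustration with made-up names, not the reference implementation, and it solves the reduced system directly rather than recursively:

```python
import numpy as np

def spike_solve(A, F, p, m):
    """Toy dense SPIKE sketch.  A: banded n-by-n with bandwidth m,
    F: n-by-s right-hand sides, p partitions with n divisible by p
    and n // p >= 2 * m."""
    n, s = F.shape
    nj = n // p
    V, W, G = [None] * p, [None] * p, [None] * p
    # Stages 1 + 2: factorize each diagonal block and compute the spikes
    # (fused here into one multi-RHS dense solve per partition).
    for j in range(p):
        lo, hi = j * nj, (j + 1) * nj
        rhs = [F[lo:hi]]
        if j < p - 1:
            rhs.append(A[lo:hi, hi:hi + m])     # nonzero columns of B_j
        if j > 0:
            rhs.append(A[lo:hi, lo - m:lo])     # nonzero columns of C_j
        sol = np.linalg.solve(A[lo:hi, lo:hi], np.hstack(rhs))
        G[j], k = sol[:, :s], s
        if j < p - 1:
            V[j], k = sol[:, k:k + m], k + m
        if j > 0:
            W[j] = sol[:, k:k + m]
    # Stage 3: assemble and solve the reduced system for the top and
    # bottom m rows of every X_j (unknown order: t_1, b_1, t_2, b_2, ...).
    R = np.eye(2 * p * m)
    g = np.zeros((2 * p * m, s))
    for j in range(p):
        rt, rb = 2 * j * m, (2 * j + 1) * m
        g[rt:rt + m], g[rb:rb + m] = G[j][:m], G[j][-m:]
        if j < p - 1:
            ct = 2 * (j + 1) * m               # columns of t_{j+1}
            R[rt:rt + m, ct:ct + m] += V[j][:m]
            R[rb:rb + m, ct:ct + m] += V[j][-m:]
        if j > 0:
            cb = (2 * j - 1) * m               # columns of b_{j-1}
            R[rt:rt + m, cb:cb + m] += W[j][:m]
            R[rb:rb + m, cb:cb + m] += W[j][-m:]
    y = np.linalg.solve(R, g)
    t = [y[2 * j * m:(2 * j + 1) * m] for j in range(p)]
    b = [y[(2 * j + 1) * m:(2 * j + 2) * m] for j in range(p)]
    # Recover every X_j independently from the reduced solution.
    return np.vstack([
        G[j]
        - (V[j] @ t[j + 1] if j < p - 1 else 0)
        - (W[j] @ b[j - 1] if j > 0 else 0)
        for j in range(p)
    ])
```

The partition loop and the final recovery are embarrassingly parallel; only the reduced system couples the partitions.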

Preprocessing stage
The first step of the preprocessing stage is to factorize the diagonal blocks $A_j$. For numerical stability, one can use LAPACK's routines to LU factorize them with partial pivoting. Alternatively, one can also factorize them without partial pivoting but with a "diagonal boosting" strategy. The latter method tackles the issue of singular diagonal blocks.

In concrete terms, the diagonal boosting strategy is as follows. Let $0_\epsilon$ denote a configurable "machine zero". In each step of LU factorization, we require that the pivot satisfy the condition

$$ |\mathrm{pivot}|>0_\epsilon\lVert\boldsymbol{A}_j\rVert_1\text{.} $$

If the pivot does not satisfy the condition, it is then boosted by



$$ \mathrm{pivot}= \begin{cases} \mathrm{pivot}+\epsilon\lVert\boldsymbol{A}_j\rVert_1 & \text{if }\mathrm{pivot}\geq 0\text{,}\\ \mathrm{pivot}-\epsilon\lVert\boldsymbol{A}_j\rVert_1 & \text{if }\mathrm{pivot}<0 \end{cases} $$

where $\epsilon$ is a positive parameter depending on the machine's unit roundoff, and the factorization continues with the boosted pivot. This can be achieved by modified versions of ScaLAPACK's routines. After the diagonal blocks are factorized, the spikes are computed and passed on to the postprocessing stage.
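A toy dense version of the boosted factorization can be sketched as follows (the function name `lu_diag_boost` is made up here, and one parameter `eps` plays the role of both the machine-zero threshold and the boost magnitude; real implementations modify banded LAPACK-style kernels):

```python
import numpy as np

def lu_diag_boost(A, eps=1e-12):
    # Unpivoted LU where any pivot failing |pivot| > eps * ||A||_1 is
    # boosted away from zero.  Returns L, U with L @ U equal to a
    # slightly perturbed A, suitable for correction by an outer
    # iterative scheme.
    U = A.astype(float).copy()
    n = U.shape[0]
    tol = eps * np.linalg.norm(U, 1)
    L = np.eye(n)
    for k in range(n):
        if abs(U[k, k]) <= tol:
            U[k, k] += tol if U[k, k] >= 0 else -tol   # boost the pivot
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return L, np.triu(U)
```

On a matrix with healthy pivots the boost never triggers and the result is the exact unpivoted LU; on a matrix with a zero pivot the factorization still completes with finite factors.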

The two-partition case
In the two-partition case, i.e., when $p = 2$, the reduced system $\tilde{S}\tilde{X} = \tilde{G}$ has the form



$$ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(t)}\\ \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)}\\ \boldsymbol{X}_2^{(b)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(t)}\\ \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)}\\ \boldsymbol{G}_2^{(b)} \end{bmatrix}\text{.} $$

An even smaller system can be extracted from the center:



$$ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\ \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)} \end{bmatrix}\text{,} $$

which can be solved using the block LU factorization



$$ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\ \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m \end{bmatrix} = \begin{bmatrix} \boldsymbol{I}_m\\ \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\ & \boldsymbol{I}_m-\boldsymbol{W}_2^{(t)}\boldsymbol{V}_1^{(b)} \end{bmatrix}\text{.} $$
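In code, this block LU solve amounts to one block forward and one block backward substitution (a NumPy sketch with illustrative names; the Schur complement $I_m - W_2^{(t)}V_1^{(b)}$ is assumed nonsingular):

```python
import numpy as np

def solve_center(V1b, W2t, G1b, G2t):
    # Solve [[I, V1b], [W2t, I]] @ [[X1b], [X2t]] = [[G1b], [G2t]]
    # via the block LU factorization shown above.
    m = V1b.shape[0]
    Y2 = G2t - W2t @ G1b                              # block forward sweep
    X2t = np.linalg.solve(np.eye(m) - W2t @ V1b, Y2)  # Schur complement solve
    X1b = G1b - V1b @ X2t                             # block backward sweep
    return X1b, X2t
```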

Once $X_1^{(b)}$ and $X_2^{(t)}$ are found, $X_1^{(t)}$ and $X_2^{(b)}$ can be computed via

$$ \begin{aligned} \boldsymbol{X}_1^{(t)}&=\boldsymbol{G}_1^{(t)}-\boldsymbol{V}_1^{(t)}\boldsymbol{X}_2^{(t)}\text{,}\\ \boldsymbol{X}_2^{(b)}&=\boldsymbol{G}_2^{(b)}-\boldsymbol{W}_2^{(b)}\boldsymbol{X}_1^{(b)}\text{.} \end{aligned} $$

The multiple-partition case
Assume that $p$ is a power of two, i.e., $p = 2^d$. Consider the block diagonal matrix

$$ \boldsymbol{\tilde{D}}_1=\operatorname{diag}\left(\boldsymbol{\tilde{D}}_1^{[1]},\ldots,\boldsymbol{\tilde{D}}_{p/2}^{[1]}\right)\text{,} $$

where



$$ \boldsymbol{\tilde{D}}_k^{[1]}= \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{2k-1}^{(t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{2k-1}^{(b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_{2k}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & \boldsymbol{W}_{2k}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m \end{bmatrix} $$

for $k = 1,\ldots,p/2$. Notice that $\tilde{D}_1$ essentially consists of diagonal blocks of order $4m$ extracted from $\tilde{S}$. Now we factorize $\tilde{S}$ as

$$ \boldsymbol{\tilde{S}}=\boldsymbol{\tilde{D}}_1\boldsymbol{\tilde{S}}_2\text{.} $$
The new matrix $\tilde{S}_2$ has the form



$$ \begin{bmatrix} \boldsymbol{I}_{3m} & \boldsymbol{0} & \boldsymbol{V}_1^{[2](t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{[2](b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_2^{[2](t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{[2](t)}\\ & \boldsymbol{W}_2^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_{3m} & \boldsymbol{V}_2^{[2](b)} & \boldsymbol{0} \\ & & \ddots & \ddots & \ddots & \ddots & \ddots\\ & & & \boldsymbol{0} & \boldsymbol{W}_{p/2-1}^{[2](t)} & \boldsymbol{I}_{3m} & \boldsymbol{0} & \boldsymbol{V}_{p/2-1}^{[2](t)}\\ & & & & \boldsymbol{W}_{p/2-1}^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p/2-1}^{[2](b)} & \boldsymbol{0}\\ & & & & & \boldsymbol{0} & \boldsymbol{W}_{p/2}^{[2](t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & & & & & & \boldsymbol{W}_{p/2}^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_{3m} \end{bmatrix}\text{.} $$

Its structure is very similar to that of $\tilde{S}$, only differing in the number of spikes and their height (their width stays the same at $m$). Thus, a similar factorization step can be performed on $\tilde{S}_2$ to produce

$$ \boldsymbol{\tilde{S}}_2=\boldsymbol{\tilde{D}}_2\boldsymbol{\tilde{S}}_3 $$

and

$$ \boldsymbol{\tilde{S}}=\boldsymbol{\tilde{D}}_1\boldsymbol{\tilde{D}}_2\boldsymbol{\tilde{S}}_3\text{.} $$
Such factorization steps can be performed recursively. After $d-1$ steps, we obtain the factorization

$$ \boldsymbol{\tilde{S}}=\boldsymbol{\tilde{D}}_1\cdots\boldsymbol{\tilde{D}}_{d-1}\boldsymbol{\tilde{S}}_d\text{,} $$

where $\tilde{S}_d$ has only two spikes. The reduced system will then be solved via

$$ \boldsymbol{\tilde{X}}=\boldsymbol{\tilde{S}}_d^{-1}\boldsymbol{\tilde{D}}_{d-1}^{-1}\cdots\boldsymbol{\tilde{D}}_1^{-1}\boldsymbol{\tilde{G}}\text{.} $$
The block LU factorization technique in the two-partition case can be used to handle the solving steps involving $\tilde{D}_1$, ..., $\tilde{D}_{d-1}$ and $\tilde{S}_d$, for they essentially solve multiple independent systems of generalized two-partition forms.

Generalization to cases where $p$ is not a power of two is almost trivial.

Truncated SPIKE
When $A$ is diagonally dominant, in the reduced system



$$ \begin{bmatrix} \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\ \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{(t)}\\ & \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)} & \boldsymbol{0} \\ & & \ddots & \ddots & \ddots & \ddots & \ddots\\ & & & \boldsymbol{0} & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{p-1}^{(t)}\\ & & & & \boldsymbol{W}_{p-1}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)} & \boldsymbol{0}\\ & & & & & \boldsymbol{0} & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\ & & & & & & \boldsymbol{W}_p^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(t)}\\ \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)}\\ \boldsymbol{X}_2^{(b)}\\ \vdots\\ \boldsymbol{X}_{p-1}^{(t)}\\ \boldsymbol{X}_{p-1}^{(b)}\\ \boldsymbol{X}_p^{(t)}\\ \boldsymbol{X}_p^{(b)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(t)}\\ \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)}\\ \boldsymbol{G}_2^{(b)}\\ \vdots\\ \boldsymbol{G}_{p-1}^{(t)}\\ \boldsymbol{G}_{p-1}^{(b)}\\ \boldsymbol{G}_p^{(t)}\\ \boldsymbol{G}_p^{(b)} \end{bmatrix}\text{,} $$

the blocks $V_j^{(t)}$ and $W_j^{(b)}$ are often negligible. With them omitted, the reduced system becomes block diagonal



$$ \begin{bmatrix} \boldsymbol{I}_m\\ & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\ & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m\\ & & & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)}\\ & & & \ddots & \ddots & \ddots\\ & & & & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m\\ & & & & & & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)}\\ & & & & & & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m\\ & & & & & & & & \boldsymbol{I}_m \end{bmatrix} \begin{bmatrix} \boldsymbol{X}_1^{(t)}\\ \boldsymbol{X}_1^{(b)}\\ \boldsymbol{X}_2^{(t)}\\ \boldsymbol{X}_2^{(b)}\\ \vdots\\ \boldsymbol{X}_{p-1}^{(t)}\\ \boldsymbol{X}_{p-1}^{(b)}\\ \boldsymbol{X}_p^{(t)}\\ \boldsymbol{X}_p^{(b)} \end{bmatrix} = \begin{bmatrix} \boldsymbol{G}_1^{(t)}\\ \boldsymbol{G}_1^{(b)}\\ \boldsymbol{G}_2^{(t)}\\ \boldsymbol{G}_2^{(b)}\\ \vdots\\ \boldsymbol{G}_{p-1}^{(t)}\\ \boldsymbol{G}_{p-1}^{(b)}\\ \boldsymbol{G}_p^{(t)}\\ \boldsymbol{G}_p^{(b)} \end{bmatrix} $$

and can be easily solved in parallel.
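Since the truncated reduced system decouples into independent $2m\times 2m$ blocks (plus the trivially decoupled first and last row blocks), solving it can be sketched as follows (illustrative NumPy; the 0-based lists stand in for the article's 1-based partitions, so `V_b[j]` holds $V_{j+1}^{(b)}$ and `W_t[j]` holds $W_{j+2}^{(t)}$):

```python
import numpy as np

def solve_truncated_reduced(V_b, W_t, G_t, G_b):
    # G_t[j] / G_b[j] are the top/bottom m rows of G for partition j+1.
    p, m = len(G_t), G_t[0].shape[0]
    X_t, X_b = [None] * p, [None] * p
    X_t[0] = G_t[0]        # first row block is decoupled
    X_b[-1] = G_b[-1]      # last row block is decoupled
    for j in range(p - 1):  # independent 2m-by-2m solves, parallel over j
        M = np.block([[np.eye(m), V_b[j]],
                      [W_t[j], np.eye(m)]])
        sol = np.linalg.solve(M, np.vstack([G_b[j], G_t[j + 1]]))
        X_b[j], X_t[j + 1] = sol[:m], sol[m:]
    return X_t, X_b
```

Each small block solve can itself use the two-partition block LU factorization described earlier.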

The truncated SPIKE algorithm can be wrapped inside some outer iterative scheme (e.g., BiCGSTAB or iterative refinement) to improve the accuracy of the solution.

SPIKE for tridiagonal systems
The first SPIKE partitioning and algorithm was presented in and was designed as the means to improve the stability properties of a parallel Givens rotations-based solver for tridiagonal systems. A version of the algorithm, termed g-Spike, that is based on serial Givens rotations applied independently on each block was designed for the NVIDIA GPU. A SPIKE-based algorithm for the GPU that is based on a special block diagonal pivoting strategy is described in.

SPIKE as a preconditioner
The SPIKE algorithm can also function as a preconditioner for iterative methods for solving linear systems. To solve a linear system $AX = F$ using a SPIKE-preconditioned iterative solver, one extracts a center band from $A$ to form a banded preconditioner $M$ and solves linear systems involving $M$ in each iteration with the SPIKE algorithm.

In order for the preconditioner to be effective, row and/or column permutation is usually necessary to move "heavy" elements of $A$ close to the diagonal so that they are covered by the preconditioner. This can be accomplished by computing the weighted spectral reordering of $A$.

The SPIKE algorithm can be generalized by not restricting the preconditioner to be strictly banded. In particular, the diagonal block in each partition can be a general matrix and thus handled by a direct general linear system solver rather than a banded solver. This enhances the preconditioner, and hence allows better chance of convergence and reduces the number of iterations.
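As a minimal illustration of the preconditioning idea, the sketch below wraps a banded preconditioner $M$ (the central band of a hypothetical test matrix $A$) in plain iterative refinement. The solves with $M$ use a dense solver for brevity where a real implementation would call the banded SPIKE solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 1
# Hypothetical test matrix: a strongly dominant tridiagonal band plus
# weak entries outside the band.
off = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) > k
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A += 0.002 * rng.standard_normal((n, n)) * off
# Banded preconditioner M: keep only the central band |i - j| <= k of A.
M = np.where(off, 0.0, A)
b = rng.standard_normal(n)
# Preconditioned iterative refinement: each solve with M stands in for
# a call to the banded SPIKE solver.
x = np.zeros(n)
for _ in range(50):
    x += np.linalg.solve(M, b - A @ x)
```

The iteration converges because the off-band part discarded from $M$ is small; a Krylov method such as BiCGSTAB could be used as the outer scheme instead.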

Implementations
Intel offers an implementation of the SPIKE algorithm under the name Intel Adaptive Spike-Based Solver. Tridiagonal solvers have also been developed for the NVIDIA GPU and the Xeon Phi co-processors. The method in  is the basis for a tridiagonal solver in the cuSPARSE library. The Givens rotations based solver was also implemented for the GPU and the Intel Xeon Phi.