Biconjugate gradient stabilized method

In numerical linear algebra, the biconjugate gradient stabilized method, often abbreviated as BiCGSTAB, is an iterative method developed by H. A. van der Vorst for the numerical solution of nonsymmetric linear systems. It is a variant of the biconjugate gradient method (BiCG) and has faster and smoother convergence than the original BiCG as well as other variants such as the conjugate gradient squared method (CGS). It is a Krylov subspace method. Unlike the original BiCG method, it doesn't require multiplication by the transpose of the system matrix.

Unpreconditioned BiCGSTAB
In the following sections, $( x, y ) = x ^{T}  y$ denotes the dot product of vectors. To solve a linear system $Ax =  b$, BiCGSTAB starts with an initial guess $x _{0}$ and proceeds as follows:

 * 1) $r_{0} = b − Ax_{0}$
 * 2) Choose an arbitrary vector $r̂_{0}$ such that $(r̂_{0}, r_{0}) ≠ 0$, e.g., $r̂_{0} = r_{0}$
 * 3) $ρ_{0} = (r̂_{0}, r_{0})$
 * 4) $p_{0} = r_{0}$
 * 5) For $i = 1, 2, 3, …$
    * 1) $v = Ap_{i−1}$
    * 2) $α = ρ_{i−1}/(r̂_{0}, v)$
    * 3) $h = x_{i−1} + αp_{i−1}$
    * 4) $s = r_{i−1} − αv$
    * 5) If $h$ is accurate enough, i.e., if $s$ is small enough, then set $x_{i} = h$ and quit
    * 6) $t = As$
    * 7) $ω = (t, s)/(t, t)$
    * 8) $x_{i} = h + ωs$
    * 9) $r_{i} = s − ωt$
    * 10) If $x_{i}$ is accurate enough, i.e., if $r_{i}$ is small enough, then quit
    * 11) $ρ_{i} = (r̂_{0}, r_{i})$
    * 12) $β = (ρ_{i}/ρ_{i−1})(α/ω)$
    * 13) $p_{i} = r_{i} + β(p_{i−1} − ωv)$

In some cases, choosing the vector $r̂_{0}$ randomly improves numerical stability.
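The iteration above can be sketched in NumPy as follows. This is a minimal dense-matrix sketch, not a production implementation: the function name, tolerance handling, and iteration cap are illustrative, and the simple choice $r̂_{0} = r_{0}$ is used.

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Unpreconditioned BiCGSTAB for Ax = b (dense NumPy sketch)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()                      # shadow residual, here r̂0 = r0
    rho = np.dot(r_hat, r)
    p = r.copy()
    for _ in range(max_iter):
        v = A @ p
        alpha = rho / np.dot(r_hat, v)
        h = x + alpha * p
        s = r - alpha * v
        if np.linalg.norm(s) < tol:       # h is already accurate enough
            return h
        t = A @ s
        omega = np.dot(t, s) / np.dot(t, t)
        x = h + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
        rho_new = np.dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
    return x

# Example on a diagonally dominant nonsymmetric system (arbitrary test data)
np.random.seed(0)
n = 50
A = np.random.rand(n, n) + n * np.eye(n)
b = np.random.rand(n)
x = bicgstab(A, b)
```

On well-conditioned systems like this one, the iteration converges in a handful of steps; in practice the recursively updated residual $r_{i}$ can drift from the true residual $b − Ax_{i}$, so robust codes occasionally recompute it.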

Preconditioned BiCGSTAB
Preconditioners are usually used to accelerate convergence of iterative methods. To solve a linear system $Ax = b$ with a preconditioner $K = K_{1}K_{2} ≈ A$, preconditioned BiCGSTAB starts with an initial guess $x_{0}$ and proceeds as follows:


 * 1) $r_{0} = b − Ax_{0}$
 * 2) Choose an arbitrary vector $r̂_{0}$ such that $(r̂_{0}, r_{0}) ≠ 0$, e.g., $r̂_{0} = r_{0}$
 * 3) $ρ_{0} = (r̂_{0}, r_{0})$
 * 4) $p_{0} = r_{0}$
 * 5) For $i = 1, 2, 3, …$
    * 1) $y = K_{2}^{−1}K_{1}^{−1}p_{i−1}$
    * 2) $v = Ay$
    * 3) $α = ρ_{i−1}/(r̂_{0}, v)$
    * 4) $h = x_{i−1} + αy$
    * 5) $s = r_{i−1} − αv$
    * 6) If $h$ is accurate enough, i.e., if $s$ is small enough, then set $x_{i} = h$ and quit
    * 7) $z = K_{2}^{−1}K_{1}^{−1}s$
    * 8) $t = Az$
    * 9) $ω = (K_{1}^{−1}t, K_{1}^{−1}s)/(K_{1}^{−1}t, K_{1}^{−1}t)$
    * 10) $x_{i} = h + ωz$
    * 11) $r_{i} = s − ωt$
    * 12) If $x_{i}$ is accurate enough, i.e., if $r_{i}$ is small enough, then quit
    * 13) $ρ_{i} = (r̂_{0}, r_{i})$
    * 14) $β = (ρ_{i}/ρ_{i−1})(α/ω)$
    * 15) $p_{i} = r_{i} + β(p_{i−1} − ωv)$

This formulation is equivalent to applying unpreconditioned BiCGSTAB to the explicitly preconditioned system

$Ãx̃ = b̃$

with $Ã = K_{1}^{−1}AK_{2}^{−1}$, $x̃ = K_{2}x$ and $b̃ = K_{1}^{−1}b$. In other words, both left- and right-preconditioning are possible with this formulation.
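The preconditioned variant can be sketched by passing routines that apply $K_{1}^{−1}$ and $K_{2}^{−1}$. This is a minimal NumPy sketch; the function name and the Jacobi split ($K_{1} = \mathrm{diag}(A)$, $K_{2} = I$) used in the example are illustrative choices, not part of the method itself.

```python
import numpy as np

def bicgstab_prec(A, b, K1_solve, K2_solve, x0=None, tol=1e-8, max_iter=1000):
    """Preconditioned BiCGSTAB sketch; K1_solve/K2_solve apply K1^-1 and K2^-1."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()                      # shadow residual, here r̂0 = r0
    rho = np.dot(r_hat, r)
    p = r.copy()
    for _ in range(max_iter):
        y = K2_solve(K1_solve(p))
        v = A @ y
        alpha = rho / np.dot(r_hat, v)
        h = x + alpha * y
        s = r - alpha * v
        if np.linalg.norm(s) < tol:
            return h
        z = K2_solve(K1_solve(s))
        t = A @ z
        k1t, k1s = K1_solve(t), K1_solve(s)
        omega = np.dot(k1t, k1s) / np.dot(k1t, k1t)
        x = h + omega * z
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
        rho_new = np.dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
    return x

# Example with a Jacobi split: K1 = diag(A), K2 = I (arbitrary test data)
np.random.seed(1)
n = 50
A = np.random.rand(n, n) + n * np.eye(n)
b = np.random.rand(n)
d = np.diag(A)
x = bicgstab_prec(A, b, lambda u: u / d, lambda u: u)
```

Passing solve routines rather than explicit matrices keeps the sketch usable when $K_{1}$ and $K_{2}$ are only available as factorizations or operators.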

BiCG in polynomial form
In BiCG, the search directions $p_{i}$ and $p̂_{i}$ and the residuals $r_{i}$ and $r̂_{i}$ are updated using the following recurrence relations:

$p_{i} = r_{i−1} + β_{i}p_{i−1}$,
$p̂_{i} = r̂_{i−1} + β_{i}p̂_{i−1}$,
$r_{i} = r_{i−1} − α_{i}Ap_{i}$,
$r̂_{i} = r̂_{i−1} − α_{i}A^{T}p̂_{i}$.

The constants $α_{i}$ and $β_{i}$ are chosen to be

$α_{i} = ρ_{i}/(p̂_{i}, Ap_{i})$,
$β_{i} = ρ_{i}/ρ_{i−1}$,

where $ρ_{i} = (r̂_{i−1}, r_{i−1})$ so that the residuals and the search directions satisfy biorthogonality and biconjugacy, respectively, i.e., for $i ≠ j$,

$(r̂_{i}, r_{j}) = 0$,
$(p̂_{i}, Ap_{j}) = 0$.

It is straightforward to show that

$r_{i} = P_{i}(A)r_{0}$,
$r̂_{i} = P_{i}(A^{T})r̂_{0}$,
$p_{i+1} = T_{i}(A)r_{0}$,
$p̂_{i+1} = T_{i}(A^{T})r̂_{0}$,

where $P_{i}(A)$ and $T_{i}(A)$ are $i$th-degree polynomials in $A$. These polynomials satisfy the following recurrence relations:

$P_{i}(A) = P_{i−1}(A) − α_{i}AT_{i−1}(A)$,
$T_{i}(A) = P_{i}(A) + β_{i+1}T_{i−1}(A)$.
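The biorthogonality and biconjugacy conditions can be checked numerically by running a few explicit BiCG iterations. The following NumPy sketch uses arbitrary test data; in floating point the conditions hold only up to rounding error, hence the tolerances.

```python
import numpy as np

# Run a few explicit BiCG iterations and check biorthogonality of the
# residuals and biconjugacy of the search directions numerically.
np.random.seed(2)
n = 8
A = np.random.rand(n, n) + n * np.eye(n)
b = np.random.rand(n)

r = b.copy()                      # x0 = 0, so r0 = b
r_hat = np.random.rand(n)         # arbitrary shadow residual r̂0
rs, r_hats = [r.copy()], [r_hat.copy()]
p = np.zeros(n)
p_hat = np.zeros(n)
ps, p_hats = [], []
rho_old = 1.0
for i in range(1, 5):
    rho = np.dot(r_hat, r)
    beta = 0.0 if i == 1 else rho / rho_old
    p = r + beta * p                          # p_i = r_{i-1} + beta_i p_{i-1}
    p_hat = r_hat + beta * p_hat              # p̂_i = r̂_{i-1} + beta_i p̂_{i-1}
    alpha = rho / np.dot(p_hat, A @ p)        # alpha_i = rho_i / (p̂_i, A p_i)
    r = r - alpha * (A @ p)                   # r_i = r_{i-1} - alpha_i A p_i
    r_hat = r_hat - alpha * (A.T @ p_hat)     # shadow recurrence uses A^T
    rho_old = rho
    rs.append(r.copy()); r_hats.append(r_hat.copy())
    ps.append(p.copy()); p_hats.append(p_hat.copy())

for i in range(len(rs)):
    for j in range(len(rs)):
        if i != j:  # (r̂_i, r_j) = 0 up to rounding
            assert abs(np.dot(r_hats[i], rs[j])) < 1e-8
for i in range(len(ps)):
    for j in range(len(ps)):
        if i != j:  # (p̂_i, A p_j) = 0 up to rounding
            assert abs(np.dot(p_hats[i], A @ ps[j])) < 1e-8
```

Note the transpose in the shadow recurrence: this is exactly the multiplication by $A^{T}$ that BiCGSTAB avoids.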

Derivation of BiCGSTAB from BiCG
It is unnecessary to explicitly keep track of the residuals and search directions of BiCG. In other words, the BiCG iterations can be performed implicitly. In BiCGSTAB, one wishes to have recurrence relations for

$r̃_{i} = Q_{i}(A)P_{i}(A)r_{0}$,

where $Q_{i}(A) = (I − ω_{1}A)(I − ω_{2}A)⋯(I − ω_{i}A)$ with suitable constants $ω_{j}$, in the hope that $Q_{i}(A)$ will enable faster and smoother convergence in $r̃_{i}$ than $r_{i} = P_{i}(A)r_{0}$.

It follows from the recurrence relations for $P_{i}(A)$ and $T_{i}(A)$ and the definition of $Q_{i}(A)$ that

$Q_{i}(A)P_{i}(A)r_{0} = (I − ω_{i}A)(Q_{i−1}(A)P_{i−1}(A)r_{0} − α_{i}AQ_{i−1}(A)T_{i−1}(A)r_{0})$,

which entails the necessity of a recurrence relation for $Q_{i}(A)T_{i}(A)r_{0}$. This can also be derived from the BiCG relations:

$Q_{i}(A)T_{i}(A)r_{0} = Q_{i}(A)P_{i}(A)r_{0} + β_{i+1}(I − ω_{i}A)Q_{i−1}(A)T_{i−1}(A)r_{0}$.

Similarly to defining $r̃_{i}$, BiCGSTAB defines

$p̃_{i+1} = Q_{i}(A)T_{i}(A)r_{0}$.

Written in vector form, the recurrence relations for $p̃_{i}$ and $r̃_{i}$ are

$p̃_{i} = r̃_{i−1} + β_{i}(I − ω_{i−1}A)p̃_{i−1}$,
$r̃_{i} = (I − ω_{i}A)(r̃_{i−1} − α_{i}Ap̃_{i})$.

To derive a recurrence relation for $x_{i}$, define

$s_{i} = r̃_{i−1} − α_{i}Ap̃_{i}$.

The recurrence relation for $r̃_{i}$ can then be written as

$r̃_{i} = r̃_{i−1} − α_{i}Ap̃_{i} − ω_{i}As_{i}$,

which corresponds to

$x_{i} = x_{i−1} + α_{i}p̃_{i} + ω_{i}s_{i}$.

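That the $x$-update is consistent with the residual recurrence, i.e., that $r̃_{i} = b − Ax_{i}$ is preserved from one step to the next, can be confirmed with a one-step numerical check on arbitrary test data (the direction and scalars below are arbitrary placeholders, not values an actual iteration would produce):

```python
import numpy as np

# One-step check: if r_prev = b - A x_prev, then after
#   s     = r_prev - alpha * A p̃,
#   r_new = r_prev - alpha * A p̃ - omega * A s,
#   x_new = x_prev + alpha * p̃ + omega * s,
# the identity r_new = b - A x_new still holds.
np.random.seed(3)
n = 6
A = np.random.rand(n, n)
b = np.random.rand(n)
x_prev = np.random.rand(n)
r_prev = b - A @ x_prev
p_tilde = np.random.rand(n)       # arbitrary search direction for the check
alpha, omega = 0.7, 0.3           # arbitrary scalars for the check

s = r_prev - alpha * (A @ p_tilde)
r_new = s - omega * (A @ s)
x_new = x_prev + alpha * p_tilde + omega * s

assert np.allclose(r_new, b - A @ x_new)
```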
Determination of BiCGSTAB constants
Now it remains to determine the BiCG constants $α_{i}$ and $β_{i}$ and choose a suitable $ω_{i}$.

In BiCG, $β_{i} = ρ_{i}/ρ_{i−1}$ with

$ρ_{i} = (r̂_{i−1}, r_{i−1}) = (P_{i−1}(A^{T})r̂_{0}, P_{i−1}(A)r_{0})$.

Since BiCGSTAB does not explicitly keep track of $r̂_{i}$ or $r_{i}$, $ρ_{i}$ is not immediately computable from this formula. However, it can be related to the scalar

$ρ̃_{i} = (Q_{i−1}(A^{T})r̂_{0}, P_{i−1}(A)r_{0}) = (r̂_{0}, Q_{i−1}(A)P_{i−1}(A)r_{0}) = (r̂_{0}, r̃_{i−1})$.

Due to biorthogonality, $r_{i−1} = P_{i−1}(A)r_{0}$ is orthogonal to $U_{i−2}(A^{T})r̂_{0}$, where $U_{i−2}(A^{T})$ is any polynomial of degree $i − 2$ in $A^{T}$. Hence, only the highest-order terms of $P_{i−1}(A^{T})$ and $Q_{i−1}(A^{T})$ matter in the dot products $(P_{i−1}(A^{T})r̂_{0}, P_{i−1}(A)r_{0})$ and $(Q_{i−1}(A^{T})r̂_{0}, P_{i−1}(A)r_{0})$. The leading coefficients of $P_{i−1}(A^{T})$ and $Q_{i−1}(A^{T})$ are $(−1)^{i−1}α_{1}α_{2}⋯α_{i−1}$ and $(−1)^{i−1}ω_{1}ω_{2}⋯ω_{i−1}$, respectively. It follows that

$ρ_{i} = (α_{1}α_{2}⋯α_{i−1}/ω_{1}ω_{2}⋯ω_{i−1})ρ̃_{i}$,

and thus

$β_{i} = ρ_{i}/ρ_{i−1} = (ρ̃_{i}/ρ̃_{i−1})(α_{i−1}/ω_{i−1})$.

A simple formula for $α_{i}$ can be similarly derived. In BiCG,

$α_{i} = ρ_{i}/(p̂_{i}, Ap_{i}) = (P_{i−1}(A^{T})r̂_{0}, P_{i−1}(A)r_{0})/(T_{i−1}(A^{T})r̂_{0}, AT_{i−1}(A)r_{0})$.

Similarly to the case above, only the highest-order terms of $P_{i−1}(A^{T})$ and $T_{i−1}(A^{T})$ matter in the dot products thanks to biorthogonality and biconjugacy. It happens that $P_{i−1}(A^{T})$ and $T_{i−1}(A^{T})$ have the same leading coefficient. Thus, they can be replaced simultaneously with $Q_{i−1}(A^{T})$ in the formula, which leads to

$α_{i} = (Q_{i−1}(A^{T})r̂_{0}, P_{i−1}(A)r_{0})/(Q_{i−1}(A^{T})r̂_{0}, AT_{i−1}(A)r_{0}) = ρ̃_{i}/(r̂_{0}, Ap̃_{i})$.

Finally, BiCGSTAB selects $ω_{i}$ to minimize $r̃_{i} = (I − ω_{i}A)s_{i}$ in 2-norm as a function of $ω_{i}$. This is achieved when

$((I − ω_{i}A)s_{i}, As_{i}) = 0$,

giving the optimal value

$ω_{i} = (As_{i}, s_{i})/(As_{i}, As_{i})$.

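A quick numerical check on arbitrary test data confirms that this choice of $ω_{i}$ minimizes $‖s_{i} − ωt‖_{2}$ with $t = As_{i}$ over the scalar $ω$ (comparing against a grid of nearby values):

```python
import numpy as np

# Check that omega = (A s, s) / (A s, A s) minimizes ||(I - omega A) s||_2.
np.random.seed(4)
n = 6
A = np.random.rand(n, n)
s = np.random.rand(n)
t = A @ s
omega_opt = np.dot(t, s) / np.dot(t, t)

best = np.linalg.norm(s - omega_opt * t)
for omega in np.linspace(omega_opt - 1.0, omega_opt + 1.0, 201):
    assert best <= np.linalg.norm(s - omega * t) + 1e-12
```

Since $‖s − ωt‖^{2}$ is a quadratic in $ω$, setting its derivative to zero gives exactly $ω = (t, s)/(t, t)$, which is the formula used in step 7 of the algorithm.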
Generalization
BiCGSTAB can be viewed as a combination of BiCG and GMRES where each BiCG step is followed by a GMRES(1) step (i.e., GMRES restarted at each step) to repair the irregular convergence behavior of CGS, as an improvement of which BiCGSTAB was developed. However, due to the use of degree-one minimum residual polynomials, such repair may not be effective if the matrix $A$ has large complex eigenpairs. In such cases, BiCGSTAB is likely to stagnate, as confirmed by numerical experiments.

One may expect that higher-degree minimum residual polynomials may better handle this situation. This gives rise to algorithms including BiCGSTAB2 and the more general BiCGSTAB($l$). In BiCGSTAB($l$), a GMRES($l$) step follows every $l$ BiCG steps. BiCGSTAB2 is equivalent to BiCGSTAB($l$) with $l = 2$.