Iterative refinement

Iterative refinement is an iterative method proposed by James H. Wilkinson to improve the accuracy of numerical solutions to systems of linear equations.

When solving a linear system $$A \mathbf{x} = \mathbf{b} \,,$$ due to the compounded accumulation of rounding errors, the computed solution $$\hat{\mathbf{x}}$$ may sometimes deviate from the exact solution $$\mathbf{x}_\star\,.$$ Starting with $$\mathbf{x}_1 = \hat{\mathbf{x}}\,,$$ iterative refinement computes a sequence $$\{\mathbf{x}_1,\, \mathbf{x}_2,\, \mathbf{x}_3,\dots\}$$ which converges to $$\mathbf{x}_\star\,,$$ when certain assumptions are met.

Description
For $$m = 1, 2, 3, \dots\,,$$ the $m$th iteration of iterative refinement consists of three steps:

1. Compute the residual error $$\mathbf{r}_m = \mathbf{b} - A \mathbf{x}_m\,.$$

2. Solve the system for the correction $$\mathbf{c}_m$$ that removes the residual error: $$A \mathbf{c}_m = \mathbf{r}_m\,.$$

3. Add the correction to get the revised next solution: $$\mathbf{x}_{m+1} = \mathbf{x}_m + \mathbf{c}_m\,.$$
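The three steps above can be sketched in Python with NumPy; the tolerance and iteration cap below are illustrative choices, not part of the method itself:

```python
import numpy as np

def iterative_refinement(A, b, tol=1e-12, max_iter=10):
    """Refine a computed solution of A x = b (tol and max_iter are illustrative)."""
    x = np.linalg.solve(A, b)          # initial (possibly inaccurate) solution
    for _ in range(max_iter):
        r = b - A @ x                  # step 1: residual
        c = np.linalg.solve(A, r)      # step 2: correction
        x = x + c                      # step 3: update
        # stop once the correction no longer meaningfully changes x
        if np.linalg.norm(c, np.inf) <= tol * np.linalg.norm(x, np.inf):
            break
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = iterative_refinement(A, b)
```

In exact arithmetic a single correction would suffice; in floating point each pass shrinks the error until it reaches the level set by the working precision.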

The crucial reasoning behind the refinement algorithm is that although the solution for $$\mathbf{c}_m$$ in step (ii) may indeed be troubled by similar errors as the first solution $$\hat{\mathbf{x}}\,,$$ the calculation of the residual $$\mathbf{r}_m$$ in step (i) is, in comparison, numerically nearly exact: you may not know the right answer very well, but you know quite accurately just how far the solution in hand is from producing the correct outcome $$\mathbf{b}\,.$$ If the residual is small in some sense, then the correction must also be small, and should at the very least steer the current estimate $$\mathbf{x}_m$$ closer to the desired solution $$\mathbf{x}_\star\,.$$

The iterations will stop on their own when the residual $$\mathbf{r}_m$$ is zero, or close enough to zero that the corresponding correction $$\mathbf{c}_m$$ is too small to change the solution $$\mathbf{x}_m$$ which produced it; alternatively, the algorithm stops when $$\mathbf{r}_m$$ is too small to convince the linear algebraist monitoring the progress that it is worth continuing with any further refinements.

Note that the matrix equation solved in step (ii) uses the same matrix $$A$$ for each iteration. If the matrix equation is solved using a direct method, such as Cholesky or LU decomposition, the numerically expensive factorization of $$A$$ is done once and is reused for the relatively inexpensive forward and back substitution to solve for $$\mathbf{c}_m$$ at each iteration.
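This reuse can be sketched with SciPy's `lu_factor`/`lu_solve` (the matrix size and iteration count here are arbitrary choices for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

lu_piv = lu_factor(A)            # expensive O(n^3) factorization, done once
x = lu_solve(lu_piv, b)          # initial solution via forward/back substitution
for _ in range(3):
    r = b - A @ x                # residual
    x = x + lu_solve(lu_piv, r)  # cheap O(n^2) solve reusing the stored factors

residual = np.linalg.norm(b - A @ x, np.inf)
```

Each refinement pass therefore costs only two triangular solves and a matrix-vector product, a small fraction of the original factorization.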

Error analysis
As a rule of thumb, iterative refinement for Gaussian elimination produces a solution correct to working precision if double the working precision is used in the computation of $$\mathbf{r}_m\,,$$ e.g. by using quad or double extended precision IEEE 754 floating point, and if $$A$$ is not too ill-conditioned. The iteration count and the rate of convergence are determined by the condition number of $$A\,.$$
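This rule of thumb can be illustrated with single precision as the working precision and double precision standing in for the extended precision of the residual; the random, reasonably well-conditioned matrix below is an assumption of the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# "Working precision" solve: everything in float32.
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
err_single = np.linalg.norm(x - x_true, np.inf) / np.linalg.norm(x_true, np.inf)

# Refinement: residual computed in float64, correction solved in float32.
for _ in range(5):
    r = b - A @ x                                   # residual at higher precision
    c = np.linalg.solve(A32, r.astype(np.float32))  # correction at working precision
    x = x + c.astype(np.float64)

err_refined = np.linalg.norm(x - x_true, np.inf) / np.linalg.norm(x_true, np.inf)
```

Even though every linear solve is done in single precision, the refined solution is far more accurate than the initial one, because the residual carries accurate information about the remaining error.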

More formally, assuming that each step (ii) can be solved reasonably accurately, i.e., in mathematical terms, for every $m$, we have $$A \left( I + F_m \right) \mathbf{c}_m = \mathbf{r}_m$$

where $$\lVert F_m \rVert_\infty < 1\,,$$ the relative error in the $m$-th iterate of iterative refinement satisfies $$\frac{ \lVert \mathbf{x}_m - \mathbf{x}_\star \rVert_\infty }{ \lVert \mathbf{x}_\star \rVert_\infty} \leq \bigl( \sigma\,\kappa(A)\,\varepsilon_1\bigr)^m + \mu_1\, \varepsilon_1 + n\, \kappa(A)\, \mu_2\, \varepsilon_2$$

where
 * $\lVert \cdot \rVert_\infty$ denotes the $\infty$-norm of a vector,
 * $\kappa(A) = \lVert A \rVert_\infty \lVert A^{-1} \rVert_\infty$ is the $\infty$-condition number of $A$,
 * $n$ is the order of $A$,
 * $\varepsilon_1$ and $\varepsilon_2$ are unit round-offs of floating-point arithmetic operations,
 * $\sigma$, $\mu_1$ and $\mu_2$ are constants that depend on $A$, $\varepsilon_1$ and $\varepsilon_2$,

provided that $A$ is "not too badly conditioned", which in this context means $$0 < \sigma\,\kappa(A)\,\varepsilon_1 \ll 1$$

and implies that $\mu_1$ and $\mu_2$ are of order unity.

The distinction of $\varepsilon_1$ and $\varepsilon_2$ is intended to allow mixed-precision evaluation of $\mathbf{r}_m$ where intermediate results are computed with unit round-off $\varepsilon_2$ before the final result is rounded (or truncated) with unit round-off $\varepsilon_1$. All other computations are assumed to be carried out with unit round-off $\varepsilon_1$.