DIIS

DIIS (direct inversion in the iterative subspace or direct inversion of the iterative subspace), also known as Pulay mixing, is a technique for extrapolating the solution to a set of linear equations by directly minimizing an error residual (e.g. a Newton–Raphson step size) with respect to a linear combination of known sample vectors. DIIS was developed by Peter Pulay in the field of computational quantum chemistry with the intent to accelerate and stabilize the convergence of the Hartree–Fock self-consistent field method.

At a given iteration, the approach constructs a linear combination of approximate error vectors from previous iterations. The coefficients of the linear combination are determined so as to best approximate, in a least squares sense, the null vector. The newly determined coefficients are then used to extrapolate the function variable for the next iteration.

Details
At each iteration, an approximate error vector, $\mathbf e_{i}$, corresponding to the variable value $\mathbf p_{i}$, is determined. After sufficient iterations, a linear combination of the $m$ previous error vectors is constructed:


 * $$\mathbf e_{m+1}=\sum_{i=1}^m c_i\mathbf e_i.$$

The DIIS method seeks to minimize the norm of $\mathbf e_{m+1}$ under the constraint that the coefficients sum to one. The reason the coefficients must sum to one can be seen by writing the trial vector as the sum of the exact solution ($\mathbf p^\text{f}$) and an error vector. In the DIIS approximation, we get:

$$\begin{align} \mathbf p &= \sum_i c_i \left( \mathbf p^\text{f} + \mathbf e_i \right) \\ &= \mathbf p^\text{f} \sum_i c_i + \sum_i c_i \mathbf e_i. \end{align}$$

We minimize the second term; it is clear from the first term that the coefficients must sum to one if the extrapolated vector is to recover the exact solution. The minimization is carried out with the Lagrange multiplier technique. Introducing an undetermined multiplier $\lambda$, a Lagrangian is constructed as



$$\begin{align} L&=\left\|\mathbf e_{m+1}\right\|^2-2\lambda\left(\sum_i c_i-1\right)\\ &=\sum_{ij}c_jB_{ji}c_i-2\lambda\left(\sum_i c_i-1\right), \end{align}$$

where $B_{ij}=\langle\mathbf e_j,\mathbf e_i\rangle$.

Setting the derivatives of $L$ with respect to the coefficients and to the multiplier equal to zero leads to a system of $(m + 1)$ linear equations to be solved for the $m$ coefficients and the Lagrange multiplier.
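Explicitly, using the symmetry $B_{ki}=B_{ik}$ of the Gram matrix (for real error vectors), the stationarity conditions read

$$\frac{\partial L}{\partial c_k}=2\sum_{i=1}^m B_{ki}c_i-2\lambda=0,\qquad\frac{\partial L}{\partial\lambda}=-2\left(\sum_{i=1}^m c_i-1\right)=0.$$

Dividing the first set of equations by two and appending the constraint gives the system in matrix form.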


 * $$\begin{bmatrix}
B_{11} & B_{12} & B_{13} & \cdots & B_{1m} & -1 \\
B_{21} & B_{22} & B_{23} & \cdots & B_{2m} & -1 \\
B_{31} & B_{32} & B_{33} & \cdots & B_{3m} & -1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
B_{m1} & B_{m2} & B_{m3} & \cdots & B_{mm} & -1 \\
1 & 1 & 1 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_m \\ \lambda \end{bmatrix}=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$
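In practice, this bordered system can be assembled and solved directly. A minimal sketch with NumPy, using made-up error vectors purely for illustration:

```python
import numpy as np

# Made-up error vectors from m = 3 previous iterations (illustrative only).
errors = [np.array([0.9, -0.4]),
          np.array([0.3, 0.2]),
          np.array([-0.1, 0.05])]
m = len(errors)

# B_ij = <e_j, e_i>: Gram matrix of the stored error vectors.
B = np.array([[np.dot(ej, ei) for ei in errors] for ej in errors])

# Bordered (m + 1) x (m + 1) system: a column of -1 multiplying the
# Lagrange multiplier and a row of ones enforcing sum_i c_i = 1.
M = np.zeros((m + 1, m + 1))
M[:m, :m] = B
M[:m, m] = -1.0
M[m, :m] = 1.0
rhs = np.zeros(m + 1)
rhs[m] = 1.0

sol = np.linalg.solve(M, rhs)
c, lam = sol[:m], sol[m]

# Extrapolated error vector e_{m+1} = sum_i c_i e_i.
e_next = sum(ci * ei for ci, ei in zip(c, errors))
```

By construction the coefficients sum to one, and $\|\mathbf e_{m+1}\|$ is no larger than the norm of any single stored error vector, since each choice $c=(0,\dots,1,\dots,0)$ is itself feasible.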

Moving the minus sign onto $\lambda$ results in an equivalent symmetric problem.
 * $$\begin{bmatrix}
B_{11} & B_{12} & B_{13} & \cdots & B_{1m} & 1 \\
B_{21} & B_{22} & B_{23} & \cdots & B_{2m} & 1 \\
B_{31} & B_{32} & B_{33} & \cdots & B_{3m} & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
B_{m1} & B_{m2} & B_{m3} & \cdots & B_{mm} & 1 \\
1 & 1 & 1 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_m \\ -\lambda \end{bmatrix}=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$

The coefficients are then used to update the variable as


 * $$\mathbf p_{m+1}=\sum_{i = 1}^m c_i\mathbf p_i.$$
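Putting the pieces together, the whole procedure can be sketched as a short NumPy loop. The linear fixed-point problem $\mathbf x = A\mathbf x + \mathbf b$ below is made up for illustration; the loop stores trial vectors and residuals, solves the symmetric bordered system at each iteration, and extrapolates:

```python
import numpy as np

# Made-up linear fixed-point problem x = A x + b (illustrative only);
# DIIS applies to any iteration that yields an error vector per step.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
b = np.array([1.0, 2.0])

def g(x):
    return A @ x + b

x = np.zeros(2)
ps, es = [], []                 # stored trial vectors and error vectors
for _ in range(20):
    fx = g(x)
    e = fx - x                  # residual g(x) - x serves as the error vector
    if np.linalg.norm(e) < 1e-10:
        break
    ps.append(fx)
    es.append(e)
    m = len(es)                 # (a practical code would cap the history length)
    # Symmetric bordered DIIS system for the mixing coefficients.
    M = np.zeros((m + 1, m + 1))
    M[:m, :m] = [[np.dot(ej, ei) for ei in es] for ej in es]
    M[:m, m] = M[m, :m] = 1.0
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    c = np.linalg.solve(M, rhs)[:m]
    x = sum(ci * pi for ci, pi in zip(c, ps))   # extrapolated variable
```

For this two-dimensional linear problem the affine hull of the stored errors contains the zero vector after a few iterations, so the loop reaches the fixed point far sooner than plain iteration of $g$ would.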