Partial inverse of a matrix

In linear algebra and statistics, the partial inverse of a matrix is an operation related to Gaussian elimination which has applications in numerical analysis and statistics. It is also known by various authors as the principal pivot transform, or as the sweep, gyration, or exchange operator.

Given an $$ n \times n $$ matrix $$A$$ over a vector space $$V$$, partitioned into blocks:


 * $$ A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} $$

If $$A_{11}$$ is invertible, then the partial inverse of $$A$$ around the pivot block $$A_{11}$$ is created by inverting $$A_{11}$$, putting the Schur complement $$A / A_{11}$$ in place of $$A_{22}$$, and adjusting the off-diagonal elements accordingly:


 * $$ \operatorname{inv}_1 A = \begin{pmatrix} (A_{11})^{-1} & - (A_{11})^{-1} A_{12} \\ A_{21} (A_{11})^{-1} & A_{22} - A_{21}  (A_{11})^{-1}A_{12} \end{pmatrix} $$
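As a concrete illustration, the block formula can be sketched in NumPy; `partial_inverse` is a hypothetical helper name, and the pivot block is taken to be the leading $$r \times r$$ corner:

```python
import numpy as np

def partial_inverse(A, r):
    """Partial inverse of A around its leading r x r pivot block A11.

    Places A11^{-1} in the top-left corner, the Schur complement
    A22 - A21 A11^{-1} A12 in the bottom-right, and adjusts the
    off-diagonal blocks with the signs shown in the formula above.
    """
    A11, A12 = A[:r, :r], A[:r, r:]
    A21, A22 = A[r:, :r], A[r:, r:]
    A11_inv = np.linalg.inv(A11)
    top = np.hstack([A11_inv, -A11_inv @ A12])
    bottom = np.hstack([A21 @ A11_inv, A22 - A21 @ A11_inv @ A12])
    return np.vstack([top, bottom])
```

Applying the function twice with the same `r` returns the original matrix, and taking `r = n` reduces to the ordinary matrix inverse.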

Conceptually, partial inversion corresponds to a rotation of the graph of the matrix, $$ (x, Ax) \in V \times V$$, such that, for conformally partitioned column vectors $$(x_1, x_2)^T$$ and $$(y_1, y_2)^T$$:


 * $$ A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \Leftrightarrow \operatorname{inv}_1(A) \begin{pmatrix} y_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ y_2 \end{pmatrix} $$
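This swap identity can be checked numerically. The following sketch pivots a concrete 3×3 matrix around its leading 1×1 block; the partition size and matrix entries are purely illustrative:

```python
import numpy as np

# Build inv_1(A) directly from the block formula, then verify that
# feeding it (y1, x2) returns (x1, y2).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
r = 1
A11, A12 = A[:r, :r], A[:r, r:]
A21, A22 = A[r:, :r], A[r:, r:]
G = np.linalg.inv(A11)
inv1_A = np.block([[G, -G @ A12],
                   [A21 @ G, A22 - A21 @ G @ A12]])

x = np.array([1.0, -2.0, 0.5])
y = A @ x
# Swap the first r entries: feed (y1, x2), expect (x1, y2) back.
lhs = inv1_A @ np.concatenate([y[:r], x[r:]])
rhs = np.concatenate([x[:r], y[r:]])
print(np.allclose(lhs, rhs))  # True
```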

Defined this way, the operator is its own inverse: $$ \operatorname{inv}_k(\operatorname{inv}_k(A)) = A $$, and if the pivot block is chosen to be the entire matrix, the transform simply gives the matrix inverse $$A^{-1}$$. Note that some authors define a related operation (under one of the other names) which is not an involution; in particular, one common definition instead has $$(\operatorname{inv}_k)^2 (A) = -A$$.

The transform is often presented as a pivot around a single non-zero diagonal element $$a_{kk}$$, in which case one has


 * $$ \left[ \operatorname{inv}_k (A) \right]_{ij} = \begin{cases} \frac{1}{a_{kk}} & i = j = k \\ -\frac{a_{kj}}{a_{kk}} & i = k, j \neq k \\ \frac{a_{ik}}{a_{kk}} & i \neq k, j = k \\ a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}} & i \neq k, j \neq k \end{cases} $$
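A direct transcription of the case-by-case formula might look as follows; the name `sweep` is illustrative, echoing the statistical usage, and is not a standard library function:

```python
import numpy as np

def sweep(A, k):
    """Pivot A around the single diagonal entry a_kk.

    A straightforward, case-by-case transcription of the scalar
    pivot formula above (not optimized; loops for clarity).
    """
    A = np.asarray(A, dtype=float)
    B = np.empty_like(A)
    akk = A[k, k]
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if i == k and j == k:
                B[i, j] = 1.0 / akk
            elif i == k:
                B[i, j] = -A[k, j] / akk
            elif j == k:
                B[i, j] = A[i, k] / akk
            else:
                B[i, j] = A[i, j] - A[i, k] * A[k, j] / akk
    return B
```

Sweeping the same position twice restores the original matrix, and sweeping every diagonal position in turn produces the full inverse.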

Partial inverses obey a number of useful properties:
 * inversions around different blocks commute, so larger pivots may be built up from sequences of smaller ones
 * partial inversion preserves the space of symmetric matrices
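The commuting property can be observed numerically. The sketch below uses a vectorized single-entry pivot (the name `pivot` is illustrative) and checks that pivots around different diagonal positions can be applied in either order:

```python
import numpy as np

def pivot(A, k):
    """Single-entry partial inverse around a_kk (vectorized sketch)."""
    akk = A[k, k]
    B = A - np.outer(A[:, k], A[k, :]) / akk  # entries with i, j != k
    B[:, k] = A[:, k] / akk                   # pivot column
    B[k, :] = -A[k, :] / akk                  # pivot row
    B[k, k] = 1.0 / akk                       # pivot entry
    return B

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

one_way = pivot(pivot(A, 0), 1)
other_way = pivot(pivot(A, 1), 0)
print(np.allclose(one_way, other_way))  # True: the pivots commute
```

Because the pivots commute and compose, a block pivot can be carried out as a sequence of scalar pivots, provided each intermediate pivot entry is non-zero.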

The partial inverse is useful in numerical analysis because the flexibility in the choice of pivots allows non-invertible elements to be avoided, and because the rotation (of the graph of the pivoted matrix) has better numerical stability than the shearing operation implicitly performed by Gaussian elimination. It is useful in statistics because the resulting matrix decomposes into blocks with useful meanings in the context of linear regression.
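For instance, sweeping the leading $$p \times p$$ block of the augmented cross-product matrix $$\begin{pmatrix} X^T X & X^T y \\ y^T X & y^T y \end{pmatrix}$$ exposes the usual least-squares quantities. The sketch below uses random data purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))   # design matrix
y = rng.standard_normal(20)        # response vector
p = 3

# Augmented cross-product matrix [[X'X, X'y], [y'X, y'y]].
M = np.block([[X.T @ X, (X.T @ y)[:, None]],
              [(y @ X)[None, :], np.array([[y @ y]])]])

# Partial inverse around the leading p x p block, via the block formula.
G = np.linalg.inv(M[:p, :p])
swept = np.block([[G, -G @ M[:p, p:]],
                  [M[p:, :p] @ G, M[p:, p:] - M[p:, :p] @ G @ M[:p, p:]]])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
print(np.allclose(swept[p, :p], beta))          # True: OLS coefficients
print(np.allclose(swept[p, p], resid @ resid))  # True: residual sum of squares
```

Here the top-left block of the swept matrix is $$(X^T X)^{-1}$$ (proportional to the covariance of the coefficient estimates), the bottom-left block contains the regression coefficients, and the bottom-right corner is the residual sum of squares.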