Compound matrix

In linear algebra, a branch of mathematics, a (multiplicative) compound matrix is a matrix whose entries are all minors, of a given size, of another matrix. Compound matrices are closely related to exterior algebras, and their computation appears in a wide array of problems, such as in the analysis of nonlinear time-varying dynamical systems and generalizations of positive systems, cooperative systems and contracting systems.

Definition
Let $A$ be an $m \times n$ matrix with real or complex entries. If $I$ is a subset of size $r$ of $\{1, \ldots, m\}$ and $J$ is a subset of size $s$ of $\{1, \ldots, n\}$, then the $(I, J)$-submatrix of $A$, written $A_{I, J}$, is the submatrix formed from $A$ by retaining only those rows indexed by $I$ and those columns indexed by $J$. If $r = s$, then $\det A_{I, J}$ is the $(I, J)$-minor of $A$.

The $r$th compound matrix of $A$, denoted $C_r(A)$, is defined as follows. If $r > \min(m, n)$, then $C_r(A)$ is the unique $0 \times 0$ matrix. Otherwise, $C_r(A)$ has size $\binom{m}{r} \!\times\! \binom{n}{r}$. Its rows and columns are indexed by $r$-element subsets of $\{1, \ldots, m\}$ and $\{1, \ldots, n\}$, respectively, in their lexicographic order. The entry corresponding to subsets $I$ and $J$ is the minor $\det A_{I, J}$.

In some applications of compound matrices, the precise ordering of the rows and columns is unimportant. For this reason, some authors do not specify how the rows and columns are to be ordered.

For example, consider the matrix
 * $$A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \end{pmatrix}.$$

The rows are indexed by $\{1, 2, 3\}$ and the columns by $\{1, 2, 3, 4\}$. Therefore, the rows of $C_2(A)$ are indexed by the sets
 * $$\{1, 2\} < \{1, 3\} < \{2, 3\}$$

and the columns are indexed by
 * $$\{1, 2\} < \{1, 3\} < \{1, 4\} < \{2, 3\} < \{2, 4\} < \{3, 4\}.$$

Using absolute value bars to denote determinants, the second compound matrix is
 * $$\begin{align} C_2(A) &= \begin{pmatrix} \left|\begin{smallmatrix} 1 & 2 \\ 5 & 6 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 1 & 3 \\ 5 & 7 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 1 & 4 \\ 5 & 8 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 2 & 3 \\ 6 & 7 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 2 & 4 \\ 6 & 8 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 3 & 4 \\ 7 & 8 \end{smallmatrix}\right| \\ \left|\begin{smallmatrix} 1 & 2 \\ 9 & 10 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 1 & 3 \\ 9 & 11 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 1 & 4 \\ 9 & 12 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 2 & 3 \\ 10 & 11 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 2 & 4 \\ 10 & 12 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 3 & 4 \\ 11 & 12 \end{smallmatrix}\right| \\ \left|\begin{smallmatrix} 5 & 6 \\ 9 & 10 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 5 & 7 \\ 9 & 11 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 5 & 8 \\ 9 & 12 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 6 & 7 \\ 10 & 11 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 6 & 8 \\ 10 & 12 \end{smallmatrix}\right| & \left|\begin{smallmatrix} 7 & 8 \\ 11 & 12 \end{smallmatrix}\right| \end{pmatrix} \\ &= \begin{pmatrix} -4 & -8 & -12 & -4 & -8 & -4 \\ -8 & -16 & -24 & -8 & -16 & -8 \\ -4 & -8 & -12 & -4 & -8 & -4 \end{pmatrix}. \end{align}$$
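
The definition can be checked numerically. Below is a minimal Python sketch (the helper name `compound` is our own, not a standard library function) that builds $C_r(A)$ directly from the definition and reproduces the matrix above:

```python
from itertools import combinations

import numpy as np

def compound(A, r):
    """r-th compound matrix: all r-by-r minors of A, index sets in lexicographic order."""
    m, n = A.shape
    rows = list(combinations(range(m), r))  # r-element row-index subsets
    cols = list(combinations(range(n), r))  # r-element column-index subsets
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]], dtype=float)

# 3 x 6 matrix: rows indexed by {1,2} < {1,3} < {2,3},
# columns by the six 2-element subsets of {1,2,3,4}
C2 = compound(A, 2)
```

Note that `itertools.combinations` already emits subsets of a sorted range in lexicographic order, matching the ordering convention used above.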

Properties
Let $c$ be a scalar, $A$ an $m \times n$ matrix, and $B$ an $n \times p$ matrix. For a positive integer $k$, let $I_k$ denote the $k \times k$ identity matrix. The transpose of a matrix $M$ will be written $M^T$, and the conjugate transpose $M^*$. Then:


 * $C_0(A) = I_1$, a $1 \times 1$ identity matrix.
 * $C_1(A) = A$.
 * $C_r(cA) = c^r C_r(A)$.
 * If $\operatorname{rk} A = r$, then $\operatorname{rk} C_r(A) = 1$.
 * If $1 \le r \le n$, then $$C_r(I_n) = I_{\binom{n}{r}}$$.
 * If $1 \le r \le \min(m, n)$, then $C_r(A^T) = C_r(A)^T$.
 * If $1 \le r \le \min(m, n)$, then $C_r(A^*) = C_r(A)^*$.
 * If $1 \le r \le \min(m, p)$, then $C_r(AB) = C_r(A)\,C_r(B)$, which is closely related to the Cauchy–Binet formula.
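
The multiplicativity and transpose properties lend themselves to a quick numerical sanity check on random matrices; a short Python sketch (the `compound` helper is our own, built directly from the definition):

```python
from itertools import combinations

import numpy as np

def compound(A, r):
    """r-th compound matrix: all r-by-r minors, index sets in lexicographic order."""
    rows = list(combinations(range(A.shape[0]), r))
    cols = list(combinations(range(A.shape[1]), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))

# C_r(AB) = C_r(A) C_r(B)  -- the Cauchy-Binet-type property
ok_product = np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2))

# C_r(A^T) = C_r(A)^T
ok_transpose = np.allclose(compound(A.T, 2), compound(A, 2).T)
```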

Assume in addition that $A$ is a square matrix of size $n$. Then:

 * $C_n(A) = \det A$.
 * If $A$ has one of the following properties, then so does $C_r(A)$:
   * Upper triangular,
   * Lower triangular,
   * Diagonal,
   * Orthogonal,
   * Unitary,
   * Symmetric,
   * Hermitian,
   * Skew-symmetric (when $r$ is odd),
   * Skew-Hermitian (when $r$ is odd),
   * Positive definite,
   * Positive semi-definite,
   * Normal.
 * If $A$ is invertible, then so is $C_r(A)$, and $C_r(A^{-1}) = C_r(A)^{-1}$.
 * (Sylvester–Franke theorem) If $1&thinsp;≤ r ≤ n$, then $$\det C_r(A) = (\det A)^{\binom{n-1}{r-1}}$$.
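
The Sylvester–Franke theorem can likewise be verified numerically; a sketch in Python (the `compound` helper is our own, built from the definition):

```python
from itertools import combinations
from math import comb

import numpy as np

def compound(A, r):
    """r-th compound matrix: all r-by-r minors, index sets in lexicographic order."""
    rows = list(combinations(range(A.shape[0]), r))
    cols = list(combinations(range(A.shape[1]), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

rng = np.random.default_rng(1)
n, r = 5, 2
A = rng.standard_normal((n, n))

# det C_r(A) = (det A)^C(n-1, r-1)
lhs = np.linalg.det(compound(A, r))
rhs = np.linalg.det(A) ** comb(n - 1, r - 1)
```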

Relation to exterior powers
Let $\mathbf{e}_1, \ldots, \mathbf{e}_n$ denote the standard coordinate basis of $\mathbf{R}^n$. The $r$th exterior power of $\mathbf{R}^n$ is the vector space
 * $$\wedge^r \mathbf{R}^n$$

whose basis consists of the formal symbols
 * $$\mathbf{e}_{i_1} \wedge \dots \wedge \mathbf{e}_{i_r},$$

where
 * $$i_1 < \dots < i_r.$$

Suppose that $A$ is an $m \times n$ matrix. Then $A$ corresponds to a linear transformation
 * $$A \colon \mathbf{R}^n \to \mathbf{R}^m.$$

Taking the $r$th exterior power of this linear transformation determines a linear transformation
 * $$\wedge^r A \colon \wedge^r \mathbf{R}^n \to \wedge^r \mathbf{R}^m.$$

The matrix corresponding to this linear transformation (with respect to the above bases of the exterior powers) is $C_r(A)$. Taking exterior powers is a functor, which means that
 * $$\wedge^r (AB) = (\wedge^r A)(\wedge^r B).$$

This corresponds to the formula $C_r(AB) = C_r(A)\,C_r(B)$. It is closely related to, and is a strengthening of, the Cauchy–Binet formula.

Relation to adjugate matrices
Let $A$ be an $n \times n$ matrix. Recall that its $r$th higher adjugate matrix $\operatorname{adj}_r(A)$ is the $\binom{n}{r} \!\times\! \binom{n}{r}$ matrix whose $(I, J)$ entry is
 * $$(-1)^{\sigma(I) + \sigma(J)} \det A_{J^c, I^c},$$

where, for any set $K$ of integers, $\sigma(K)$ is the sum of the elements of $K$. The adjugate of $A$ is its 1st higher adjugate and is denoted $\operatorname{adj}(A)$. The generalized Laplace expansion formula implies
 * $$C_r(A)\operatorname{adj}_r(A) = \operatorname{adj}_r(A)C_r(A) = (\det A)I_{\binom{n}{r}}.$$
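
This identity can be checked directly from the definitions; a Python sketch (the helpers `compound` and `higher_adjugate` are our own; 0-based index sums are used below, which leaves the sign $(-1)^{\sigma(I) + \sigma(J)}$ unchanged, since the shift of $r$ per subset cancels in the parity of the sum):

```python
from itertools import combinations

import numpy as np

def compound(A, r):
    """r-th compound matrix: all r-by-r minors, lexicographic index order."""
    rows = list(combinations(range(A.shape[0]), r))
    cols = list(combinations(range(A.shape[1]), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

def higher_adjugate(A, r):
    """(I, J) entry: (-1)^(sigma(I)+sigma(J)) * det A_{J^c, I^c}."""
    n = A.shape[0]
    subsets = list(combinations(range(n), r))
    out = np.empty((len(subsets), len(subsets)))
    for a, I in enumerate(subsets):
        for b, J in enumerate(subsets):
            Ic = [k for k in range(n) if k not in I]  # complement of I
            Jc = [k for k in range(n) if k not in J]  # complement of J
            out[a, b] = (-1) ** (sum(I) + sum(J)) * np.linalg.det(A[np.ix_(Jc, Ic)])
    return out

rng = np.random.default_rng(2)
n, r = 4, 2
A = rng.standard_normal((n, n))

# C_r(A) adj_r(A) = (det A) I, here of size C(4,2) = 6
P = compound(A, r) @ higher_adjugate(A, r)
```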

If $A$ is invertible, then
 * $$\operatorname{adj}_r(A^{-1}) = (\det A)^{-1}C_r(A).$$

A concrete consequence of this is Jacobi's formula for the minors of an inverse matrix:
 * $$\det(A^{-1})_{J^c, I^c} = (-1)^{\sigma(I) + \sigma(J)} \frac{\det A_{I,J}}{\det A}.$$

Adjugates can also be expressed in terms of compounds. Let $S$ denote the sign matrix:
 * $$S = \operatorname{diag}(1, -1, 1, -1, \ldots, (-1)^{n-1}),$$

and let $J$ denote the exchange matrix:
 * $$J = \begin{pmatrix} & & 1 \\ & \cdots & \\ 1 & & \end{pmatrix}.$$

Then Jacobi's theorem states that the $r$th higher adjugate matrix is:
 * $$\operatorname{adj}_r(A) = JC_{n-r}(SAS)^TJ.$$

It follows immediately from Jacobi's theorem that
 * $$C_r(A)\, J(C_{n-r}(SAS))^TJ = (\det A)I_{\binom{n}{r}}.$$

Taking adjugates and compounds does not commute. However, compounds of adjugates can be expressed using adjugates of compounds, and vice versa. From the identities
 * $$C_r(C_s(A))C_r(\operatorname{adj}_s(A)) = (\det A)^rI,$$
 * $$C_r(C_s(A))\operatorname{adj}_r(C_s(A)) = (\det C_s(A))I,$$

and the Sylvester–Franke theorem, we deduce
 * $$\operatorname{adj}_r(C_s(A)) = (\det A)^{\binom{n-1}{s-1}-r} C_r(\operatorname{adj}_s(A)).$$

The same technique leads to an additional identity,
 * $$\operatorname{adj}(C_r(A)) = (\det A)^{\binom{n-1}{r-1}-r} C_r(\operatorname{adj}(A)).$$

Compound and adjugate matrices appear when computing determinants of linear combinations of matrices. It is elementary to check that if $A$ and $B$ are $n \times n$ matrices then
 * $$\det(sA + tB) = C_n\!\left(\begin{bmatrix} sA & I_n \end{bmatrix}\right)C_n\!\left(\begin{bmatrix} I_n \\ tB \end{bmatrix}\right).$$

It is also true that:
 * $$\det(sA + tB) = \sum_{r=0}^n s^r t^{n-r} \operatorname{tr}(\operatorname{adj}_r(A)C_r(B)).$$

This has the immediate consequence
 * $$\det(I + A) = \sum_{r=0}^n \operatorname{tr} \operatorname{adj}_r(A) = \sum_{r=0}^n \operatorname{tr} C_r(A).$$
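
The last identity is easy to test numerically, since $\operatorname{tr} C_r(A)$ is the sum of the principal $r \times r$ minors of $A$. A Python sketch, again with our own `compound` helper (for $r = 0$ the empty minor is taken to be 1, which is what `np.linalg.det` returns for a $0 \times 0$ array):

```python
from itertools import combinations

import numpy as np

def compound(A, r):
    """r-th compound matrix: all r-by-r minors, lexicographic index order."""
    rows = list(combinations(range(A.shape[0]), r))
    cols = list(combinations(range(A.shape[1]), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in cols] for I in rows])

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))

# det(I + A) = sum_{r=0}^{n} tr C_r(A)
lhs = np.linalg.det(np.eye(n) + A)
rhs = sum(np.trace(compound(A, r)) for r in range(n + 1))
```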

Numerical computation
In general, computing compound matrices is expensive: $C_r(A)$ has $\binom{m}{r} \binom{n}{r}$ entries, each a determinant of an $r \times r$ submatrix. Nonetheless, efficient algorithms are available for real matrices with special structure.