Minimal polynomial (linear algebra)

In linear algebra, the minimal polynomial $μ_{A}$ of an $n × n$ matrix $A$ over a field $F$ is the monic polynomial $P$ over $F$ of least degree such that $P(A) = 0$. Any other polynomial $Q$ with $Q(A) = 0$ is a (polynomial) multiple of $μ_{A}$.

The following three statements are equivalent:
 * 1) $λ$ is a root of $μ_{A}$,
 * 2) $λ$ is a root of the characteristic polynomial $χ_{A}$ of $A$,
 * 3) $λ$ is an eigenvalue of matrix $A$.

The multiplicity of a root $λ$ of $μ_{A}$ is the largest power $m$ such that $ker((A − λI_{n})^{m})$ strictly contains $ker((A − λI_{n})^{m−1})$. In other words, increasing the exponent up to $m$ will give ever larger kernels, but further increasing the exponent beyond $m$ will just give the same kernel.
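This kernel-growth behavior can be watched directly. The following is a minimal SymPy sketch, using a hypothetical $3 × 3$ Jordan block with eigenvalue $2$ (so $λ = 2$ is a root of $μ_{A}$ of multiplicity $m = 3$); the matrix and values are illustrative, not taken from this article:

```python
from sympy import Matrix, eye

# Illustrative matrix: a single 3x3 Jordan block with eigenvalue 2,
# so the multiplicity of the root lambda = 2 in mu_A is m = 3.
A = Matrix([[2, 1, 0],
            [0, 2, 1],
            [0, 0, 2]])
lam, n = 2, 3
B = A - lam * eye(n)

# Kernel dimensions of B^m for m = 1, 2, 3, 4: they grow strictly
# until the exponent reaches m = 3, then stay the same.
dims = [len((B**m).nullspace()) for m in range(1, n + 2)]
print(dims)  # → [1, 2, 3, 3]
```

The strict growth stops exactly at the multiplicity, as the paragraph above describes.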

If the field $F$ is not algebraically closed, then the minimal and characteristic polynomials need not factor according to their roots (in $F$) alone, in other words they may have irreducible polynomial factors of degree greater than $1$. For irreducible polynomials $P$ one has similar equivalences:
 * 1) $P$ divides $μ_{A}$,
 * 2) $P$ divides $χ_{A}$,
 * 3) the kernel of $P(A)$ has dimension at least $1$,
 * 4) the kernel of $P(A)$ has dimension at least $deg(P)$.

Like the characteristic polynomial, the minimal polynomial does not depend on the base field. In other words, considering the matrix as one with coefficients in a larger field does not change the minimal polynomial. The reason for this differs from the case with the characteristic polynomial (where it is immediate from the definition of determinants), namely by the fact that the minimal polynomial is determined by the relations of linear dependence between the powers of $A$: extending the base field will not introduce any new such relations (nor of course will it remove existing ones).

The minimal polynomial is often the same as the characteristic polynomial, but not always. For example, if $A$ is a multiple $aI_{n}$ of the identity matrix, then its minimal polynomial is $X − a$ since the kernel of $aI_{n} − A = 0$ is already the entire space; on the other hand its characteristic polynomial is $(X − a)^{n}$ (the only eigenvalue is $a$, and the degree of the characteristic polynomial is always equal to the dimension of the space). The minimal polynomial always divides the characteristic polynomial, which is one way of formulating the Cayley–Hamilton theorem (for the case of matrices over a field).
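The scalar-matrix example can be checked mechanically. A small SymPy sketch with illustrative values $n = 3$, $a = 5$ (these particular numbers are not from the article):

```python
from sympy import Matrix, eye, symbols, factor

x = symbols('x')
n, a = 3, 5            # illustrative values
A = a * eye(n)         # A is a multiple of the identity

# The characteristic polynomial is (x - a)^n ...
print(factor(A.charpoly(x).as_expr()))   # → (x - 5)**3

# ... but x - a already annihilates A, so the minimal polynomial is x - a.
print(A - a * eye(n))  # the zero matrix
```

Here $X − a$ divides $(X − a)^n$, illustrating that the minimal polynomial divides the characteristic polynomial.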

Formal definition
Given an endomorphism $T$ on a finite-dimensional vector space $V$ over a field $F$, let $I_{T}$ be the set defined as
 * $$ \mathit{I}_T = \{ p \in \mathbf{F}[t] \mid p(T) = 0 \} ,$$

where $F[t]$ is the space of all polynomials over the field $F$. $I_{T}$ is a proper ideal of $F[t]$. Since $F$ is a field, $F[t]$ is a principal ideal domain, thus any ideal is generated by a single polynomial, which is unique up to a unit in $F$. A particular choice among the generators can be made, since precisely one of the generators is monic. The minimal polynomial is thus defined to be the monic polynomial that generates $I_{T}$. It is the monic polynomial of least degree in $I_{T}$.
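The generator property can be made concrete with SymPy. In this sketch the matrix is an illustrative nilpotent one (a $2 × 2$ Jordan block padded with a $1 × 1$ zero block, chosen here for illustration): its characteristic polynomial $x^3$ lies in $I_T$ by Cayley–Hamilton, while the monic generator is $x^2$, and polynomial division confirms the divisibility:

```python
from sympy import Matrix, symbols, rem, zeros

x = symbols('x')
# Illustrative nilpotent matrix: a 2x2 Jordan block at 0 plus a 1x1 zero block.
A = Matrix([[0, 1, 0],
            [0, 0, 0],
            [0, 0, 0]])

chi = A.charpoly(x).as_expr()  # x**3, a member of I_T by Cayley-Hamilton
print(A**2 == zeros(3, 3))     # → True: x**2 is already in I_T
print(A == zeros(3, 3))        # → False: x is not, so mu_A = x**2
print(rem(chi, x**2, x))       # → 0: chi is a polynomial multiple of mu_A
```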

Applications
An endomorphism $φ$ of a finite-dimensional vector space over a field $F$ is diagonalizable if and only if its minimal polynomial factors completely over $F$ into distinct linear factors. The fact that there is only one factor $X − λ$ for every eigenvalue $λ$ means that the generalized eigenspace for $λ$ is the same as the eigenspace for $λ$: every Jordan block has size $1$. More generally, if $φ$ satisfies a polynomial equation $P(φ) = 0$ where $P$ factors into distinct linear factors over $F$, then it will be diagonalizable: its minimal polynomial is a divisor of $P$ and therefore also factors into distinct linear factors. In particular one has:


 * $P = X^{k} − 1$: finite order endomorphisms of complex vector spaces are diagonalizable. For the special case $k = 2$ of involutions, this is even true for endomorphisms of vector spaces over any field of characteristic other than $2$, since $X^{2} − 1 = (X − 1)(X + 1)$ is a factorization into distinct factors over such a field. This is a part of representation theory of cyclic groups.
 * $P = X^{2} − X = X(X − 1)$: endomorphisms satisfying $φ^{2} = φ$ are called projections, and are always diagonalizable (moreover their only eigenvalues are $0$ and $1$).
 * By contrast if $μ_{φ} = X^{k}$ with $k ≥ 2$ then $φ$ (a nilpotent endomorphism) is not necessarily diagonalizable, since $X^{k}$ has a repeated root $0$.

These cases can also be proved directly, but the minimal polynomial gives a unified perspective and proof.
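The projection and nilpotent cases above can also be checked computationally; here is a SymPy sketch with illustrative $2 × 2$ matrices (not taken from the article):

```python
from sympy import Matrix, zeros

# A projection (phi^2 = phi): annihilated by x*(x - 1), distinct linear factors.
P = Matrix([[1, 1],
            [0, 0]])
print(P * P == P)              # → True
print(P.is_diagonalizable())   # → True

# A nonzero nilpotent matrix: its minimal polynomial x**2 has a repeated root.
N = Matrix([[0, 1],
            [0, 0]])
print(N * N == zeros(2, 2))    # → True
print(N.is_diagonalizable())   # → False
```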

Computation
For a nonzero vector $v$ in $V$ define:


 * $$ \mathit{I}_{T, v} = \{ p \in \mathbf{F}[t] \; | \; p(T)(v) = 0 \}.$$

This set is a proper ideal of $F[t]$. Let $μ_{T,v}$ be the monic polynomial which generates it.

Example
Define $T$ to be the endomorphism of $\mathbb{R}^3$ with matrix, on the canonical basis,
 * $$\begin{pmatrix} 1 & -1 & -1 \\ 1 & -2 & 1 \\ 0 & 1 & -3 \end{pmatrix}.$$

Taking the first canonical basis vector $e_1$ and its repeated images by $T$ one obtains


 * $$   e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad
T \cdot e_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad T^2\! \cdot e_1 = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix} \quad \mbox{and} \quad T^3\! \cdot e_1 = \begin{bmatrix} 0 \\ 3 \\ -4 \end{bmatrix},$$

of which the first three are easily seen to be linearly independent, and therefore span all of $\mathbb{R}^3$. The last one then necessarily is a linear combination of the first three, in fact

 * $$ T^3\! \cdot e_1 = -4\, T^2\! \cdot e_1 - T \cdot e_1 + e_1 ,$$

so that:

 * $$ \mu_{T, e_1} = X^3 + 4 X^2 + X - 1 .$$

This is in fact also the minimal polynomial $μ_{T}$ and the characteristic polynomial $χ_{T}$&hairsp;: indeed $μ_{T,e_1}$ divides $μ_{T}$, which divides $χ_{T}$, and since the first and last are of degree $3$ and all are monic, they must all be the same. Another reason is that in general if any polynomial in $T$ annihilates a vector $v$, then it also annihilates $T \cdot v$ (just apply $T$ to the equation that says that it annihilates $v$), and therefore by iteration it annihilates the entire space generated by the iterated images by $T$ of $v$; in the current case we have seen that for $v = e_1$ that space is all of $\mathbb{R}^3$, so $μ_{T,e_1}(T) = 0$. Indeed one verifies for the full matrix that $T^3 + 4T^2 + T - I_3$ is the zero matrix:


 * $$\begin{bmatrix} 0 & 1 & -3 \\ 3 & -13 & 23 \\ -4 & 19 & -36 \end{bmatrix} + 4\begin{bmatrix} 0 & 0 & 1 \\ -1 & 4 & -6 \\ 1 & -5 & 10 \end{bmatrix} + \begin{bmatrix} 1 & -1 & -1 \\ 1 & -2 & 1 \\ 0 & 1 & -3 \end{bmatrix} + \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$
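The whole computation — iterating $T$ on $e_1$ until the first linear dependence, reading off $μ_{T,e_1}$ from its coefficients, and checking that it annihilates the full matrix — can be sketched in SymPy. The matrix and starting vector are the ones from this example; the loop structure is one illustrative way to detect the dependence:

```python
from sympy import Matrix, eye, symbols, zeros

x = symbols('x')
T = Matrix([[1, -1, -1],
            [1, -2,  1],
            [0,  1, -3]])
n = T.shape[0]

# Iterated images e1, T*e1, T^2*e1, T^3*e1.
v = Matrix([1, 0, 0])
iterates = [v]
for _ in range(n):
    iterates.append(T * iterates[-1])

# Find the first d with a linear dependence among the first d+1 iterates;
# the dependence coefficients give the monic polynomial mu_{T,e1}.
for d in range(1, n + 1):
    ns = Matrix.hstack(*iterates[:d + 1]).nullspace()
    hit = [c for c in ns if c[d] != 0]
    if hit:
        c = hit[0] / hit[0][d]                     # normalize: monic of degree d
        mu = sum(c[k] * x**k for k in range(d + 1))
        break

print(mu.expand())   # → x**3 + 4*x**2 + x - 1

# mu_{T,e1} also annihilates the full matrix, so mu_T = mu_{T,e1}.
print(T**3 + 4*T**2 + T - eye(3) == zeros(3, 3))   # → True
```

This reproduces both the dependence relation and the zero-matrix verification displayed above.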