Jordan–Chevalley decomposition

In mathematics, specifically linear algebra, the Jordan–Chevalley decomposition, named after Camille Jordan and Claude Chevalley, expresses a linear operator in a unique way as the sum of two other linear operators which are simpler to understand. Specifically, one part is potentially diagonalisable and the other is nilpotent. The two parts are polynomials in the operator, which makes them behave nicely in algebraic manipulations.

The decomposition has a short description when the Jordan normal form of the operator is given, but it exists under weaker hypotheses than are needed for the existence of a Jordan normal form. Hence the Jordan–Chevalley decomposition can be seen as a generalisation of the Jordan normal form, which is also reflected in several proofs of it.

It is closely related to the Wedderburn principal theorem about associative algebras, which also leads to several analogues in Lie algebras. Analogues of the Jordan–Chevalley decomposition also exist for elements of linear algebraic groups and Lie groups via a multiplicative reformulation. The decomposition is an important tool in the study of all of these objects, and was developed for this purpose.

In many texts, the potentially diagonalisable part is also characterised as the semisimple part.

Introduction
A basic question in linear algebra is whether an operator on a finite-dimensional vector space can be diagonalised; this is closely related to the eigenvalues and eigenspaces of the operator. In several contexts, one may be dealing with many operators which are not diagonalisable; even over an algebraically closed field, a diagonalisation may not exist. In this context, the Jordan normal form achieves the best possible result akin to a diagonalisation. For linear operators over a field which is not algebraically closed, there may be no eigenvector at all; this latter point is not the main concern dealt with by the Jordan–Chevalley decomposition. To avoid that problem, potentially diagonalisable operators are considered instead, which are those that admit a diagonalisation over some field extension (or equivalently over the algebraic closure of the field under consideration).

The operators which are "the furthest away" from being diagonalisable are nilpotent operators. An operator (or more generally an element of a ring) $$ x $$ is said to be nilpotent when there is some positive integer $$ m \geq 1 $$ such that $$ x^m = 0 $$. In several contexts in abstract algebra, the presence of nilpotent elements makes a ring much more complicated to work with. To some extent, this is also the case for linear operators. The Jordan–Chevalley decomposition "separates out" the nilpotent part of an operator, which is what causes it to be not potentially diagonalisable. So when it exists, the complications introduced by nilpotent operators and their interaction with other operators can be understood using the Jordan–Chevalley decomposition.

Historically, the Jordan–Chevalley decomposition was motivated by the applications to the theory of Lie algebras and linear algebraic groups, as described in sections below.

Decomposition of a linear operator
Let $$ K $$ be a field, $$ V $$ a finite-dimensional vector space over $$ K $$, and $$ T $$ a linear operator on $$ V $$ (equivalently, a matrix with entries from $$ K $$). If the minimal polynomial of $$ T $$ splits over $$ K $$ (for example if $$ K $$ is algebraically closed), then $$ T $$ has a Jordan normal form $$ T = SJS^{-1} $$. If $$ D $$ is the diagonal of $$ J $$, let $$ R = J - D $$ be the remaining part. Then $$ T = SDS^{-1} + SRS^{-1} $$ is a decomposition where $$ SDS^{-1} $$ is diagonalisable and $$ SRS^{-1} $$ is nilpotent. Restating the normal form as this additive decomposition is useful because, unlike the normal form itself, it can be generalised to cases where the minimal polynomial of $$ T $$ does not split.
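When the characteristic polynomial splits, this split of the Jordan form can be computed directly in a computer algebra system. The following sketch uses SymPy over the rationals; the matrix is chosen purely for illustration (it has a defective eigenvalue, so a non-trivial Jordan block appears):

```python
import sympy as sp

# A matrix whose Jordan form contains a non-trivial block (defective eigenvalue).
T = sp.Matrix([
    [ 5,  4,  2,  1],
    [ 0,  1, -1, -1],
    [-1, -1,  3,  0],
    [ 1,  1, -1,  2]])

S, J = T.jordan_form()                            # T = S * J * S^(-1)
D = sp.diag(*[J[i, i] for i in range(J.rows)])    # diagonal part of J
R = J - D                                         # nilpotent remainder of J

Ts = S * D * S.inv()                              # diagonalisable part of T
Tn = S * R * S.inv()                              # nilpotent part of T

assert Ts + Tn == T                               # additive decomposition
assert Tn ** 4 == sp.zeros(4, 4)                  # Tn is nilpotent
assert Ts * Tn == Tn * Ts                         # the two parts commute
```

Since all the arithmetic is exact rational arithmetic, the three defining properties of the decomposition can be checked by the assertions at the end.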

If the minimal polynomial of $$ T $$ splits into distinct linear factors, then $$ T $$ is diagonalisable. Therefore, if the minimal polynomial of $$ T $$ is at least separable, then $$ T $$ is potentially diagonalisable. The Jordan–Chevalley decomposition is concerned with the more general case where the minimal polynomial of $$ T $$ is a product of separable polynomials.
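Over a perfect field such as $$ \mathbb{Q} $$, potential diagonalisability can be tested mechanically: the operator is potentially diagonalisable exactly when the square-free part of its characteristic polynomial annihilates it (that part has the same irreducible factors as the minimal polynomial). A small SymPy sketch; the function name is ours, not a library routine:

```python
import sympy as sp

t = sp.symbols('t')

def potentially_diagonalisable(A):
    """Over QQ (a perfect field): A is potentially diagonalisable iff q(A) = 0,
    where q is the square-free part of the characteristic polynomial
    (q has the same roots as the minimal polynomial, without multiplicity)."""
    p = A.charpoly(t).as_expr()
    q = sp.quo(p, sp.gcd(p, sp.diff(p, t)), t)    # square-free part of p
    qA = sp.zeros(A.rows, A.rows)
    for c in sp.Poly(q, t).all_coeffs():          # Horner evaluation of q(A)
        qA = qA * A + c * sp.eye(A.rows)
    return qA == sp.zeros(A.rows, A.rows)

# A rotation has no eigenvalues over QQ, but is diagonalisable over the closure:
print(potentially_diagonalisable(sp.Matrix([[0, -1], [1, 0]])))   # True
# A non-trivial Jordan block is not even potentially diagonalisable:
print(potentially_diagonalisable(sp.Matrix([[2, 1], [0, 2]])))    # False
```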

Let $$ x: V \to V $$ be any linear operator on the finite-dimensional vector space $$ V $$ over the field $$ K $$. A Jordan–Chevalley decomposition of $$ x $$ is an expression of it as a sum
 * $$ x = x_s + x_n $$ ,

where $$ x_s $$ is potentially diagonalisable, $$ x_n $$ is nilpotent, and $$ x_s x_n = x_n x_s $$.


Several proofs are discussed in the literature. Two arguments are also described below.

If $$ K $$ is a perfect field, then every polynomial is a product of separable polynomials (since every polynomial is a product of its irreducible factors, and these are separable over a perfect field). So in this case, the Jordan–Chevalley decomposition always exists. Moreover, over a perfect field, a polynomial is separable if and only if it is square-free. Therefore an operator is potentially diagonalisable if and only if its minimal polynomial is square-free. In general (over any field), the minimal polynomial of a linear operator is square-free if and only if the operator is semisimple. (In particular, the sum of two commuting semisimple operators is always semisimple over a perfect field. The same statement is not true over general fields.) The property of being semisimple is more relevant than being potentially diagonalisable in most contexts where the Jordan–Chevalley decomposition is applied, such as for Lie algebras. For these reasons, many texts restrict to the case of perfect fields.

Proof of uniqueness and necessity
That $$ x_s $$ and $$ x_n $$ are polynomials in $$ x $$ implies in particular that they commute with any operator that commutes with $$ x $$. This observation underlies the uniqueness proof.

Let $$x = x_s + x_n$$ be a Jordan–Chevalley decomposition in which $$ x_s $$ and (hence also) $$ x_n $$ are polynomials in $$ x $$. Let $$x = x_s' + x_n'$$ be any Jordan–Chevalley decomposition. Then $$x_s - x_s' = x_n' - x_n$$, and $$x_s', x_n'$$ both commute with $$ x $$, hence with $$x_s, x_n$$ since these are polynomials in $$x$$. The sum of commuting nilpotent operators is again nilpotent, and the sum of commuting potentially diagonalisable operators is again potentially diagonalisable (because they are simultaneously diagonalisable over the algebraic closure of $$ K $$). Since the only operator which is both potentially diagonalisable and nilpotent is the zero operator, it follows that $$x_s - x_s' = 0 = x_n - x_n'$$.

To show that the condition that $$ x $$ have a minimal polynomial which is a product of separable polynomials is necessary, suppose that $$ x = x_s + x_n $$ is some Jordan–Chevalley decomposition. Letting $$ p $$ be the separable minimal polynomial of $$ x_s $$, one can check using the binomial theorem that $$ p(x_s + x_n) $$ can be written as $$ x_n y $$ where $$ y $$ is some polynomial in $$ x_s, x_n $$. Moreover, for some $$ \ell \geq 1 $$, $$ x_n^\ell = 0 $$. Thus $$ p(x)^\ell = x_n^\ell y^\ell = 0 $$ and so the minimal polynomial of $$ x $$ must divide $$ p^\ell $$. As $$ p^\ell $$ is a product of separable polynomials (namely of copies of $$ p $$), so is the minimal polynomial.

Concrete example for non-existence
If the ground field is not perfect, then a Jordan–Chevalley decomposition may not exist, as it is possible that the minimal polynomial is not a product of separable polynomials. The simplest such example is the following. Let $$ p $$ be a prime number, let $$k$$ be an imperfect field of characteristic $$p$$ (e.g. $$ k = \mathbb{F}_p(t) $$), and choose $$a \in k$$ that is not a $$p$$th power. Let $$V = k[X]/\left(X^p - a\right)^2,$$ let $$x = \overline X$$ be the image of $$X$$ in the quotient, and let $$T$$ be the $$k$$-linear operator given by multiplication by $$x$$ on $$V$$. Note that the minimal polynomial of $$T$$ is precisely $$ \left(X^p - a\right)^2 $$, which is inseparable and a square. By the necessity of the condition for the Jordan–Chevalley decomposition (as shown in the last section), this operator does not have a Jordan–Chevalley decomposition. It can be instructive to see concretely why there is not even a decomposition into a semisimple and a nilpotent part.

Note that $$ T $$ has as its invariant $$k$$-linear subspaces precisely the ideals of $$V$$ viewed as a ring, which correspond to the ideals of $$k[X]$$ containing $$\left(X^p - a\right)^2$$. Since $$X^p - a$$ is irreducible in $$k[X],$$ the ideals of $$ V $$ are $$0,$$ $$V$$ and $$J = \left(x^p - a\right)V.$$

Suppose $$T = S + N$$ for commuting $$k$$-linear operators $$S$$ and $$N$$ that are respectively semisimple (just over $$k$$, which is weaker than semisimplicity over an algebraic closure of $$k$$ and also weaker than being potentially diagonalisable) and nilpotent. Since $$S$$ and $$N$$ commute, they each commute with $$T = S + N$$ and hence each acts $$k[x]$$-linearly on $$V$$. Therefore $$S$$ and $$N$$ are each given by multiplication by respective elements $$s = S(1)$$ and $$n = N(1)$$ of $$V,$$ with $$s + n = T(1) = x$$. Since $$N$$ is nilpotent, $$n$$ is nilpotent in $$V,$$ therefore $$\overline n = 0$$ in $$V/J,$$ for $$V/J$$ is a field. Hence, $$n\in J,$$ therefore $$n = \left(x^p - a\right)h(x)$$ for some polynomial $$h(X) \in k[X]$$. Also, $$n^2 = 0,$$ since $$J^2 = 0.$$

Since $$k$$ is of characteristic $$p,$$ we have $$x^p = (s + n)^p = s^p + n^p = s^p$$ (using $$n^2 = 0$$). On the other hand, since $$\overline x = \overline s$$ in $$V/J,$$ we have $$h\left(\overline s\right) = h\left(\overline x\right),$$ therefore $$h(s) - h(x)\in J$$ in $$V.$$ Since $$\left(x^p - a\right)J = 0,$$ we have $$\left(x^p - a\right)h(x) = \left(x^p - a\right)h(s).$$ Combining these results we get $$x = s + n = s + \left(s^p - a\right)h(s).$$ This shows that $$s$$ generates $$V$$ as a $$k$$-algebra, and thus the $$S$$-stable $$k$$-linear subspaces of $$V$$ are exactly the ideals of $$V,$$ i.e. they are $$0,$$ $$J$$ and $$V.$$ But then $$J$$ is an $$S$$-invariant subspace of $$V$$ which has no complementary $$S$$-invariant subspace, contrary to the assumption that $$S$$ is semisimple. Thus, there is no decomposition of $$T$$ as a sum of commuting $$k$$-linear operators that are respectively semisimple and nilpotent.

If the same construction is performed with the polynomial $$ X^p - a $$ instead of $$ \left(X^p - a\right)^2 $$, the resulting operator $$ T $$ still does not admit a Jordan–Chevalley decomposition in the sense of the main theorem, since its minimal polynomial $$ X^p - a $$ is inseparable. However, this $$ T $$ is semisimple. The trivial decomposition $$ T = T + 0 $$ hence expresses $$ T $$ as a sum of a semisimple and a nilpotent operator, both of which are polynomials in $$ T $$.

Elementary proof of existence
This construction is similar to Hensel's lemma in that it uses an algebraic analogue of Taylor's theorem to find an element with a certain algebraic property via a variant of Newton's method.

Let $$ x $$ have minimal polynomial $$ p $$, and assume this is a product of separable polynomials. This condition is equivalent to demanding that there is some separable $$ q $$ such that $$ q \mid p $$ and $$ p \mid q^m $$ for some $$ m \geq 1 $$. Since $$ q $$ is separable, $$ \gcd(q, q') = 1 $$, so by Bézout's lemma there are polynomials $$u$$ and $$v$$ such that $$uq + vq' = 1$$. This can be used to define a recursion $$x_{n+1} = x_n - v(x_n)q(x_n)$$, starting with $$x_0 = x$$. Letting $$\mathfrak{X}$$ be the algebra of operators which are polynomials in $$x$$, it can be checked by induction that for all $$n$$:


 * $$x_n \in \mathfrak{X}$$ because in each step, a polynomial is applied,
 * $$x_n - x \in q(x) \cdot \mathfrak{X}$$ because $$x_{n+1} - x = (x_{n+1} - x_n) + (x_n - x)$$ and both terms are in $$q(x) \cdot \mathfrak{X}$$ by induction hypothesis,
 * $$q(x_n) \in q(x)^{2^n} \cdot \mathfrak{X}$$ because $$q(x_{n+1}) = q(x_n) + q'(x_n) (x_{n+1} - x_n) + (x_{n+1} - x_n)^2 h$$ for some $$h \in \mathfrak{X}$$ (by the algebraic version of Taylor's theorem). By definition of $$x_{n+1}$$ as well as of $$u$$ and $$v$$, this simplifies to $$q(x_{n+1}) = q(x_n)^2 (u(x_n) + v(x_n)^2  h) $$, which indeed lies in $$q(x)^{2^{n+1}} \cdot \mathfrak{X} $$ by induction hypothesis.

Thus, as soon as $$2^n \geq m $$, $$q(x_n) = 0 $$ by the third point since $$p \mid q^m$$ and $$p(x) = 0$$, so the minimal polynomial of $$x_n $$ will divide $$q $$ and hence be separable. Moreover, $$x_n $$ will be a polynomial in $$x $$ by the first point and $$x_n - x $$ will be nilpotent by the second point (in fact, $$(x_n - x)^m=0 $$). Therefore, $$x = x_n + (x - x_n) $$ is then the Jordan–Chevalley decomposition of $$x $$. Q.E.D.

This proof, besides being completely elementary, has the advantage that it is algorithmic: by the Cayley–Hamilton theorem, $$p $$ can be taken to be the characteristic polynomial of $$x $$, and in many contexts, $$q $$ can be determined from $$ p $$ (for example, over a perfect field, as its square-free part). Then $$v $$ can be determined using the extended Euclidean algorithm. The iteration $$x_{n+1} = x_n - v(x_n)q(x_n)$$ can then be performed until either $$v(x_n) q(x_n) = 0 $$ (because then all later values will be equal) or $$2^n $$ exceeds the dimension of the vector space on which $$x $$ is defined (where $$n $$ is the number of iteration steps performed, as above).
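This procedure can be implemented almost verbatim in a computer algebra system. Below is a sketch in SymPy over the rationals (the helper names are ours); it takes $$ p $$ to be the characteristic polynomial, computes its square-free part $$ q $$ (which is separable in characteristic zero), and iterates the Newton step from the proof:

```python
import sympy as sp

t = sp.symbols('t')

def eval_poly_at_matrix(poly_expr, M):
    # Evaluate a polynomial in t at the square matrix M (Horner's rule).
    R = sp.zeros(M.rows, M.rows)
    for c in sp.Poly(poly_expr, t).all_coeffs():   # highest degree first
        R = R * M + c * sp.eye(M.rows)
    return R

def jordan_chevalley(A):
    """Additive Jordan-Chevalley decomposition A = S + N over QQ via the
    Newton-style iteration x_{n+1} = x_n - v(x_n) q(x_n) described above."""
    p = A.charpoly(t).as_expr()
    q = sp.quo(p, sp.gcd(p, sp.diff(p, t)), t)     # square-free part of p
    u, v, g = sp.gcdex(q, sp.diff(q, t), t)        # u*q + v*q' = g (g constant)
    v = sp.cancel(v / g)                           # normalise: u*q + v*q' = 1
    X = A
    for _ in range(A.rows + 1):                    # 2^n >= deg p is reached quickly
        qX = eval_poly_at_matrix(q, X)
        if qX == sp.zeros(A.rows, A.rows):
            break
        X = X - eval_poly_at_matrix(v, X) * qX     # the Newton step
    return X, A - X          # (potentially diagonalisable part, nilpotent part)
```

For instance, for the Jordan block with eigenvalue 2, `jordan_chevalley(sp.Matrix([[2, 1], [0, 2]]))` converges after a single step and returns the pair `(2*eye(2), Matrix([[0, 1], [0, 0]]))`.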

Proof of existence via Galois theory
This proof, or variants of it, is commonly used to establish the Jordan–Chevalley decomposition. It has the advantage that it is very direct and describes quite precisely how close one can get to a Jordan–Chevalley decomposition: if $$L $$ is the splitting field of the minimal polynomial of $$x $$ and $$G $$ is the group of automorphisms of $$L $$ that fix the base field $$K $$, then the set $$F $$ of elements of $$L $$ that are fixed by all elements of $$G $$ is a field with inclusions $$K \subseteq F \subseteq L $$ (see Galois correspondence). Below it is argued that $$x $$ admits a Jordan–Chevalley decomposition over $$F $$, but not over any smaller field. This argument does not use Galois theory. However, Galois theory is required to deduce from this the condition for the existence of the Jordan–Chevalley decomposition given above.

Above it was observed that if $$ x $$ has a Jordan normal form (i.e. if the minimal polynomial of $$ x $$ splits), then it has a Jordan–Chevalley decomposition. In this case, one can also see directly that $$ x_n $$ (and hence also $$ x_s $$) is a polynomial in $$ x $$. Indeed, it suffices to check this for the decomposition of the Jordan matrix $$ J = D + R $$. This is a technical argument, but it does not require any tricks beyond the Chinese remainder theorem.

From the Jordan normal form, we get a decomposition $$ V = \bigoplus_{i = 1}^r V_i $$, where $$ V_i $$ is the generalised eigenspace (the sum of the Jordan blocks) for the eigenvalue $$ \lambda_i $$, and $$ \lambda_1, \dots, \lambda_r $$ are the distinct eigenvalues. Now let $$f(t) = \operatorname{det}(t I - x)$$ be the characteristic polynomial of $$ x $$. Because $$ f $$ splits, it can be written as $$ f(t) = \prod_{i=1}^r (t - \lambda_i)^{d_i} $$, where $$ d_i = \dim V_i $$. Now, the Chinese remainder theorem applied to the polynomial ring $$k[t]$$ gives a polynomial $$p(t)$$ satisfying the conditions
 * $$p(t) \equiv 0 \bmod t,\, p(t) \equiv \lambda_i \bmod (t - \lambda_i)^{d_i}$$ (for all i).

(There is a redundancy in the conditions if some $$\lambda_i$$ is zero, but that is not an issue; just remove the condition modulo $$t$$, which the condition modulo $$(t - \lambda_i)^{d_i} = t^{d_i}$$ then implies.) The condition $$p(t) \equiv \lambda_i \bmod (t - \lambda_i)^{d_i}$$, when spelled out, means that $$p(t) - \lambda_i = g_i(t) (t - \lambda_i)^{d_i}$$ for some polynomial $$g_i(t)$$. Since $$(x - \lambda_i I)^{d_i}$$ is the zero map on $$V_i$$, $$p(x)$$ and $$x_s$$ agree on each $$V_i$$, because $$x_s$$ acts on $$V_i$$ as multiplication by $$\lambda_i$$; i.e., $$p(x) = x_s$$. Then also $$q(x) = x_n$$ with $$q(t) = t - p(t)$$. The condition $$p(t) \equiv 0 \bmod t$$ ensures that $$p(t)$$ and $$q(t)$$ have no constant terms. This completes the proof of the theorem in the case that the minimal polynomial of $$ x $$ splits.
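The Chinese remainder step can be carried out explicitly. A sketch in SymPy (the helper `crt_poly` is ours, not a SymPy built-in), following the classical construction of a polynomial that is congruent to 1 modulo one modulus and to 0 modulo the others:

```python
import sympy as sp

t = sp.symbols('t')

def crt_poly(congruences):
    """Return a polynomial p with p ≡ r (mod m) for each pair (r, m) in
    `congruences`, assuming the moduli m are pairwise coprime in QQ[t]."""
    M = sp.prod([m for _, m in congruences])
    p = sp.Integer(0)
    for r, m in congruences:
        Mi = sp.quo(M, m, t)                  # product of the other moduli
        a, b, g = sp.gcdex(Mi, m, t)          # a*Mi + b*m = g, with g constant
        p += r * sp.cancel(a / g) * Mi        # term is ≡ r mod m, ≡ 0 mod others
    return sp.expand(sp.rem(p, M, t))

# Distinct eigenvalues 1 (d = 1) and 3 (d = 2), plus the condition p ≡ 0 mod t:
p = crt_poly([(0, t), (1, t - 1), (3, (t - 3)**2)])
assert sp.rem(p, t, t) == 0
assert sp.rem(p - 1, t - 1, t) == 0
assert sp.expand(sp.rem(p - 3, (t - 3)**2, t)) == 0
```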

This fact can be used to deduce the Jordan–Chevalley decomposition in the general case. Let $$ L $$ be the splitting field of the minimal polynomial of $$ x $$, so that $$ x $$ does admit a Jordan normal form over $$ L $$. Then, by the argument just given, $$ x $$ has a Jordan–Chevalley decomposition $$ x = {c(x)} + {(x - {c(x)})} $$ where $$ c $$ is a polynomial with coefficients from $$ L $$, $$c(x) $$ is diagonalisable (over $$ L $$) and $$ x - c(x) $$ is nilpotent.

Let $$ \sigma $$ be a field automorphism of $$ L $$ which fixes $$ K $$. Then $$c(x) + (x-{c(x)}) = x = {\sigma(x)} = {\sigma({c(x)})} + {\sigma(x- {c(x)} )}.$$ Here $$\sigma(c(x)) = \sigma(c)(x)$$ is a polynomial in $$x$$, and so is $$ \sigma(x - c(x)) $$; in particular, the two commute. Moreover, $$ \sigma (c(x)) $$ is potentially diagonalisable and $$ \sigma({x - c(x)}) $$ is nilpotent. Thus, by the uniqueness of the Jordan–Chevalley decomposition (over $$L$$), $$\sigma(c(x)) = c(x)$$ and $$\sigma(x - c(x)) = x - c(x)$$. The entries of $$ x_s = c(x) $$ and $$ x_n = x - c(x) $$ are therefore fixed by every such $$ \sigma $$; that is, $$ x_s, x_n $$ are endomorphisms (represented by matrices) over $$ F $$. Finally, since $$\left\{1, x, x^2, \dots\right\}$$ contains an $$L$$-basis of the space it spans, which contains $$x_s, x_n$$, the same argument shows that $$ c $$ can be chosen with coefficients in $$ F $$. Q.E.D.

If the minimal polynomial of $$ x $$ is a product of separable polynomials, then the field extension $$ L/K $$ is Galois, meaning that $$ F = K $$.

Separable algebras
The Jordan–Chevalley decomposition is very closely related to the Wedderburn principal theorem in the following formulation:
 * Let $$ A $$ be a finite-dimensional associative algebra over the field $$ K $$ with Jacobson radical $$ J $$. Then $$ A/J $$ is separable if and only if $$ A $$ has a separable subalgebra $$ B $$ such that $$ A = B \oplus J $$ as vector spaces.

Usually, the term "separable" in this theorem refers to the general concept of a separable algebra, and the theorem might then be established as a corollary of a more general high-powered result. However, if it is instead interpreted in the more basic sense that every element has a separable minimal polynomial, then this statement is essentially equivalent to the Jordan–Chevalley decomposition as described above. This gives a different way to view the decomposition, and some texts take this route to establish it.

To see how the Jordan–Chevalley decomposition follows from the Wedderburn principal theorem, let $$ V $$ be a finite-dimensional vector space over the field $$ K $$, $$x : V \to V$$ an endomorphism whose minimal polynomial is a product of separable polynomials, and $$A = K[x] \subset \operatorname{End}(V)$$ the subalgebra generated by $$ x $$, with Jacobson radical $$ J $$. Note that $$ A $$ is a commutative Artinian ring, so $$ J $$ is also the nilradical of $$ A $$. Moreover, $$A/J$$ is separable: if $$ a \in A $$, then its minimal polynomial $$ p $$ is again a product of separable polynomials, so there is a separable polynomial $$ q $$ such that $$ q \mid p $$ and $$ p \mid q^m $$ for some $$ m \geq 1 $$. Therefore $$ q(a) \in J $$, so the minimal polynomial of the image $$ a + J \in A/J $$ divides $$ q $$, meaning that it must be separable as well (since a divisor of a separable polynomial is separable). By the Wedderburn principal theorem, there is then a vector-space decomposition $$ A = B \oplus J $$ with $$ B $$ separable. In particular, the endomorphism $$ x $$ can be written as $$x = x_s + x_n$$ where $$x_s \in B$$ and $$x_n \in J$$. Moreover, both elements are, like any element of $$ A $$, polynomials in $$ x $$.

Conversely, the Wedderburn principal theorem in the formulation above is a consequence of the Jordan–Chevalley decomposition. If $$ A $$ has a separable subalgebra $$ B $$ such that $$ A = B \oplus J $$, then $$ A/J \cong B $$ is separable. Conversely, if $$ A/J $$ is separable, then any element of $$ A $$ is a sum of a separable and a nilpotent element. As shown in the proof of necessity above, this implies that its minimal polynomial is a product of separable polynomials. Let $$ x \in A $$ be arbitrary, define the operator $$ T_x: A \to A, a \mapsto ax $$, and note that this has the same minimal polynomial as $$ x $$. So it admits a Jordan–Chevalley decomposition, where both operators are polynomials in $$ T_x $$, hence of the form $$ T_s, T_n $$ for some $$ s, n \in A $$, where $$ s $$ has a separable minimal polynomial and $$ n $$ is nilpotent. Moreover, this decomposition is unique. Thus if $$ B $$ is the subalgebra of all separable elements (that this is a subalgebra can be seen by recalling that $$ s $$ is separable if and only if $$ T_s $$ is potentially diagonalisable), then $$ A = B \oplus J $$ (because $$ J $$ is the ideal of nilpotent elements). The algebra $$ B \cong A/J $$ is separable and semisimple by assumption.

Over perfect fields, this result simplifies. Indeed, $$ A/J $$ is then always separable in the sense of minimal polynomials: If $$ a \in A $$, then the minimal polynomial $$ p $$ is a product of separable polynomials, so there is a separable polynomial $$ q $$ such that $$ q \mid p $$ and $$ p \mid q^m $$ for some $$ m \geq 1 $$. Thus $$ q(a) \in J $$. So in $$ A/J $$, the minimal polynomial of $$ a + J $$ divides $$ q $$ and is hence separable. The crucial point in the theorem is then not that $$ A/J $$ is separable (because that condition is vacuous), but that it is semisimple, meaning its radical is trivial.

The same statement is true for Lie algebras, but only in characteristic zero. This is the content of Levi’s theorem. (Note that the notions of semisimple in both results do indeed correspond, because in both cases this is equivalent to being the sum of simple subalgebras or having trivial radical, at least in the finite-dimensional case.)

Preservation under representations
The crucial point in the proof of the Wedderburn principal theorem above is that an element $$ x \in A $$ corresponds to a linear operator $$ T_x: A \to A $$ with the same properties. In the theory of Lie algebras, the analogous construction is the adjoint representation of a Lie algebra $$ \mathfrak{g} $$: the operator $$\operatorname{ad}(x)$$ has a Jordan–Chevalley decomposition $$\operatorname{ad}(x) = \operatorname{ad}(x)_s + \operatorname{ad}(x)_n$$. Just as in the associative case, one would like this to correspond to a decomposition of $$ x $$ itself, but polynomials are no longer available as a tool. One context in which this does make sense is the restricted case where $$ \mathfrak{g} $$ is contained in the Lie algebra $$ \mathfrak{gl}(V) $$ of the endomorphisms of a finite-dimensional vector space $$ V $$ over the perfect field $$ K $$. Indeed, any semisimple Lie algebra can be realised in this way.

If $$x = x_s + x_n$$ is the Jordan decomposition, then $$\operatorname{ad}(x) = \operatorname{ad}(x_s) + \operatorname{ad}(x_n)$$ is the Jordan decomposition of the adjoint endomorphism $$\operatorname{ad}(x)$$ on the vector space $$\mathfrak{g}$$. Indeed, first, $$\operatorname{ad}(x_s)$$ and $$\operatorname{ad}(x_n)$$ commute since $$[\operatorname{ad}(x_s), \operatorname{ad}(x_n)] = \operatorname{ad}([x_s, x_n]) = 0$$. Second, in general, for each endomorphism $$y \in \mathfrak{gl}(V)$$, we have:


 * 1) If $$y^m = 0$$, then $$\operatorname{ad}(y)^{2m-1} = 0$$, since $$\operatorname{ad}(y)$$ is the difference of the commuting left and right multiplications by $$y$$.
 * 2) If $$y$$ is semisimple, then $$\operatorname{ad}(y)$$ is semisimple, since semisimple is equivalent to potentially diagonalisable over a perfect field (if $$ y $$ is diagonal with respect to the basis $$ \{b_1, \dots, b_n \} $$, then $$ \operatorname{ad}(y)  $$ is diagonal with respect to the basis consisting of the maps $$ M_{ij} $$ with $$ b_i \mapsto b_j $$ and $$ b_k \mapsto 0 $$ for $$ k \neq i $$).

Hence, by uniqueness, $$\operatorname{ad}(x)_s = \operatorname{ad}(x_s)$$ and $$\operatorname{ad}(x)_n = \operatorname{ad}(x_n)$$.
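The diagonalisation claim in 2) can be verified symbolically. A small SymPy check with a generic diagonal $$ y $$ on a 3-dimensional space (conventions as in the text, with each map $$ M_{ij} $$ realised as a matrix unit):

```python
import sympy as sp

l1, l2, l3 = sp.symbols('l1 l2 l3')
lam = [l1, l2, l3]
y = sp.diag(l1, l2, l3)        # y diagonal with respect to the basis b_1, b_2, b_3

# The map M_ij sends b_i to b_j and kills the other basis vectors; as a matrix
# it has a single 1 in row j, column i. It is an eigenvector of ad(y) with
# eigenvalue l_j - l_i, so ad(y) is diagonal over the basis of all M_ij.
for i in range(3):
    for j in range(3):
        M = sp.zeros(3, 3)
        M[j, i] = 1
        assert y * M - M * y == (lam[j] - lam[i]) * M
```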

The adjoint representation is a very natural and general representation of any Lie algebra. The argument above illustrates (and indeed proves) a general principle which generalises this: If $$\pi: \mathfrak{g} \to \mathfrak{gl}(V)$$ is any finite-dimensional representation of a semisimple finite-dimensional Lie algebra over a perfect field, then $$\pi$$ preserves the Jordan decomposition in the following sense: if $$x = x_s + x_n$$, then $$\pi(x_s) = \pi(x)_s$$ and $$\pi(x_n) = \pi(x)_n$$.

Nilpotency criterion
The Jordan decomposition can be used to characterise nilpotency of an endomorphism. Let $$ k $$ be an algebraically closed field of characteristic zero, $$E = \operatorname{End}_{\mathbb{Q}}(k)$$ the ring of $$\mathbb{Q}$$-linear endomorphisms of $$ k $$, and $$ V $$ a finite-dimensional vector space over $$ k $$. Given an endomorphism $$x : V \to V$$, let $$x = s + n$$ be the Jordan decomposition. Then $$s$$ is diagonalisable; i.e., $$V = \bigoplus V_i$$ where each $$V_i$$ is the eigenspace for the eigenvalue $$\lambda_i$$ with multiplicity $$m_i = \dim V_i$$. Then for any $$\varphi\in E$$ let $$\varphi(s) : V \to V$$ be the endomorphism such that $$\varphi(s) : V_i \to V_i$$ is multiplication by $$\varphi(\lambda_i)$$. Chevalley calls $$\varphi(s)$$ the replica of $$s$$ given by $$\varphi$$. (For example, if $$k = \mathbb{C}$$, then the complex conjugate of an endomorphism is an example of a replica.) The criterion then reads:
 * $$ x $$ is nilpotent (i.e., $$ s = 0 $$) if and only if $$\operatorname{tr}(x\varphi(s)) = 0$$ for every $$\varphi \in E$$.

Proof: First, suppose that $$\operatorname{tr}(x\varphi(s)) = 0$$ for every $$\varphi \in E$$ (the converse direction is immediate). Since $$n \varphi(s)$$ is nilpotent (as $$ n $$ and $$ \varphi(s) $$ commute), its trace vanishes, and hence
 * $$0 = \operatorname{tr}(x\varphi(s)) = \sum_i \operatorname{tr}\left(s\varphi(s) | V_i\right) = \sum_i m_i \lambda_i\varphi(\lambda_i)$$.

If $$\varphi$$ is the complex conjugation, this implies $$\lambda_i = 0$$ for every i. Otherwise, take $$\varphi$$ to be a $$\mathbb{Q}$$-linear functional $$\varphi : k \to \mathbb{Q}$$ followed by $$\mathbb{Q} \hookrightarrow k$$. Applying that to the above equation, one gets:
 * $$\sum_i m_i \varphi(\lambda_i)^2 = 0$$

and, since the $$\varphi(\lambda_i)$$ are all rational numbers, $$\varphi(\lambda_i) = 0$$ for every i. Varying the linear functionals then implies $$\lambda_i = 0$$ for every i. $$\square$$

A typical application of the above criterion is the proof of Cartan's criterion for solvability of a Lie algebra. It says: if $$\mathfrak{g} \subset \mathfrak{gl}(V)$$ is a Lie subalgebra over a field k of characteristic zero such that $$\operatorname{tr}(xy) = 0$$ for each $$x \in \mathfrak{g}, y \in D \mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$$, then $$\mathfrak{g}$$ is solvable.

Proof: Without loss of generality, assume k is algebraically closed. By Lie's theorem and Engel's theorem, it suffices to show that each $$x \in D \mathfrak g$$ is a nilpotent endomorphism of V. Write $$x = \sum_i [x_i, y_i]$$ and let $$x = s + n$$ be the Jordan decomposition of $$x$$. By the nilpotency criterion above, it then suffices to show that, for each replica $$\varphi(s)$$,
 * $$\operatorname{tr}(x \varphi(s)) = \sum_i \operatorname{tr}([x_i, y_i] \varphi(s)) = \sum_i \operatorname{tr}(x_i [y_i, \varphi(s)])$$

is zero. Let $$\mathfrak{g}' = \mathfrak{gl}(V)$$. Note we have: $$\operatorname{ad}_{\mathfrak{g}'}(x) : \mathfrak{g} \to D \mathfrak{g}$$ and, since $$\operatorname{ad}_{\mathfrak{g}'}(s)$$ is the semisimple part of the Jordan decomposition of $$\operatorname{ad}_{\mathfrak{g}'}(x)$$, it follows that $$\operatorname{ad}_{\mathfrak{g}'}(s)$$ is a polynomial without constant term in $$\operatorname{ad}_{\mathfrak{g}'}(x)$$; hence, $$\operatorname{ad}_{\mathfrak{g}'}(s) : \mathfrak{g} \to D \mathfrak{g}$$ and the same is true with $$\varphi(s)$$ in place of $$s$$. That is, $$[\varphi(s), \mathfrak{g}] \subset D \mathfrak{g}$$, which implies the claim given the assumption. $$\square$$

Real semisimple Lie algebras
In the formulation of Chevalley and Mostow, the additive decomposition states that an element X in a real semisimple Lie algebra g with Iwasawa decomposition g = k ⊕ a ⊕ n  can be written as the sum of three commuting elements of the Lie algebra X = S + D + N, with S, D and N conjugate to elements in k, a and n respectively. In general the terms in the Iwasawa decomposition do not commute.

Multiplicative decomposition
If $$ x $$ is an invertible linear operator, it may be more convenient to use a multiplicative Jordan–Chevalley decomposition. This expresses $$ x $$ as a product
 * $$ x = x_s \cdot x_u $$,

where $$ x_s $$ is potentially diagonalisable, $$ x_u - 1 $$ is nilpotent (one also says that $$ x_u $$ is unipotent), and $$ x_s x_u = x_u x_s $$.

The multiplicative version of the decomposition follows from the additive one since $$x_s$$ is invertible (indeed, $$x_s = x + (-x_n)$$ is the sum of an invertible operator and a commuting nilpotent operator, and such a sum is invertible)
 * $$x = x_s + x_n = x_s\left(1 + x_s^{-1}x_n\right)$$

and $$1 + x_s^{-1}x_n$$ is unipotent. (Conversely, by the same type of argument, one can deduce the additive version from the multiplicative one.)
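As a concrete sanity check, here is a short SymPy sketch for a single 2×2 example (the matrices are chosen purely for illustration):

```python
import sympy as sp

# Additive decomposition of an invertible operator: x = x_s + x_n.
x  = sp.Matrix([[2, 1], [0, 2]])
xs = 2 * sp.eye(2)                       # diagonalisable (here scalar) part
xn = sp.Matrix([[0, 1], [0, 0]])         # nilpotent part, commutes with xs

# Multiplicative decomposition: x = x_s * x_u with x_u unipotent.
xu = sp.eye(2) + xs.inv() * xn

assert xs * xu == x                               # x = x_s * x_u
assert xu * xs == x                               # the two factors commute
assert (xu - sp.eye(2)) ** 2 == sp.zeros(2, 2)    # x_u - 1 is nilpotent
```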

The multiplicative version is closely related to decompositions encountered in the theory of linear algebraic groups. For this it is again useful to assume that the underlying field $$ K $$ is perfect, because then the Jordan–Chevalley decomposition exists for all matrices.

Linear algebraic groups
Let $$G$$ be a linear algebraic group over a perfect field. Then, essentially by definition, there is a closed embedding $$G \hookrightarrow \mathbf{GL}_n$$. Now, to each element $$g \in G$$, the multiplicative Jordan decomposition associates a pair of a semisimple element $$g_s$$ and a unipotent element $$g_u$$, a priori in $$\mathbf{GL}_n$$, such that $$g = g_s g_u = g_u g_s$$. But, as it turns out, the elements $$g_s, g_u$$ can be shown to lie in $$G$$ (i.e., they satisfy the defining equations of G), and they are independent of the embedding into $$\mathbf{GL}_n$$; i.e., the decomposition is intrinsic.

When G is abelian, $$G$$ is then the direct product of the closed subgroup of the semisimple elements in G and that of unipotent elements.

Real semisimple Lie groups
The multiplicative decomposition states that if g is an element of the corresponding connected semisimple Lie group G with corresponding Iwasawa decomposition G = KAN, then g can be written as the product of three commuting elements g = sdu with s, d and u conjugate to elements of K, A and N respectively. In general the terms in the Iwasawa decomposition g = kan do not commute.