Whitehead's lemma (Lie algebra)

In homological algebra, Whitehead's lemmas (named after J. H. C. Whitehead) are a series of statements concerning the representation theory of finite-dimensional, semisimple Lie algebras in characteristic zero. Historically, they are regarded as leading to the discovery of Lie algebra cohomology.

One usually distinguishes between Whitehead's first and second lemmas, which are the corresponding statements about cohomology in degrees one and two, respectively; there are also similar statements about Lie algebra cohomology in arbitrary degree that are likewise attributed to Whitehead.

The first Whitehead lemma is an important step toward the proof of Weyl's theorem on complete reducibility.

Statements
Without mentioning cohomology groups, one can state Whitehead's first lemma as follows: Let $$\mathfrak{g}$$ be a finite-dimensional, semisimple Lie algebra over a field of characteristic zero, V a finite-dimensional module over it, and $$f\colon \mathfrak{g} \to V$$ a linear map such that
 * $$f([x, y]) = xf(y) - yf(x)$$.

Then there exists a vector $$v \in V$$ such that $$f(x) = xv$$ for all $$x \in \mathfrak{g}$$. In terms of Lie algebra cohomology, this is, by definition, equivalent to the fact that $$H^1(\mathfrak{g},V) = 0$$ for every such representation. The proof uses a Casimir element (see the proof below).
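The easy direction of this equivalence can be checked directly: every map of the form $$f(x) = xv$$ (a coboundary) satisfies the displayed condition (a cocycle). A minimal numerical sketch, using the hypothetical example of $$\mathfrak{sl}_2(\mathbb{R})$$ acting on $$V = \mathbb{R}^2$$ by its standard representation:

```python
import numpy as np

# Standard basis of sl_2(R), acting on V = R^2 by matrix multiplication.
e = np.array([[0., 1.], [0., 0.]])
f_ = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f_, h]

def bracket(x, y):
    return x @ y - y @ x

v = np.array([1.0, 2.0])     # an arbitrary vector in V
cob = lambda x: x @ v        # the coboundary f(x) = x v determined by v

# Verify the cocycle condition f([x, y]) = x f(y) - y f(x) on all basis pairs.
for x in basis:
    for y in basis:
        assert np.allclose(cob(bracket(x, y)), x @ cob(y) - y @ cob(x))
```

Whitehead's first lemma is the converse: every cocycle arises this way from some vector $$v$$.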

Similarly, Whitehead's second lemma states that under the conditions of the first lemma, also $$H^2(\mathfrak{g},V) = 0$$.

Another related statement, also attributed to Whitehead, describes Lie algebra cohomology in arbitrary degree: under the same hypotheses as in the previous two statements, assume further that $$V$$ is irreducible under the $$\mathfrak{g}$$-action and that $$\mathfrak{g}$$ acts nontrivially, so $$\mathfrak{g} \cdot V \neq 0$$. Then $$H^q(\mathfrak{g},V) = 0$$ for all $$q \geq 0$$.

Proof
As above, let $$\mathfrak{g}$$ be a finite-dimensional semisimple Lie algebra over a field of characteristic zero and $$\pi: \mathfrak{g} \to \mathfrak{gl}(V)$$ a finite-dimensional representation (which is semisimple but the proof does not use that fact).

Let $$\mathfrak{g} = \operatorname{ker}(\pi) \oplus \mathfrak{g}_1$$ where $$\mathfrak{g}_1$$ is an ideal of $$\mathfrak{g}$$. Then, since $$\mathfrak{g}_1$$ is semisimple, the trace form $$(x, y) \mapsto \operatorname{tr}(\pi(x)\pi(y))$$, relative to $$\pi$$, is nondegenerate on $$\mathfrak{g}_1$$. Let $$e_i$$ be a basis of $$\mathfrak{g}_1$$ and $$e^i$$ the dual basis with respect to this trace form. Then define the Casimir element $$c$$ by
 * $$c = \sum_i e_i e^i,$$

which is an element of the universal enveloping algebra of $$\mathfrak g_1$$. Via $$\pi$$, it acts on V as a linear endomorphism (namely, $$\pi(c) = \sum_i \pi(e_i) \circ \pi(e^i) : V \to V$$.) The key property is that it commutes with $$\pi(\mathfrak{g})$$ in the sense $$\pi(x)\pi(c) = \pi(c)\pi(x)$$ for each element $$x \in \mathfrak{g}$$. Also, $$\operatorname{tr}(\pi(c)) = \sum \operatorname{tr}(\pi(e_i)\pi(e^i)) = \dim \mathfrak{g}_1.$$
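Both key properties can be illustrated numerically. A minimal sketch, again using the hypothetical example $$\mathfrak{g} = \mathfrak{g}_1 = \mathfrak{sl}_2(\mathbb{R})$$ in its standard representation on $$\mathbb{R}^2$$: compute the dual basis for the trace form by inverting its Gram matrix, form $$\pi(c)$$, and check that it commutes with $$\pi(\mathfrak{g})$$ and has trace $$\dim \mathfrak{sl}_2 = 3$$.

```python
import numpy as np

# Standard basis of sl_2(R) in the 2-dimensional representation.
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f, h]

# Gram matrix of the trace form (x, y) = tr(pi(x) pi(y)); inverting it gives
# the dual basis e^i = sum_j (G^{-1})_{ij} e_j, so tr(pi(e_i) pi(e^j)) = delta_ij.
G = np.array([[np.trace(x @ y) for y in basis] for x in basis])
Ginv = np.linalg.inv(G)
dual = [sum(Ginv[i, j] * basis[j] for j in range(3)) for i in range(3)]

# pi(c) = sum_i pi(e_i) pi(e^i), acting on V = R^2.
c = sum(x @ y for x, y in zip(basis, dual))

for x in basis:                      # pi(c) commutes with pi(g)
    assert np.allclose(x @ c, c @ x)
assert np.isclose(np.trace(c), 3.0)  # tr(pi(c)) = dim sl_2 = 3
```

Here one finds the dual basis $$e^* = f$$, $$f^* = e$$, $$h^* = h/2$$, and $$\pi(c)$$ acts as the scalar $$3/2$$, consistent with $$\operatorname{tr}(\pi(c)) = 3$$.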

Now, by Fitting's lemma, we have the vector space decomposition $$V = V_0 \oplus V_1$$ such that $$\pi(c) : V_i \to V_i$$ is a (well-defined) nilpotent endomorphism for $$i = 0$$ and is an automorphism for $$i = 1$$. Since $$\pi(c)$$ commutes with $$\pi(\mathfrak{g})$$, each $$V_i$$ is a $$\mathfrak{g}$$-submodule. Hence, it is enough to prove the lemma separately for $$V = V_0$$ and $$V = V_1$$.
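Fitting's lemma for a single endomorphism $$A$$ on a finite-dimensional space says $$V = \ker(A^n) \oplus \operatorname{im}(A^n)$$ with $$A$$ nilpotent on the first summand and invertible on the second. A minimal numerical sketch on a hypothetical example (bases for the two summands extracted from the SVD of $$A^n$$):

```python
import numpy as np

# An endomorphism with a nilpotent 2x2 block and an invertible 1x1 block.
A = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 2.]])
n = A.shape[0]
An = np.linalg.matrix_power(A, n)

# From the SVD of A^n: left singular vectors with nonzero singular value span
# im(A^n) = V_1; right singular vectors with zero singular value span ker(A^n) = V_0.
U, s, Vt = np.linalg.svd(An)
rank = int(np.sum(s > 1e-10))
V1 = U[:, :rank]     # basis of V_1 = im(A^n)
V0 = Vt[rank:].T     # basis of V_0 = ker(A^n)

# The two subspaces together span V (the sum is direct).
assert abs(np.linalg.det(np.hstack([V1, V0]))) > 1e-10
# A is nilpotent on V_0 (A^n kills it) and invertible on V_1.
assert np.allclose(An @ V0, 0)
assert np.linalg.matrix_rank(A @ V1) == V1.shape[1]
```

In the proof, this decomposition is applied to $$A = \pi(c)$$; since $$\pi(c)$$ commutes with the $$\mathfrak{g}$$-action, both summands are submodules.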

First, suppose $$\pi(c)$$ is a nilpotent endomorphism; then it has trace zero, so by the earlier observation, $$\dim(\mathfrak{g}/\operatorname{ker}(\pi)) = \operatorname{tr}(\pi(c)) = 0$$; that is, $$\pi$$ is a trivial representation. Since $$\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$$, the condition on $$f$$ then implies that $$f(x) = 0$$ for each $$x \in \mathfrak{g}$$; i.e., the zero vector $$v = 0$$ satisfies the requirement.

Second, suppose $$\pi(c)$$ is an automorphism. For notational simplicity, we will drop $$\pi$$ and write $$x v = \pi(x)v$$. Also let $$(\cdot, \cdot)$$ denote the trace form used earlier. Let $$w = \sum e_i f(e^i)$$, which is a vector in $$V$$. Then
 * $$x w = \sum_i e_i x f(e^i) + \sum_i [x, e_i] f(e^i).$$

Now,
 * $$[x, e_i] = \sum_j ([x, e_i], e^j) e_j = -\sum_j ([x, e^j], e_i) e_j$$

and, since $$[x, e^j] = \sum_i ([x, e^j], e_i) e^i$$, the second term of the expansion of $$xw$$ is
 * $$-\sum_j e_j f([x, e^j]) = -\sum_i e_i (x f(e^i) - e^i f(x)).$$

Thus,
 * $$x w = \sum_i e_i e^i f(x) = c f(x).$$

Since $$c$$ is invertible and $$c^{-1}$$ commutes with $$x$$, the vector $$v = c^{-1}w$$ has the required property. $$\square$$
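The construction in this second case can be traced numerically. A sketch on the hypothetical $$\mathfrak{sl}_2(\mathbb{R})$$ example (standard representation on $$\mathbb{R}^2$$, where $$\pi(c)$$ is invertible): given a cocycle $$f$$, form $$w = \sum_i e_i f(e^i)$$ and recover $$v = \pi(c)^{-1} w$$; since the lemma guarantees every cocycle is a coboundary, the example builds $$f$$ from a known vector to stay self-contained.

```python
import numpy as np

# Standard basis of sl_2(R) and the dual basis for the trace form.
e = np.array([[0., 1.], [0., 0.]])
f_ = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f_, h]
dual = [f_, e, h / 2]              # e^* = f, f^* = e, h^* = h/2

v0 = np.array([1.0, -2.0])
coc = lambda x: x @ v0             # a cocycle (a coboundary, as the lemma predicts)

c = sum(x @ y for x, y in zip(basis, dual))       # pi(c), invertible here
w = sum(x @ coc(y) for x, y in zip(basis, dual))  # w = sum_i e_i f(e^i)
v = np.linalg.solve(c, w)                          # v = pi(c)^{-1} w

for x in basis:
    assert np.allclose(coc(x), x @ v)              # f(x) = x v for all x
```

The solved vector $$v$$ recovers $$v_0$$, confirming that $$c^{-1}w$$ trivializes the cocycle as claimed.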