User:FilemonNeira/sandbox


Schur Functions
There is another basis, called the Schur functions, with strong connections to combinatorics and representation theory. We give several equivalent definitions.

Tableaux
We first give a purely combinatorial definition

$$ s_{\lambda} = \displaystyle \sum_{T} x^{cont(T)} $$

where the sum is over all Semi-Standard Young Tableaux $$ T $$ of shape $$ \lambda $$ and $$ x^{cont(T)} $$ is the monomial recording the content of $$ T $$. An example in 3 variables is

$$ s_{2,1} = x_1^2x_2 + x_1^2x_3 + x_1x_2x_3+x_1x_2x_3 +x_1x_2^2 + x_1x_3^2 + x_2^2x_3 + x_2x_3^2$$

corresponding to the tableaux

$$ [11,2], [11,3], [12,3], [13,2], [12,2], [13,3], [22,3], [23,3] $$

Here $$ [ab,c] $$ is just a one-line notation for the more familiar

$$ \begin{matrix} a & b \\ c \end{matrix} $$
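As a quick sanity check, the tableaux above can be enumerated by brute force. The Python sketch below is purely illustrative; `ssyt` is an ad hoc helper, not a library function.

```python
from itertools import product

def ssyt(shape, maxval):
    """Enumerate semistandard Young tableaux of the given shape with
    entries in 1..maxval: rows weakly increase, columns strictly increase."""
    cells = [(r, c) for r, rowlen in enumerate(shape) for c in range(rowlen)]
    for vals in product(range(1, maxval + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        rows_ok = all((r, c + 1) not in T or T[(r, c)] <= T[(r, c + 1)]
                      for (r, c) in cells)
        cols_ok = all((r + 1, c) not in T or T[(r, c)] < T[(r + 1, c)]
                      for (r, c) in cells)
        if rows_ok and cols_ok:
            yield [[T[(r, c)] for c in range(rowlen)]
                   for r, rowlen in enumerate(shape)]

tabs = list(ssyt((2, 1), 3))
print(len(tabs))   # 8, matching the list above
```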

Note in the example that the Schur function is indeed symmetric, a fact which is not immediate from the definition. In general, the symmetry of Schur functions requires a proof (not a hard one), and once we know it, we can write


 * $$ s_\lambda= \sum_\mu K_{\lambda\mu}m_\mu.\ $$

with the following natural interpretation: $$ K_{\lambda\mu} $$ is the number of Semi-Standard Young Tableaux (SSYT for short) with shape $$ \lambda $$ and content $$\mu$$. A careful analysis reveals that $$ K_{\lambda\mu} $$ is nonzero if and only if $$ \lambda \geq \mu$$ in the dominance order. Even more, $$ K_{\lambda\lambda} = 1 $$, the only tableau being the one with all 1's in the first row, 2's in the second, and so on. So in fact we have


 * $$ s_\lambda= \sum_{\lambda\geq\mu} K_{\lambda\mu}m_\mu.\ $$

This means that the transition matrix is lower unitriangular with respect to any linear order extending the dominance order, which shows that the Schur functions are an integral basis for the ring of symmetric functions.

Remark: this is stronger than triangularity, since being smaller in the linear order doesn't guarantee being smaller in the dominance order. The matrix is triangular, but several other entries are forced to be zero too.
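The unitriangularity claims can be checked by brute force for small $$ n $$. The Python sketch below (all helper names are ad hoc) verifies, for the partitions of 4, that $$ K_{\lambda\lambda}=1 $$ and that $$ K_{\lambda\mu}\neq 0 $$ forces $$ \lambda\geq\mu $$ in dominance order.

```python
from itertools import product

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def kostka(lam, mu):
    """Number of SSYT of shape lam and content mu, by brute force."""
    cells = [(r, c) for r, rowlen in enumerate(lam) for c in range(rowlen)]
    count = 0
    for vals in product(range(1, len(mu) + 1), repeat=len(cells)):
        if any(vals.count(i + 1) != mu[i] for i in range(len(mu))):
            continue  # wrong content
        T = dict(zip(cells, vals))
        if all((r, c + 1) not in T or T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells) \
           and all((r + 1, c) not in T or T[(r, c)] < T[(r + 1, c)] for (r, c) in cells):
            count += 1
    return count

def dominates(lam, mu):
    """lam >= mu in dominance order (partitions of the same number)."""
    a = b = 0
    for i in range(max(len(lam), len(mu))):
        a += lam[i] if i < len(lam) else 0
        b += mu[i] if i < len(mu) else 0
        if a < b:
            return False
    return True

P4 = list(partitions(4))
for lam in P4:
    assert kostka(lam, lam) == 1                   # K_{lambda,lambda} = 1
    for mu in P4:
        if kostka(lam, mu) != 0:
            assert dominates(lam, mu)              # support respects dominance
print("Kostka triangularity verified for n = 4")
```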

One advantage of the combinatorial definition is that it also makes sense for Schur functions indexed by skew shapes.

Ratio of determinants
There is also a purely algebraic definition of $$ s_{\lambda} $$:

For any integer vector $$ (l_1,l_2,\cdots, l_n) $$ define
 * $$ a_{(l_1,l_2,\cdots, l_n) } (x_1, x_2, \dots, x_n) = \det \left[ \begin{matrix} x_1^{l_1} & x_2^{l_1} & \dots & x_n^{l_1} \\ x_1^{l_2} & x_2^{l_2} & \dots & x_n^{l_2} \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{l_n} & x_2^{l_n} & \dots & x_n^{l_n} \end{matrix} \right] $$

Set $$ \delta = (n-1,n-2,\cdots,1,0) $$ and note that $$ a_{\delta} $$ is the Vandermonde determinant $$\Delta(x)$$. We have the following alternative expression for the Schur functions


 * $$ s_{\lambda} = \dfrac{a_{\lambda+\delta}}{a_{\delta}} $$

Remark: This makes sense even if $$ \lambda $$ is not a partition. In fact, it is useful to define $$s_{\upsilon}$$ this way for any integer vector $$ \upsilon $$. If nonzero, $$s_{\upsilon}$$ will be equal (up to sign) to $$s_{\lambda}$$ for a unique partition $$ \lambda $$.

One common application is the following: to find the coefficient of $$ s_{\lambda} $$ in a symmetric function $$f(x)$$ we can just compute the coefficient of $$x^{\lambda+\delta}$$ in $$ \Delta(x)f(x) $$. For instance, the hook length formula can be proven this way (see Wikipedia).
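The ratio of determinants is easy to test numerically in exact arithmetic; the sketch below (ad hoc helper names) evaluates $$ a_{\lambda+\delta}/a_{\delta} $$ for $$ \lambda=(2,1,0) $$ at the sample point $$ (x_1,x_2,x_3)=(2,3,5) $$ and compares with the tableau expansion of $$ s_{2,1} $$ given earlier.

```python
from fractions import Fraction

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def alternant(l, x):
    """a_l(x) = det(x_j^{l_i}) for an integer vector l."""
    return det3([[xj**li for xj in x] for li in l])

x = (2, 3, 5)
delta = (2, 1, 0)                  # (n-1, ..., 1, 0) for n = 3
lam_plus_delta = (4, 2, 0)         # (2,1,0) + delta
ratio = Fraction(alternant(lam_plus_delta, x), alternant(delta, x))

# tableau expansion of s_{2,1} from the combinatorial definition
x1, x2, x3 = x
s21 = (x1**2*x2 + x1**2*x3 + 2*x1*x2*x3
       + x1*x2**2 + x1*x3**2 + x2**2*x3 + x2*x3**2)
print(ratio == s21)   # True (both equal 280)
```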

More determinants
From the first definition we were able to expand Schur functions in terms of the monomial basis. What about the other bases? The answer for the homogeneous (and elementary) basis is given by the Jacobi-Trudi determinant


 * $$ s_{\lambda} = \det(h_{\lambda_i-i+j})_{i,j=1}^n $$

Applying the involution $$ \omega $$ we get


 * $$ s_{\lambda} = \det(e_{\lambda'_i-i+j})_{i,j=1}^n $$
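The Jacobi-Trudi determinant can also be checked at a numeric sample point. A minimal sketch (ad hoc helper `h`, evaluation at $$ (2,3,5) $$ chosen for illustration) for $$ \lambda=(2,1) $$:

```python
from itertools import combinations_with_replacement
from math import prod

x = (2, 3, 5)

def h(k):
    """Complete homogeneous symmetric polynomial h_k evaluated at x
    (h_0 = 1 and h_k = 0 for k < 0, as required by the determinant)."""
    if k < 0:
        return 0
    return sum(prod(m) for m in combinations_with_replacement(x, k))

lam = (2, 1)
M = [[h(lam[i] - (i + 1) + (j + 1)) for j in range(2)] for i in range(2)]
jt = M[0][0]*M[1][1] - M[0][1]*M[1][0]    # det(h_{lambda_i - i + j})
print(jt)   # 280 = s_{2,1}(2, 3, 5)
```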

Relation with the representation theory of the symmetric group
The remarkable connection between Schur functions and the characters of the irreducible representations of the symmetric group is given by the following magical formula

$$ \chi^{\lambda}(w)=\langle s_{\lambda}, p_{\tau(w)}\rangle $$

where $$ \chi^{\lambda}(w) $$ is the value of the character of the irreducible representation $$ V_{\lambda} $$ at the element $$w$$ and $$ p_{\tau(w)} $$ is the power symmetric function of the partition $$ \tau(w) $$ associated to the cycle decomposition of $$ w $$. For example, if $$ w=(154)(238)(6)(79) $$ then $$ \tau(w) = (3,3,2,1) $$.

Since $$ e=(1)(2)\cdots(n) $$ in cycle notation, $$ \tau(e) = (1,1,\cdots,1)=1^{(n)} $$. Then the formula says


 * $$ \dim V_\lambda =\chi^\lambda(e) = \langle s_\lambda, p_{1^{(n)}}\rangle $$

Considering the expansion of Schur functions in terms of monomial symmetric functions using the Kostka numbers


 * $$ s_\lambda= \sum_\mu K_{\lambda\mu}m_\mu.\ $$

the inner product with $$ p_{1^{(n)}} = h_{1^{(n)}} $$ is $$ K_{\lambda 1^{(n)}} $$, because $$  \langle m_{\mu},h_{\nu} \rangle = \delta_{\mu\nu} $$. Note that $$ K_{\lambda 1^{(n)}} $$ is equal to $$ f^{\lambda} $$, the number of Standard Young Tableaux of shape $$ \lambda $$. Hence


 * $$ \dim V_\lambda = f^{\lambda} $$

and


 * $$ \langle s_\lambda, p_{1^{(n)}}\rangle = f^{\lambda} $$

which will be useful later.
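Combined with $$ \dim V_\lambda = f^\lambda $$, the general fact that the squared dimensions of the irreducibles sum to the group order gives $$ \sum_{\lambda\vdash n}(f^\lambda)^2 = n! $$. This can be checked with the hook length formula just mentioned; the Python sketch below (ad hoc helpers) does it for $$ n=5 $$.

```python
from math import factorial, prod

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def num_syt(lam):
    """f^lambda, the number of SYT of shape lam, via the hook length formula."""
    conj = [sum(1 for p in lam if p > c) for c in range(lam[0])]  # conjugate partition
    hooks = prod(lam[r] - c + conj[c] - r - 1
                 for r in range(len(lam)) for c in range(lam[r]))
    return factorial(sum(lam)) // hooks

n = 5
total = sum(num_syt(lam)**2 for lam in partitions(n))
print(total == factorial(n))   # True: sum of (f^lambda)^2 over lambda |- n is n!
```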

The magical formula is equivalent to:


 * $$s_{\lambda} = \dfrac{1}{n!} \sum_{w\in S_n} \chi^{\lambda}(w)p_{\tau(w)}$$

This gives a conceptual proof of the identity $$ \omega(s_{\lambda}) = s_{\lambda'} $$, by comparing the coefficients and taking into account that $$ \chi^{\lambda}(w) = \operatorname{sgn}(w) \chi^{\lambda'}(w) $$, because tensoring with the sign representation gives the irreducible representation for the conjugate partition.

Frobenius Series
The expansion in terms of the power symmetric functions suggests we define the following map: the Frobenius characteristic map $$ F $$ takes class functions on the symmetric group to symmetric functions by sending $$ \chi^{\lambda} \to s_{\lambda} $$ and extending by linearity. An important fact is that $$ F $$ is an isometry with respect to the inner products.

Remark: $$ F $$ does not commute with multiplication.

The formula:


 * $$ s_\lambda= \sum_\mu K_{\lambda\mu}m_\mu.\ $$

is equivalent to


 * $$ h_\mu= \sum_\lambda K_{\lambda\mu}s_\lambda.\ $$

And this also comes from representation theory: there is a module $$ M^{\mu} $$ (the permutation module) whose decomposition into the irreducibles $$ V^{\lambda} $$ is given by the multiplicities $$ K_{\lambda\mu} $$, and the above equation is simply the Frobenius translation of that fact. (See Sagan)

Let $$ A = \bigoplus_r A_r $$ be a graded $$ S_n $$ module, with each $$ A_r $$ finite dimensional, and define the Frobenius series of $$ A $$ as


 * $$ F_A(x;t) = \sum_r t^r F\textrm{char}(A_r) $$

where $$ F\textrm{char}(A_r) $$ is the image of $$ \textrm{char} (A_r) $$ under the Frobenius map.

And similarly, if $$ A = \bigoplus_{r,s} A_{r,s} $$ is a doubly graded $$ S_n $$ module, with each $$ A_{r,s} $$ finite dimensional, define the Frobenius series of $$ A $$ as


 * $$ F_A(x;q,t) = \sum_{r,s} t^rq^s F\textrm{char}(A_{r,s}) $$

It is clear that Frobenius series expand positively in terms of Schur functions, because the coefficients of the Schur functions come from the multiplicities (obviously positive) of the irreducibles in each graded piece. The proof of the positivity conjecture for Macdonald polynomials consists of finding a module whose Frobenius series is the desired symmetric function.

Characters of the General Linear group
Another way of thinking about Schur functions is as the characters of the irreducible representations of $$ GL(n) $$. Let's go through a simple example:

The first nontrivial representation of $$ GL(2) $$ that comes to mind is $$ GL(2) $$ itself, via the natural action on $$ k^2 $$; call this representation $$ V $$. The character is just the trace, which, as a function of the eigenvalues, equals $$ x_1+x_2 $$. What happens if we tensor $$ V\otimes V $$? The character gets squared and we have the identity
 * $$ (x_1+x_2)^2 = x_1^2+x_2^2 + 2x_1x_2 = x_1^2+x_2^2 + x_1x_2 + x_1x_2 = s_{2}(x_1,x_2) + s_{1,1}(x_1,x_2) $$

Since decomposing the character gives the information needed to decompose the representation, the above identity says that $$ V\otimes V $$ decomposes into two irreducibles, one corresponding to the partition $$ (2) $$ and the other to the partition $$ (1,1) $$. These are the symmetric and antisymmetric parts, respectively.

More generally, consider $$ GL(n) $$ and its defining representation $$ V $$ given by the natural action on $$ k^n $$. If we want to decompose $$ V^{\otimes m} $$ into irreducibles we will need to write $$ (x_1+x_2+\cdots+x_n)^m $$ in terms of Schur functions. The remarkable formula, at the crossroads between symmetric functions, representation theory and combinatorics, is


 * $$ (x_1+x_2+\cdots+x_n)^m = \sum_{\lambda\vdash m} f^\lambda s_\lambda. $$

This is the expansion of $$ p_{1^{(m)}} $$ in terms of Schur functions using the coefficients given by the inner product, because $$ \langle s_{\mu},s_{\nu}\rangle = \delta_{\mu\nu} $$ and $$ \langle s_\lambda, p_{1^{(m)}}\rangle = f^{\lambda} $$. The above equality can also be proven by checking the coefficients of each monomial on both sides and using the Robinson–Schensted–Knuth correspondence. For a more detailed analysis of the decomposition of $$ V^{\otimes m} $$ see Schur–Weyl duality.
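This expansion is easy to test numerically. The sketch below (brute-force `schur` helper; the values $$ f^{(3)}=1 $$, $$ f^{(2,1)}=2 $$, $$ f^{(1,1,1)}=1 $$ are hardcoded) checks the case of 3 variables and third tensor power at the sample point $$ (2,3,5) $$.

```python
from itertools import product
from math import prod

x = (2, 3, 5)

def schur(shape):
    """s_shape(x) as a sum of content monomials over SSYT (brute force)."""
    cells = [(r, c) for r, rowlen in enumerate(shape) for c in range(rowlen)]
    total = 0
    for vals in product(range(1, len(x) + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        if all((r, c + 1) not in T or T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells) \
           and all((r + 1, c) not in T or T[(r, c)] < T[(r + 1, c)] for (r, c) in cells):
            total += prod(x[v - 1] for v in vals)
    return total

# f^lambda for lambda |- 3: f^(3) = 1, f^(2,1) = 2, f^(1,1,1) = 1
lhs = sum(x)**3
rhs = 1*schur((3,)) + 2*schur((2, 1)) + 1*schur((1, 1, 1))
print(lhs == rhs)   # True (both equal 1000)
```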

In this context we can express the Schur functions using Weyl's character formula


 * $$ s_{\lambda}(x) = \sum_{w\in S_n} w\left(\dfrac{x^{\lambda}}{\prod_{i<j}(1-x_j/x_i)}\right) $$

which is equivalent to the ratio of determinants.

Raising operators
For a set of variables $$ x = (x_1,x_2,\cdots) $$ define
 * $$ \Omega(x) = \prod_{i}\dfrac{1}{1-x_i} $$

Now define the Bernstein Operators $$ S^0_m $$ on symmetric functions as


 * $$ S^0_m(f) = [u^m]f(x-u^{-1})\Omega(ux) $$

In words: we have some variables $$ x $$, and we add one more variable $$ u $$, so $$ ux = (ux_1,ux_2,\cdots) $$. The expression $$ f(x-u^{-1}) $$ is a subtle (plethystic) business, but it can be thought of as adding the variable $$-u^{-1}$$ to the set. So $$ f(x-u^{-1})\Omega(ux) $$ is an expression in the variables $$ x $$ and $$ u $$, and $$ [u^m] $$ takes the coefficient of $$ u^m $$ in it.

The following theorem is fundamental, not so much by itself, but because a lot of the theory is developed by deforming this operator; and while the proofs differ in the other cases, there is a common pattern. So let's describe this one to get a flavor of what's going on.

Theorem The Bernstein operators add a part to the partition indexing the Schur function; that is, for $$ m \geq \lambda_1 $$ we have $$ S^0_m s_\lambda = s_{(m,\lambda)} $$

Sketch of proof

We need the following ingredients. First by partial fraction expansion


 * $$ \Omega(ux) = \prod_{i}\dfrac{1}{1-ux_i} = \sum_i \dfrac{1}{1-ux_i}\prod_{j\neq i} \dfrac{1}{1-x_j/x_i} $$

The second ingredient is Weyl's character formula


 * $$ s_{\lambda}(x) = \sum_{w\in S_n} w\left(\dfrac{x^{\lambda}}{\prod_{i<j}(1-x_j/x_i)}\right) $$

By expanding, it is easy to check that for a polynomial $$ f $$


 * $$ [u^m]f(u^{-1})\dfrac{1}{1-uz} = z^mf(z) $$

Now we're ready to combine the ingredients. First let's use the first ingredient in the definition, so for any $$ f $$


 * $$ [u^m]f(x-u^{-1})\Omega(ux) = [u^m]f(x-u^{-1})\sum_i \dfrac{1}{1-ux_i}\prod_{j\neq i} \dfrac{1}{1-x_j/x_i} = \sum_i [u^m]f(x-u^{-1}) \dfrac{1}{1-ux_i}\prod_{j\neq i} \dfrac{1}{1-x_j/x_i}$$

because $$ [u^m] $$ is a linear operator, it distributes over the sum. Furthermore, considering $$ f(x-u^{-1}) $$ as a polynomial in $$u^{-1}$$, we can use the third ingredient, with $$ x_i $$ playing the role of $$ z $$, to get


 * $$ \sum_i x_i^mf(x-x_i)\prod_{j\neq i} \dfrac{1}{1-x_j/x_i} $$

By the virtues of plethystic substitution, $$ f(x-x_i) $$ is the same thing as evaluating in the other variables, i.e., taking $$x_i$$ out. Now specialize to the Schur function and consider the second ingredient, Weyl's formula; the result then follows easily by induction, because the factor


 * $$ x_i^m\prod_{j\neq i} \dfrac{s_{\lambda}(x-x_i)}{1-x_j/x_i} $$

is the same as the terms in the formula for $$s_{(m,\lambda)}$$ coming from the permutations that send $$ i $$ to the first position.

This gives our final definition of the Schur functions


 * $$ s_\lambda = S^0_{\lambda_1}S^0_{\lambda_2}\cdots S^0_{\lambda_l}(1) $$

Final Remarks
The Schur functions are a basis for the symmetric functions with the following properties

1. Lower unitriangularity with respect to the monomial basis: $$ s_\lambda= \sum_\mu K_{\lambda\mu}m_\mu.\ $$

2. Orthogonality: $$ \langle s_{\mu},s_{\nu}\rangle = \delta_{\mu\nu}  $$

The Kostka numbers $$ K_{\lambda\mu} $$ have two interpretations, a combinatorial one and an algebraic one. These properties are important to keep in mind while generalizing with one or two parameters.

Hall Littlewood Polynomials
We know the Schur basis, and many more, for the ring of symmetric functions over a field $$ F $$. The next step of generalization is to consider the field $$ F(t) $$ and twist the inner product a little. In contrast with Macdonald polynomials, we can give a closed expression for the Hall-Littlewood polynomials.

Straight definition and first properties
First we need the following $$ t $$-analogues


 * $$ [k]_t := \dfrac{1-t^k}{1-t}= 1+t+t^2+\cdots+t^{k-1} $$


 * $$ [k]_t! := [k]_t[k-1]_t\cdots[1]_t $$
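These $$ t $$-analogues are easy to manipulate as coefficient lists; the small Python sketch below (ad hoc helpers, coefficients listed from degree 0 upward) computes $$ [3]_t! $$ and checks that $$ t=1 $$ recovers the ordinary factorial.

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (degree 0 first)."""
    out = [0]*(len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a*b
    return out

def t_int(k):
    """[k]_t = 1 + t + ... + t^(k-1)."""
    return [1]*k

def t_factorial(k):
    """[k]_t! = [k]_t [k-1]_t ... [1]_t."""
    out = [1]
    for j in range(1, k + 1):
        out = poly_mul(out, t_int(j))
    return out

print(t_factorial(3))        # [1, 2, 2, 1], i.e. 1 + 2t + 2t^2 + t^3
print(sum(t_factorial(4)))   # 24 = 4!, the specialization t = 1
```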

Then the "Hall-Littlewood polynomial" $$ P_{\lambda}(x;t) $$ in n variables is given by the following formula


 * $$ P_{\lambda}(x;t) = \dfrac{1}{\prod_{i\geq 0}[\alpha_i]_t!} \sum_{w\in S_n}w\left(x^{\lambda}\dfrac{\prod_{i<j}(1-tx_j/x_i)}{\prod_{i<j}(1-x_j/x_i)}\right) $$

where $$ \lambda = (1^{\alpha_1},2^{\alpha_2},\cdots) $$ and $$ \alpha_0 $$ is chosen so that $$ \sum_{i\geq 0} \alpha_i = n $$

Note that when $$ t=0 $$ the denominator $$ \prod_{i\geq 0}[\alpha_i]_t! $$ goes away and we get precisely Weyl's character formula for the Schur functions, so


 * $$ P_{\lambda}(x;0) = s_{\lambda}(x) $$

and at $$ t=1 $$ the products inside cancel and we get the usual monomial functions


 * $$ P_{\lambda}(x;1) = m_{\lambda}(x) $$
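Both specializations can be verified in exact arithmetic for $$ \lambda=(2,1,0) $$ in 3 variables; in this case all part multiplicities are 1, so the $$ t $$-factorial prefactor equals 1 and the symmetrized sum alone gives $$ P_{\lambda} $$. A Python sketch (ad hoc names; the evaluation point is an arbitrary choice of distinct rationals):

```python
from fractions import Fraction as F
from itertools import permutations

x = (F(1, 2), F(1, 3), F(1, 5))   # any distinct rationals work

def hl_sum(t):
    """The symmetrized sum defining P_{(2,1,0)} in 3 variables; for this
    lambda all part multiplicities are 1, so the prefactor is 1."""
    total = F(0)
    for y in permutations(x):
        term = y[0]**2 * y[1]                    # y^lambda for lambda = (2,1,0)
        for i in range(3):
            for j in range(i + 1, 3):
                term *= (1 - t*y[j]/y[i]) / (1 - y[j]/y[i])
        total += term
    return total

# s_{2,1} from its tableau expansion, m_{2,1} as a sum of monomials
s21 = sum(a**2*b for a in x for b in x if a != b) + 2*x[0]*x[1]*x[2]
m21 = sum(a**2*b for a in x for b in x if a != b)
print(hl_sum(F(0)) == s21, hl_sum(F(1)) == m21)   # True True
```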

The Hall-Littlewood polynomials form a basis, so we can expand the Schur functions in this new basis. The "Kostka-Foulkes polynomials" $$ K_{\lambda\mu}(t) $$ are defined by


 * $$ s_{\lambda}(x) = \sum_{\mu} K_{\lambda\mu}(t) P_{\mu}(x;t) $$

They don't deserve the name polynomials yet, because so far we only know that they are rational functions in $$ t $$. But we will see why they are actual polynomials.

Definition with raising operators
Define the Jing operators as t-deformations of the Bernstein operators (this is part of a general recipe to t-deform an operator; see Zabrocki) in the following way


 * $$ S^t_m f = [u^m]f[X+(t-1)u^{-1}]\Omega[uX] $$

and their modified version


 * $$ \tilde{S}^t_m f = [u^m]f[X-u^{-1}]\Omega[(1-t)uX] $$

which are related by


 * $$ \tilde{S}^t_m = \Pi_{(1-t)}S^t_m\Pi^{-1}_{(1-t)} $$

where $$ \Pi_{(1-t)} $$ is the operator given by the plethystic substitution $$ f\to f[X(1-t)] $$, and $$ \Pi^{-1}_{(1-t)} $$ is its inverse, namely $$ f\to f[X/(1-t)] $$

Analogously to the Schur functions, we now define the transformed Hall-Littlewood polynomials as


 * $$ H_{\mu}(x;t) = S^t_{\mu_1}S^t_{\mu_2}\cdots S^t_{\mu_l}(1) $$

And if we set $$ Q_{\mu}(x;t) = H_{\mu}[(1-t)X] $$ we get


 * $$ Q_{\mu}(x;t) = \tilde{S}^t_{\mu_1}\tilde{S}^t_{\mu_2}\cdots \tilde{S}^t_{\mu_l}(1) $$

Recall that the Bernstein operators added one part to a partition. These new operators behave in a more complicated way, but in a similar spirit.

Theorem for Jing Operators If $$ m\geq \mu_1 $$ and $$ \lambda\geq \mu $$ then
 * $$ S^t_m s_\lambda \in \Z[t] \{ s_{\gamma} : \gamma \geq (m,\mu) \} $$

Moreover, $$ s_{(m,\lambda)} $$ appears with coefficient 1

The last part says something similar to the previous situation: we still get the Schur function with the additional part $$ m $$, but the theorem says that we also get polynomial combinations of other Schur functions.

By repeated use of the theorem we can conclude that


 * $$ H_{\mu}(x;t) = \sum_{\lambda\geq \mu} C_{\lambda \mu}(t) s_{\lambda}(x) $$

where $$C_{\lambda \mu}(t)$$ are polynomials with $$C_{\mu \mu}(t) = 1$$

That means that we have upper unitriangularity with respect to the schur basis.

We have analogous statements for $$ Q $$ (although with a different proof!)

Theorem for modified Jing Operators If $$ m\geq \lambda_1 $$ then
 * $$ \tilde{S}^t_m s_\lambda \in \Z[t] \{ s_{\gamma} : \gamma \leq (m,\lambda) \} $$

Moreover, $$ s_{(m,\lambda)} $$ appears with coefficient $$1-t^{\alpha}$$ where $$ \alpha $$ is the multiplicity of m as a part of $$ (m,\lambda) $$

Again by repeated use of the theorem we can conclude that


 * $$ Q_{\mu}(x;t) = \sum_{\lambda\leq \mu} B_{\lambda \mu}(t) s_{\lambda}(x) $$

where the $$B_{\lambda \mu}(t)$$ are polynomials with $$B_{\mu \mu}(t) = (1-t)^{l(\mu)}\prod_{i\geq 1}[\alpha_i]_t!$$

That means that we have lower triangularity (but with messier diagonal entries) with respect to the Schur basis.

The operator $$ \Pi_{(1-t)} $$ is self-adjoint for the inner product, i.e. we have


 * $$ \langle f,g[(1-t)X] \rangle = \langle f[(1-t)X],g \rangle $$

By the opposite triangularities of $$ H $$ and $$ Q=H[(1-t)X] $$ we have that if $$ \langle H_\mu, H_\upsilon[(1-t)X] \rangle \neq 0 $$ then $$ \mu \leq \upsilon $$. Passing the $$ (1-t) $$ to the other side, we obtain the opposite conclusion $$ \mu \geq \upsilon $$, and hence $$ \mu = \upsilon $$. This implies the following claim:

The transformed Hall-Littlewood polynomials are orthogonal with respect to the inner product $$ \langle f,g[(1-t)X]\rangle $$ and their self inner products are given by


 * $$ \langle H_{\mu},H_{\mu}[(1-t)X]\rangle = (1-t)^{l(\mu)}\prod_{i\geq 1}[\alpha_i]_t! $$

Now everything fits smoothly
Really. First, from the definition of $$ Q $$ one can get the following formula by induction


 * $$ Q_{\lambda}(x;t)=\dfrac{(1-t)^{l(\lambda)}}{[n-l(\lambda)]_t!} \sum_{w\in S_n} w\left(x^{\lambda} \dfrac{\prod_{i<j}(1-tx_j/x_i)}{\prod_{i<j}(1-x_j/x_i)} \right) $$

The relation with the original Hall-Littlewood polynomials is


 * $$ P_{\lambda}(x;t) = \dfrac{ Q_{\lambda}(x;t)}{(1-t)^{l(\lambda)}\prod_{i\geq 1}[\alpha_i]_t!}  $$

Note that the denominator is precisely the self inner product of $$ H_{\lambda} $$ in the inner product $$ \langle f,g[(1-t)X]\rangle $$. Classically, a slightly different inner product is used:


 * $$ \langle f,g\rangle_t = \langle f,g[X/(1-t)]\rangle $$

In this product, the bases $$ \{P_{\lambda}\} $$ and $$ \{Q_{\lambda}\} $$ are orthogonal and, furthermore, they are dual! So recall that we defined the Kostka-Foulkes polynomials by


 * $$ s_{\lambda}(x) = \sum_{\mu} K_{\lambda\mu}(t) P_{\mu}(x;t) $$

By taking inner products, and using the duality just mentioned we arrive at


 * $$ K_{\lambda\mu}(t) = \langle s_{\lambda},Q_{\mu}\rangle_t =  \langle s_{\lambda},H_{\mu}\rangle $$

But that last coefficient is equal to our previously defined polynomial $$ C_{\lambda \mu}(t) $$, showing that the Kostka-Foulkes polynomials are in fact polynomials.

Positivity of Kostka-Foulkes polynomials
It turns out that they are not just integer polynomials: their coefficients are positive. It may not sound very interesting to show that a quantity is positive, but usually the question is implicitly asking for an interpretation. There are many different approaches here, all far from trivial. Let's review them.

Deep representation theory
The work of Hotta, Lusztig, and Springer showed deep connections with representation theory. I cannot say more than a few words (that I don't even understand): they relate the Kostka-Foulkes polynomials, and a variation of them called the cocharge Kostka-Foulkes polynomials, to some hardcore math where the keywords are unipotent characters, local intersection homology, Springer fibers and perverse sheaves.

For now, the important thing is that they found a ring, the cohomology ring of the Springer fiber, whose Frobenius series is given by the cocharge transformed Hall-Littlewood polynomials, implying that they expand Schur-positively.

Combinatorics of Tableaux
Lascoux and Schützenberger proved the following simple and elegant formula, which gives a concrete meaning to each coefficient


 * $$ K_{\lambda\mu}(t) = \sum_T t^{c(T)} $$

where the sum is over all SSYT of shape $$ \lambda $$ and content $$ \mu $$. The new ingredient is the charge $$ c(T) $$, which is easier to define in terms of the cocharge $$ cc(T) $$, an invariant characterized by

1. Cocharge is invariant under jeu-de-taquin slides

2. Suppose the shape of $$ T $$ is disconnected, say $$ T = X \cup Y $$ with $$ X $$ above and left of $$ Y $$, and no entry of $$ X $$ is equal to 1. Then $$ S = Y \cup X $$, obtained by swapping, has $$ cc(S) = |X| - cc(X) $$

3. If $$ T $$ is a single row, then $$ cc(T) = 0 $$

And then $$ c(T) = n(\mu) - cc(T) $$. The existence of such an invariant requires proof. There is a process to compute the cocharge called catabolism.
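For standard content $$ \mu = (1,1,\dots,1) $$ the charge of a reading word has a simple direct rule (one common convention: index(1) = 0, and each successive letter gets the previous index plus 1 exactly when it sits to the left). The sketch below (ad hoc helper; reading words taken bottom row to top row) recovers $$ K_{(2,1),(1,1,1)}(t) = t+t^2 $$.

```python
def charge_standard(word):
    """Charge of a word using each of 1..n exactly once: index(1) = 0,
    and index(r+1) = index(r) + 1 if r+1 sits to the left of r,
    else index(r+1) = index(r); charge is the sum of the indices."""
    pos = {v: i for i, v in enumerate(word)}
    idx = total = 0
    for r in range(2, len(word) + 1):
        if pos[r] < pos[r - 1]:
            idx += 1
        total += idx
    return total

# reading words (bottom row up) of the two SYT of shape (2,1):
# [[1,2],[3]] reads 312 and [[1,3],[2]] reads 213
charges = sorted(charge_standard(w) for w in [(3, 1, 2), (2, 1, 3)])
print(charges)   # [1, 2], i.e. K_{(2,1),(1,1,1)}(t) = t + t^2
```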

Alternative description using tableaux
Kirillov and Reshetikhin gave the following formula


 * $$ K_{\lambda\mu}(t) = \sum_{\upsilon} t^{n(\mu)-\sum_j \mu'_j\upsilon^1_j+\sum_{k,j} \upsilon^k_j(\upsilon^k_j-\upsilon^{k+1}_j)}\prod_{k,j} \dfrac{[p^k_j+\upsilon^k_j-\upsilon^k_{j+1}]_t!}{[\upsilon^k_j-\upsilon^{k}_{j+1}]_t!\,[p_j^k]_t!} $$

where the sum is over all $$ (\lambda,\mu) $$-admissible configurations $$ \upsilon $$.

While nasty, this expression clearly has positive coefficients. The formula originates from a technique in mathematical physics known as the Bethe ansatz, which is used to produce highest weight vectors for certain tensor products. The theorem relates $$ K_{\lambda\mu}(t) $$ to the enumeration of highest weight vectors in $$ V_{\mu_1}\otimes\cdots\otimes V_{\mu_r} $$ by a quantum number. For more info, stay tuned; probably Anne has something to say about it in class.

Commutative Algebra
This may be the least technical approach. Garsia and Procesi simplified the first proof by giving a down to earth interpretation of the cohomology ring of the Springer fiber $$ R_{\mu} $$. Now the action happens inside the polynomial ring $$ C[x] = C[x_1,x_2,\cdots,x_n] $$, and


 * $$ R_{\mu} = C[x]/I_{\mu} $$

for an ideal $$ I_{\mu} $$ with a relatively explicit description. They manage to give generators, and finally they prove with more elementary methods that the Frobenius series is the cocharge version


 * $$ F_{R_{\mu}}(x;t) = t^{n(\mu)}H_{\mu}(x;t^{-1}) = \sum_{\lambda} \tilde{K}_{\lambda\mu}(t)s_{\lambda} $$

where $$ \tilde{K}_{\lambda\mu}(t) = t^{n(\mu)}K_{\lambda\mu}(t^{-1}) $$ is the cocharge Kostka-Foulkes polynomial.