Hook length formula

In combinatorial mathematics, the hook length formula is a formula for the number of standard Young tableaux whose shape is a given Young diagram. It has applications in diverse areas such as representation theory, probability, and algorithm analysis, for example to the problem of longest increasing subsequences. A related formula gives the number of semi-standard Young tableaux as a specialization of a Schur polynomial.

Definitions and statement
Let $$\lambda=(\lambda_1\geq \cdots\geq \lambda_k)$$ be a partition of $$n=\lambda_1+\cdots+\lambda_k$$. It is customary to interpret $$\lambda$$ graphically as a Young diagram, namely a left-justified array of square cells with $$k$$ rows of lengths $$\lambda_1,\ldots,\lambda_k$$. A (standard) Young tableau of shape $$\lambda$$ is a filling of the $$n$$ cells of the Young diagram with all the integers $$\{1,\ldots,n\}$$, with no repetition, such that each row and each column forms an increasing sequence. For the cell in position $$(i,j)$$, in the $$i$$th row and $$j$$th column, the hook $$H_\lambda(i,j)$$ is the set of cells $$(a,b)$$ such that either $$a=i$$ and $$b \ge j$$, or $$a \ge i$$ and $$b=j$$. The hook length $$h_\lambda(i,j)$$ is the number of cells in $$H_\lambda(i,j)$$.

The hook length formula expresses the number of standard Young tableaux of shape $$\lambda$$, denoted by $$f^{\lambda}$$ or $$d_\lambda$$, as
 * $$ f^\lambda = \frac{n!}{\prod h_\lambda (i,j)}, $$

where the product is over all cells $$(i,j)$$ of the Young diagram.
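The formula is straightforward to evaluate by machine. The following Python sketch (the helper names `hook_lengths` and `num_syt` are ours, not standard) computes the hook lengths of a partition directly from the definition and then applies the formula; it reproduces the worked example in the next section.

```python
from math import factorial, prod

def hook_lengths(shape):
    """Hook length of cell (i, j): cells to its right in row i,
    cells below it in column j, plus the cell itself (0-indexed)."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [[(shape[i] - j) + (conj[j] - i) - 1 for j in range(shape[i])]
            for i in range(len(shape))]

def num_syt(shape):
    """Number of standard Young tableaux via the hook length formula."""
    n = sum(shape)
    return factorial(n) // prod(h for row in hook_lengths(shape) for h in row)

print(num_syt((4, 3, 1, 1)))  # 216
```

The integer division is exact because the formula always yields an integer.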

Examples
The figure on the right shows hook lengths for the cells in the Young diagram $$\lambda=(4,3,1,1)$$, corresponding to the partition 9 = 4 + 3 + 1 + 1. The hook length formula gives the number of standard Young tableaux as:
 * $$f^\lambda = \frac{9!}{7\cdot 5\cdot 4\cdot 3\cdot 2\cdot 2\cdot 1\cdot 1\cdot 1} = 216.$$

A Catalan number $$C_n$$ counts Dyck paths with $$n$$ steps going up (U) interspersed with $$n$$ steps going down (D), such that at each step there are never more preceding D's than U's. These are in bijection with the Young tableaux of shape $$ \lambda=(n,n) $$: a Dyck path corresponds to the tableau whose first row lists the positions of the U-steps, while the second row lists the positions of the D-steps. For example, UUDDUD corresponds to the tableau with rows 125 and 346.

This shows that $$ C_n = f^{(n,n)} $$, so the hook formula specializes to the well-known product formula


 * $$ C_n = \frac{(2n)!}{(n+1)(n)\cdots(3)(2)(n)(n-1)\cdots(2)(1)} = \frac{(2n)!}{(n+1)!\,n!} = \frac{1}{n{+}1}\binom{2n}{n}.$$
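This specialization is easy to confirm numerically; the sketch below (helper name `num_syt` is ours) re-implements the hook length formula and compares it against the closed form for the Catalan numbers.

```python
from math import comb, factorial, prod

def num_syt(shape):
    # hook length formula: n! divided by the product of all hook lengths
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = prod((shape[i] - j) + (conj[j] - i) - 1
                 for i in range(len(shape)) for j in range(shape[i]))
    return factorial(sum(shape)) // hooks

for n in range(1, 9):
    assert num_syt((n, n)) == comb(2 * n, n) // (n + 1)  # C_n
print("f^{(n,n)} matches the Catalan number C_n for n = 1..8")
```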

History
There are other formulas for $$f^\lambda$$, but the hook length formula is particularly simple and elegant. A less convenient formula expressing $$f^\lambda$$ in terms of a determinant was deduced independently by Frobenius and Young in 1900 and 1902 respectively using algebraic methods. MacMahon found an alternate proof for the Young–Frobenius formula in 1916 using difference methods.

The hook length formula itself was discovered in 1953 by Frame, Robinson, and Thrall as an improvement to the Young–Frobenius formula. Sagan describes the discovery as follows. "One Thursday in May of 1953, Robinson was visiting Frame at Michigan State University. Discussing the work of Staal (a student of Robinson), Frame was led to conjecture the hook formula. At first Robinson could not believe that such a simple formula existed, but after trying some examples he became convinced, and together they proved the identity. On Saturday they went to the University of Michigan, where Frame presented their new result after a lecture by Robinson. This surprised Thrall, who was in the audience, because he had just proved the same result on the same day!"

Despite the simplicity of the hook length formula, the Frame–Robinson–Thrall proof is not very insightful and does not provide any intuition for the role of the hooks. The search for a short, intuitive explanation befitting such a simple result gave rise to many alternate proofs. Hillman and Grassl gave the first proof that illuminates the role of hooks in 1976 by proving a special case of the Stanley hook-content formula, which is known to imply the hook length formula. Greene, Nijenhuis, and Wilf found a probabilistic proof using the hook walk in which the hook lengths appear naturally in 1979. Remmel adapted the original Frame–Robinson–Thrall proof into the first bijective proof for the hook length formula in 1982. A direct bijective proof was first discovered by Franzblau and Zeilberger in 1982. Zeilberger also converted the Greene–Nijenhuis–Wilf hook walk proof into a bijective proof in 1984. A simpler direct bijection was announced by Pak and Stoyanovskii in 1992, and its complete proof was presented by the pair and Novelli in 1997.

Meanwhile, the hook length formula has been generalized in several ways. R. M. Thrall found the analogue to the hook length formula for shifted Young tableaux in 1952. Sagan gave a shifted hook walk proof for the hook length formula for shifted Young tableaux in 1980. Sagan and Yeh proved the hook length formula for binary trees using the hook walk in 1989. Proctor gave a poset generalization (see below).

Knuth's heuristic argument
The hook length formula can be understood intuitively using the following heuristic, but incorrect, argument suggested by D. E. Knuth. In a standard Young tableau, each entry is the smallest element of its hook; if the shape is instead filled at random, the probability that cell $$(i,j)$$ contains the minimum element of its hook is the reciprocal of the hook length. Multiplying these probabilities over all $$i$$ and $$j$$ gives the formula. This argument is fallacious since the events are not independent.

Knuth's argument is however correct for the enumeration of labellings on trees satisfying monotonicity properties analogous to those of a Young tableau. In this case, the 'hook' events in question are in fact independent events.
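The failure of independence can be seen concretely in a small case. The sketch below (shape $$(2,2)$$, with our own helper names) enumerates all $$4!$$ fillings exhaustively: two hook-minimum events turn out to be dependent, yet the joint probability of all four events still equals the full product of reciprocals, as the hook length formula predicts.

```python
from fractions import Fraction
from itertools import permutations

# cells of the shape (2,2) and the hook of each cell
cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
hooks = {(0, 0): [(0, 0), (0, 1), (1, 0)],
         (0, 1): [(0, 1), (1, 1)],
         (1, 0): [(1, 0), (1, 1)],
         (1, 1): [(1, 1)]}

def is_hook_min(filling, cell):
    # is the entry at `cell` the minimum over its hook?
    return filling[cell] == min(filling[c] for c in hooks[cell])

fills = [dict(zip(cells, p)) for p in permutations(range(1, 5))]

def prob(event):
    return Fraction(sum(1 for f in fills if event(f)), len(fills))

pA = prob(lambda f: is_hook_min(f, (0, 0)))   # 1/3 = 1/h(0,0)
pB = prob(lambda f: is_hook_min(f, (1, 0)))   # 1/2 = 1/h(1,0)
pAB = prob(lambda f: is_hook_min(f, (0, 0)) and is_hook_min(f, (1, 0)))
p_all = prob(lambda f: all(is_hook_min(f, c) for c in cells))

print(pA * pB, pAB)   # 1/6 vs 1/8: the two events are not independent
print(p_all)          # 1/12 = (1/3)(1/2)(1/2)(1/1), the full product
```

A filling in which every cell is the minimum of its hook is exactly a standard Young tableau, which is why the joint probability equals $$f^\lambda/n!$$.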

Probabilistic proof using the hook walk
This is a probabilistic proof found by C. Greene, A. Nijenhuis, and H. S. Wilf in 1979. Define


 * $$e_\lambda = \frac{n!}{\prod_{(i,j)\in Y(\lambda)}h_\lambda(i,j)}.$$

We wish to show that $$f^\lambda = e_\lambda$$. First,


 * $$f^\lambda = \sum_{\mu\uparrow\lambda}f^\mu, $$

where the sum runs over all Young diagrams $$\mu$$ obtained from $$ \lambda $$ by deleting one corner cell. (The maximal entry of the Young tableau of shape $$\lambda$$ occurs at one of its corner cells, so deleting it gives a Young tableau of shape $$\mu$$.)

We define $$f^\emptyset=1$$ and $$e_{\emptyset}=1$$, so it is enough to show the same recurrence


 * $$e_\lambda = \sum_{\mu\uparrow\lambda}e_\mu,$$

which would imply $$f^\lambda=e_\lambda$$ by induction. The above sum can be viewed as a sum of probabilities by writing it as


 * $$ \sum_{\mu\uparrow\lambda}\frac{e_\mu}{e_\lambda}=1.$$

We therefore need to show that the numbers $$\frac{e_\mu}{e_\lambda}$$ define a probability measure on the set of Young diagrams $$\mu$$ with $$\mu\uparrow\lambda$$. This is done in a constructive way by defining a random walk, called the hook walk, on the cells of the Young diagram $$\lambda$$, which eventually selects one of the corner cells of $$\lambda$$ (which are in bijection with diagrams $$\mu$$ for which $$\mu\uparrow\lambda$$). The hook walk is defined by the following rules.


 * 1) Pick a cell uniformly at random from the $$|\lambda|$$ cells of $$\lambda$$. Start the random walk from there.
 * 2) The successor of the current cell $$(i,j)$$ is chosen uniformly at random from the hook $$H_{\lambda}(i,j)\setminus \{(i,j)\}$$.
 * 3) Continue until reaching a corner cell $$\textbf{c}$$.

Proposition: For a given corner cell $$(a,b)$$ of $$\lambda$$, we have


 * $$\mathbb{P}\left(\textbf{c}=(a,b)\right)=\frac{e_\mu}{e_\lambda},$$

where $$\mu=\lambda\setminus\{(a,b)\}$$.

Given this, summing over all corner cells $$(a,b)$$ gives $$ \sum_{\mu\uparrow\lambda}\frac{e_{\mu}}{e_{\lambda}}=1$$ as claimed.
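The proposition can be illustrated empirically. The sketch below (all helper names ours) runs the hook walk repeatedly on $$\lambda=(4,3,1,1)$$ and compares the observed corner frequencies with the exact ratios $$e_\mu/e_\lambda$$.

```python
import random
from math import factorial, prod

def e(shape):
    """e_λ = n! / ∏ h_λ(i,j), with e_∅ = 1."""
    if not shape:
        return 1
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = prod((shape[i] - j) + (conj[j] - i) - 1
                 for i in range(len(shape)) for j in range(shape[i]))
    return factorial(sum(shape)) // hooks

def hook_walk(shape, rng):
    """Start at a uniform random cell; step to a uniform cell of the rest
    of the hook until a corner (empty remaining hook) is reached."""
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    i, j = rng.choice(cells)
    while True:
        rest = ([(i, b) for b in range(j + 1, shape[i])] +
                [(a, j) for a in range(i + 1, len(shape)) if shape[a] > j])
        if not rest:               # (i, j) is a corner cell
            return (i, j)
        i, j = rng.choice(rest)

shape, trials = (4, 3, 1, 1), 100_000
rng = random.Random(0)
counts = {}
for _ in range(trials):
    c = hook_walk(shape, rng)
    counts[c] = counts.get(c, 0) + 1

total = 0.0
for (a, b), cnt in sorted(counts.items()):
    mu = tuple(r for r in shape[:a] + (shape[a] - 1,) + shape[a + 1:] if r > 0)
    exact = e(mu) / e(shape)
    total += exact
    print((a, b), round(cnt / trials, 3), round(exact, 3))
assert abs(total - 1) < 1e-9   # the ratios e_μ/e_λ sum to 1
```

For this shape the exact ratios are $$56/216$$, $$90/216$$ and $$70/216$$ for the three corners, and the empirical frequencies settle close to them.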

Connection to representations of the symmetric group
The hook length formula is of great importance in the representation theory of the symmetric group $$S_n$$, where the number $$f^\lambda$$ is known to be equal to the dimension of the complex irreducible representation $$V_\lambda$$ associated to $$\lambda$$.

Detailed discussion
The complex irreducible representations $$ V_{\lambda} $$ of the symmetric group are indexed by partitions $$ \lambda $$ of $$ n $$ (see Specht module). Their characters are related to the theory of symmetric functions via the Hall inner product:


 * $$ \chi^\lambda(w)=\langle s_\lambda, p_{\tau(w)}\rangle, $$

where $$ s_{\lambda} $$ is the Schur function associated to $$ \lambda $$ and $$ p_{\tau(w)} $$ is the power-sum symmetric function of the partition $$ \tau(w) $$ associated to the cycle decomposition of $$ w $$. For example, if $$ w=(154)(238)(6)(79) $$ then $$ \tau(w) = (3,3,2,1) $$.

Since the identity permutation $$e$$ has the form $$ e=(1)(2)\cdots(n) $$ in cycle notation, we have $$ \tau(e) = (1,\ldots,1)=1^{(n)} $$, and the formula says


 * $$ f^\lambda \,=\, \dim V_\lambda \,=\,\chi^\lambda(e) \,=\, \langle s_\lambda, p_{1^{(n)}}\rangle $$

The expansion of Schur functions in terms of monomial symmetric functions uses the Kostka numbers:


 * $$ s_\lambda= \sum_\mu K_{\lambda\mu}m_\mu,$$

Then the inner product with $$ p_{1^{(n)}} = h_{1^{(n)}} $$ is $$ K_{\lambda 1^{(n)}} $$, because $$  \langle m_{\mu},h_{\nu} \rangle = \delta_{\mu\nu} $$. Note that $$ K_{\lambda 1^{(n)}} $$ is equal to $$ f^{\lambda}=\dim V_\lambda $$, so that $$ \textstyle \sum_{\lambda\vdash n} \left(f^{\lambda}\right)^2 = n! $$ from considering the regular representation of $$S_n$$, or combinatorially from the  Robinson–Schensted–Knuth correspondence.
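The identity $$\sum_{\lambda\vdash n} (f^\lambda)^2 = n!$$ can be checked directly for small $$n$$; the sketch below (helper names ours) enumerates the partitions of $$n$$ and applies the hook length formula to each.

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def num_syt(shape):
    # hook length formula
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = prod((shape[i] - j) + (conj[j] - i) - 1
                 for i in range(len(shape)) for j in range(shape[i]))
    return factorial(sum(shape)) // hooks

for n in range(1, 9):
    assert sum(num_syt(lam) ** 2 for lam in partitions(n)) == factorial(n)
print("sum over λ⊢n of (f^λ)² equals n! for n = 1..8")
```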

The computation also shows that:


 * $$ (x_1+x_2+\cdots+x_k)^n = \sum_{\lambda\vdash n} s_\lambda f^\lambda. $$

This is the expansion of $$ p_{1^{(n)}} $$ in terms of Schur functions using the coefficients given by the inner product, since $$ \langle s_{\mu},s_{\nu}\rangle = \delta_{\mu\nu}  $$. The above equality can also be proven by checking the coefficients of each monomial on both sides and using the Robinson–Schensted–Knuth correspondence or, more conceptually, by looking at the decomposition of $$V^{\otimes n} $$ into irreducible $$ GL(V) $$ modules and taking characters. See Schur–Weyl duality.
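Setting $$x_1=\cdots=x_k=1$$ turns this identity into $$k^n=\sum_{\lambda\vdash n} s_\lambda(1,\ldots,1)\,f^\lambda$$, which can be verified by brute force for small parameters. In the sketch below (helper names ours), $$s_\lambda(1,\ldots,1)$$ is computed by enumerating semistandard tableaux directly rather than via any closed formula.

```python
from itertools import product
from math import factorial, prod

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def num_syt(shape):
    # hook length formula
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = prod((shape[i] - j) + (conj[j] - i) - 1
                 for i in range(len(shape)) for j in range(shape[i]))
    return factorial(sum(shape)) // hooks

def ssyt_count(shape, k):
    # brute force: rows weakly increasing, columns strictly increasing
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    count = 0
    for vals in product(range(1, k + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        if all(((i, j + 1) not in T or T[(i, j)] <= T[(i, j + 1)]) and
               ((i + 1, j) not in T or T[(i, j)] < T[(i + 1, j)])
               for (i, j) in cells):
            count += 1
    return count

for n in range(1, 6):
    for k in range(1, 4):
        assert k ** n == sum(num_syt(lam) * ssyt_count(lam, k)
                             for lam in partitions(n))
```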

Proof of hook formula using Frobenius formula

By the above considerations
 * $$ p_{1^{(n)}} = \sum_{\lambda\vdash n} s_{\lambda}f^{\lambda} $$

so that
 * $$ \Delta(x)p_{1^{(n)}} = \sum_{\lambda\vdash n} \Delta(x)s_{\lambda}f^{\lambda} $$

where $$ \Delta(x) = \prod_{i<j} (x_i-x_j)$$ is the Vandermonde determinant.

For the partition $$ \lambda = (\lambda_1\geq\cdots\geq\lambda_k) $$, define $$ l_i = \lambda_i+k-i $$ for $$ i=1,\ldots,k $$. For the following we need at least as many variables as rows in the partition, so from now on we work with $$n $$ variables $$x_1,\cdots,x_n$$.

Each term $$ \Delta(x)s_{\lambda} $$ is equal to
 * $$ a_{(\lambda_1+k-1, \lambda_2+k-2, \dots, \lambda_k)} (x_1, x_2, \dots , x_k) \ =\ \det\! \left[ \begin{matrix} x_1^{l_1} & x_2^{l_1} & \dots & x_k^{l_1} \\ x_1^{l_2} & x_2^{l_2} & \dots & x_k^{l_2} \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{l_k} & x_2^{l_k} & \dots & x_k^{l_k} \end{matrix} \right] $$

(See Schur function.) Since the vector $$ (l_1,\ldots, l_k) $$ is different for each partition, this means that the coefficient of $$ x_1^{l_1}\cdots x_k^{l_k} $$ in $$\Delta(x)p_{1^{(n)}}$$, denoted $$\left[\Delta(x)p_{1^{(n)}}\right]_{l_1,\cdots,l_k}$$, is equal to $$ f^{\lambda} $$. This is known as the Frobenius Character Formula, which gives one of the earliest proofs.

It remains only to simplify this coefficient. Multiplying


 * $$ \Delta(x) \ =\ \sum_{w\in S_n} \sgn(w) x_1^{w(1)-1}x_2^{w(2)-1}\cdots x_k^{w(k)-1} $$

and


 * $$ p_{1^{(n)}} \ =\ (x_1+x_2+\cdots+x_k)^n \ =\ \sum_{d_1+\cdots+d_k=n} \frac{n!}{d_1!d_2!\cdots d_k!} x_1^{d_1}x_2^{d_2}\cdots x_k^{d_k} $$

we conclude that our coefficient is


 * $$ \sum_{w\in S_n} \sgn(w)\frac{n!}{(l_1-w(1)+1)!\cdots (l_k-w(k)+1)!} $$

which can be written as


 * $$ \frac{n!}{l_1!l_2!\cdots l_k!} \sum_{w\in S_n} \sgn(w)\left[(l_1)(l_1-1)\cdots(l_1-w(1)+2)\right] \left[(l_2)(l_2-1)\cdots(l_2-w(2)+2)\right]\cdots\left[(l_k)(l_k-1)\cdots(l_k-w(k)+2)\right] $$

The latter sum is equal to the following determinant


 * $$ \det \left[ \begin{matrix} 1 & l_1 & l_1(l_1-1) & \dots & \prod_{i=0}^{k-2} (l_1-i) \\ 1 & l_2 & l_2(l_2-1) & \dots & \prod_{i=0}^{k-2} (l_2-i)\\ \vdots & \vdots & \vdots& \ddots & \vdots \\ 1 & l_k & l_k(l_k-1) & \dots & \prod_{i=0}^{k-2} (l_k-i) \end{matrix} \right] $$

which column-reduces to a Vandermonde determinant, and we obtain the formula


 * $$ f^\lambda = \frac{n!}{l_1!\,l_2!\cdots l_k!} \prod_{i<j} (l_i - l_j). $$

Finally, one checks for each row that $$ \frac{l_i!}{\prod_{j>i}(l_i-l_j)} = \prod_{j\leq\lambda_i} h_\lambda(i,j) $$, where the latter product runs over the $$i$$th row of the Young diagram; substituting this identity row by row yields the hook length formula.

Connection to longest increasing subsequences
The hook length formula also has important applications to the analysis of longest increasing subsequences in random permutations. If $$\sigma_n$$ denotes a uniformly random permutation of order $$n$$, $$L(\sigma_n)$$ denotes the maximal length of an increasing subsequence of $$\sigma_n$$, and $$\ell_n$$ denotes the expected (average) value of $$L(\sigma_n)$$, Anatoly Vershik and Sergei Kerov and independently Benjamin F. Logan and Lawrence A. Shepp showed that when $$n$$ is large, $$\ell_n$$ is approximately equal to $$2 \sqrt{n}$$. This answers a question originally posed by Stanislaw Ulam. The proof is based on translating the question via the Robinson–Schensted correspondence to a problem about the limiting shape of a random Young tableau chosen according to Plancherel measure. Since the definition of Plancherel measure involves the quantity $$f^\lambda$$, the hook length formula can then be used to perform an asymptotic analysis of the limit shape and thereby also answer the original question.

The ideas of Vershik–Kerov and Logan–Shepp were later refined by Jinho Baik, Percy Deift and Kurt Johansson, who were able to achieve a much more precise analysis of the limiting behavior of the maximal increasing subsequence length, proving an important result now known as the Baik–Deift–Johansson theorem. Their analysis again makes crucial use of the fact that $$f^\lambda$$ has a number of good formulas, although instead of the hook length formula it made use of one of the determinantal expressions.

Related formulas
The formula for the number of Young tableaux of shape $$\lambda$$ was originally derived from the Frobenius determinant formula in connection to representation theory:


 * $$ f(\lambda_1,\ldots,\lambda_k) = \frac{n! \,\Delta(\lambda_k,\lambda_{k-1}+1,\ldots,\lambda_1 + k - 1)}{\lambda_k ! \,(\lambda_{k-1} + 1)! \cdots (\lambda_1 + k - 1)!}.$$
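The determinant formula and the hook length formula can be checked against each other for small shapes. In this sketch (helper names ours), the factor $$\Delta$$ is evaluated as the product of pairwise differences of the strictly decreasing numbers $$l_i=\lambda_i+k-i$$.

```python
from math import factorial, prod

def f_hook(shape):
    # hook length formula
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = prod((shape[i] - j) + (conj[j] - i) - 1
                 for i in range(len(shape)) for j in range(shape[i]))
    return factorial(sum(shape)) // hooks

def f_frobenius(shape):
    # n! · Δ / (l_1! ⋯ l_k!) with l_i = λ_i + k − i (here 0-indexed)
    k = len(shape)
    l = [shape[i] + k - 1 - i for i in range(k)]   # strictly decreasing
    vandermonde = prod(l[i] - l[j] for i in range(k) for j in range(i + 1, k))
    return factorial(sum(shape)) * vandermonde // prod(factorial(x) for x in l)

for lam in [(4, 3, 1, 1), (5, 2), (3, 3, 3), (2, 2, 1), (6,)]:
    assert f_frobenius(lam) == f_hook(lam)
print("Frobenius determinant formula agrees with the hook length formula")
```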

Hook lengths can also be used to give a product representation to the generating function for the number of reverse plane partitions of a given shape. If $$\lambda$$ is a partition of some integer $$p$$, a reverse plane partition of $$n$$ with shape $$\lambda$$ is obtained by filling in the boxes in the Young diagram with non-negative integers such that the entries add to $$n$$ and are non-decreasing along each row and down each column. The hook lengths $$ h_1,\dots,h_p $$ can be defined as with Young tableaux. If $$\pi_n$$ denotes the number of reverse plane partitions of $$n$$ with shape $$\lambda$$, then the generating function can be written as
 * $$ \sum_{n=0}^\infty \pi_n x^n = \prod_{k=1}^p (1-x^{h_k})^{-1} $$
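For a small shape this product formula can be compared with a direct count. The sketch below (our own helpers) takes $$\lambda=(2,1)$$, whose hook lengths are $$3,1,1$$, expands $$\prod_k (1-x^{h_k})^{-1}$$ as a power series with a coin-change style recurrence, and counts reverse plane partitions by brute force.

```python
from itertools import product

hooks = [3, 1, 1]          # hook lengths of the shape (2, 1)
N = 10                     # compare coefficients up to x^N

# series coefficients of ∏ 1/(1 - x^h) up to degree N
coeffs = [1] + [0] * N
for h in hooks:
    for d in range(h, N + 1):
        coeffs[d] += coeffs[d - h]

def rpp_count(n):
    """Reverse plane partitions of n with shape (2,1): non-negative
    entries a b / c with a ≤ b and a ≤ c, summing to n."""
    return sum(1 for a, b, c in product(range(n + 1), repeat=3)
               if a + b + c == n and a <= b and a <= c)

assert [rpp_count(n) for n in range(N + 1)] == coeffs
print(coeffs)  # [1, 2, 3, 5, 7, 9, 12, 15, 18, 22, 26]
```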

Stanley discovered another formula for the same generating function. In general, if $$ A $$ is any poset with $$ n $$ elements, the generating function for reverse $$ A $$-partitions is
 * $$ \frac{P(x)}{(1-x)(1-x^2)\cdots(1-x^n)} $$

where $$ P(x) $$ is a polynomial such that $$P(1)$$ is the number of linear extensions of $$ A $$.

In the case of a partition $$ \lambda $$, we consider the poset on its cells given by the relation
 * $$ (i,j) \leq (i',j') \iff i\leq i' \qquad \textrm{and} \qquad j\leq j'$$.

So a linear extension is simply a standard Young tableau, i.e. $$ P(1) = f^{\lambda} $$.

Proof of hook formula using Stanley's formula
Combining the two formulas for the generating functions we have
 * $$ \frac{P(x)}{(1-x)(1-x^2)\cdots(1-x^n)} = \prod_{(i,j)\in \lambda} (1-x^{h_{(i,j)}})^{-1} $$

Both sides converge inside the disk of radius one and the following expression makes sense for $$ |x| < 1 $$


 * $$ P(x) = \frac{\prod_{k=1}^n (1-x^k)}{\prod_{(i,j)\in \lambda} (1-x^{h_{(i,j)}})}. $$

We cannot simply substitute $$x=1$$, since numerator and denominator both vanish there; but the right-hand side is a continuous function inside the unit disk, and a polynomial is continuous everywhere, so at least we can say


 * $$ P(1) = \lim_{x\to 1} \frac{\prod_{k=1}^n (1-x^k)}{\prod_{(i,j)\in \lambda} (1-x^{h_{(i,j)}})}.  $$

Applying L'Hôpital's rule $$ n $$ times yields the hook length formula


 * $$ P(1) = \frac{n!}{\prod_{(i,j)\in \lambda}h_{(i,j)}}. $$
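The limit can also be sidestepped with exact arithmetic: since $$P(x)$$ is a polynomial, the two products can be divided as integer polynomials and the quotient evaluated at $$1$$. A sketch for $$\lambda=(4,3,1,1)$$ (all helper names ours):

```python
from math import factorial, prod

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polydiv(num, den):
    """Exact long division of integer polynomials (coefficients low to
    high); requires den's leading coefficient to be ±1."""
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    for d in range(len(num) - 1, len(den) - 2, -1):
        c = num[d] // den[-1]
        q[d - len(den) + 1] = c
        for j, y in enumerate(den):
            num[d - len(den) + 1 + j] -= c * y
    assert all(v == 0 for v in num), "division is not exact"
    return q

shape = (4, 3, 1, 1)
conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
hooks = [(shape[i] - j) + (conj[j] - i) - 1
         for i in range(len(shape)) for j in range(shape[i])]
n = sum(shape)

numer, denom = [1], [1]
for k in range(1, n + 1):
    numer = polymul(numer, [1] + [0] * (k - 1) + [-1])   # 1 - x^k
for h in hooks:
    denom = polymul(denom, [1] + [0] * (h - 1) + [-1])   # 1 - x^h

P = polydiv(numer, denom)                            # Stanley's P(x)
assert sum(P) == factorial(n) // prod(hooks) == 216  # P(1) = f^λ
```

That the division is exact is precisely the statement that $$P(x)$$ is a polynomial.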

Semi-standard tableaux hook length formula
The Schur polynomial $$s_\lambda(x_1,\ldots,x_k)$$ is the generating function of semistandard Young tableaux with shape $$\lambda$$ and entries in $$\{1,\ldots,k\}$$. Specializing this to $$x_i = 1$$ gives the number of semi-standard tableaux, which can be written in terms of hook lengths:

 * $$ s_\lambda(1,\ldots,1)\ =\ \prod_{(i,j)\in \mathrm{Y}(\lambda)}\frac{k-i+j}{h_\lambda(i,j)}. $$

The Young diagram $$\lambda$$ corresponds to an irreducible representation of the special linear group $$\mathrm{SL}_k(\mathbb C)$$, and the Schur polynomial is also the character of the diagonal matrix $$\mathrm{diag}(x_1,\ldots,x_k)$$ acting on this representation. The above specialization is thus the dimension of the irreducible representation, and the formula is an alternative to the more general Weyl dimension formula.
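This specialization can be checked against a direct enumeration of semistandard tableaux. In the sketch below (helper names ours), `hook_content` implements the displayed product with exact rational arithmetic and `ssyt_count` counts the fillings by brute force.

```python
from fractions import Fraction
from itertools import product
from math import prod

def hook_content(shape, k):
    # ∏ (k − i + j) / h_λ(i, j) over all cells (0-indexed; the value of
    # k − i + j is the same as with the article's 1-indexed conventions)
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return int(prod(Fraction(k - i + j, (shape[i] - j) + (conj[j] - i) - 1)
                    for i in range(len(shape)) for j in range(shape[i])))

def ssyt_count(shape, k):
    # brute force: rows weakly increasing, columns strictly increasing
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    count = 0
    for vals in product(range(1, k + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        if all(((i, j + 1) not in T or T[(i, j)] <= T[(i, j + 1)]) and
               ((i + 1, j) not in T or T[(i, j)] < T[(i + 1, j)])
               for (i, j) in cells):
            count += 1
    return count

for shape in [(2,), (1, 1), (2, 1), (3, 1), (2, 2)]:
    for k in range(1, 5):
        assert hook_content(shape, k) == ssyt_count(shape, k)
print("hook-content product matches brute-force SSYT counts")
```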

We may refine this by taking the principal specialization of the Schur function in the variables $$ 1,t,t^2\!,t^3\!, \ldots $$ :


 * $$ s_{\lambda}(1,t,t^2,\ldots) \ =\ \frac{t^{n\left( \lambda\right)}}{\prod_{(i,j)\in Y(\lambda)}(1-t^{h_{\lambda}(i,j)})}, $$

where $$ n(\lambda) = \sum_i (i{-}1)\lambda_i = \sum_i \tbinom{\lambda_i'}{2} $$ for the conjugate partition $$ \lambda' $$.

Skew shape formula
There is a generalization of this formula for skew shapes,
 * $$ s_{\lambda/\mu}(1,t,t^2,\cdots) = \sum_{S \in E(\lambda/\mu)} \prod_{(i,j) \in \lambda\setminus S} \frac{t^{\lambda_j'-i}}{1-t^{h(i,j)}} $$

where the sum is taken over excited diagrams of shape $$\lambda$$ and boxes distributed according to $$\mu$$.

A variation on the same theme is given by Okounkov and Olshanski of the form
 * $$ \frac{\dim\lambda/\mu}{\dim\lambda}=\frac{s^*_\mu(\lambda)}{|\lambda|!/|\mu|!}, $$

where $$s^*_\mu$$ is the so-called shifted Schur function $$s^*_\mu(x_1,\dots,x_n)=\frac{\det\left[(x_i+n-i)^{\underline{\mu_j+n-j}}\right]}{\det\left[(x_i+n-i)^{\underline{n-j}}\right]}$$, written with the falling factorial $$a^{\underline{k}}=a(a-1)\cdots(a-k+1)$$.

Generalization to d-complete posets
Young diagrams can be considered as finite order ideals in the poset $$\mathbb{N}\times\mathbb{N}$$, and standard Young tableaux are their linear extensions. Robert Proctor has given a generalization of the hook length formula to count linear extensions of a larger class of posets generalizing both trees and skew diagrams.