Hankel matrix

In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant. For example,

$$\qquad\begin{bmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \\ \end{bmatrix}.$$

More generally, a Hankel matrix is any $$n \times n$$ matrix $$A$$ of the form

$$A = \begin{bmatrix} a_0 & a_1 & a_2 & \ldots & a_{n-1} \\ a_1 & a_2 & &  &\vdots \\ a_2 & &  & & a_{2n-4} \\ \vdots & & & a_{2n-4} & a_{2n-3} \\ a_{n-1} & \ldots & a_{2n-4} & a_{2n-3} & a_{2n-2} \end{bmatrix}.$$

In terms of components, if the $$(i, j)$$ element of $$A$$ is denoted by $$A_{ij}$$, then, assuming $$i \le j$$, we have $$A_{i,j} = A_{i+k,j-k}$$ for all $$k = 0, \ldots, j-i$$.
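
For instance, a Hankel matrix can be built directly from its defining sequence with SciPy's `scipy.linalg.hankel`; the following minimal sketch (with illustrative data) also checks the component identity above.

```python
import numpy as np
from scipy.linalg import hankel

# Build a 5x5 Hankel matrix from the 9 independent entries a_0, ..., a_8.
a = np.arange(9)                 # illustrative data a_0 .. a_{2n-2}
A = hankel(a[:5], a[4:])         # first column a_0..a_4, last row a_4..a_8

# Each ascending skew-diagonal is constant: A[i, j] == A[i+k, j-k].
i, j = 1, 3
print(all(A[i + k, j - k] == A[i, j] for k in range(j - i + 1)))  # True
```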

Properties

 * Any Hankel matrix is symmetric.
 * Let $$J_n$$ be the $$n \times n$$ exchange matrix. If $$H$$ is an $$m \times n$$ Hankel matrix, then $$H = T J_n$$ where $$T$$ is an $$m \times n$$ Toeplitz matrix; this and the symmetry property are checked numerically in the sketch following this list.
 * If $$T$$ is real symmetric, then $$H = T J_n$$ will have the same eigenvalues as $$T$$ up to sign.
 * The Hilbert matrix is an example of a Hankel matrix.
 * The determinant of a Hankel matrix is called a catalecticant.
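
As a numerical check of the first two properties above (a sketch with illustrative data), the following builds a square Hankel matrix, verifies its symmetry, and recovers the Toeplitz factor as $$T = H J_n$$, using the fact that the exchange matrix is its own inverse.

```python
import numpy as np
from scipy.linalg import hankel, toeplitz

n = 4
J = np.fliplr(np.eye(n))                  # exchange matrix J_n

a = np.arange(2 * n - 1)                  # entries a_0 .. a_{2n-2}
H = hankel(a[:n], a[n - 1:])              # square Hankel matrix

print(np.array_equal(H, H.T))             # True: Hankel matrices are symmetric

T = H @ J                                 # J_n is its own inverse, so T = H J_n
print(np.array_equal(T, toeplitz(T[:, 0], T[0, :])))  # True: T is Toeplitz
print(np.array_equal(H, T @ J))           # True: H = T J_n
```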

Hankel operator
Given a formal Laurent series

$$ f(z) = \sum_{n=-\infty}^N a_n z^n, $$

the corresponding Hankel operator is defined as

$$ H_f : \mathbf C[z] \to z^{-1} \mathbf C[[z^{-1}]]. $$

This takes a polynomial $$g \in \mathbf C[z]$$ and sends it to the product $$fg$$, but discards all powers of $$z$$ with a non-negative exponent, so as to give an element in $$z^{-1} \mathbf C[[z^{-1}]]$$, the formal power series with strictly negative exponents. The map $$H_f$$ is in a natural way $$\mathbf C[z]$$-linear, and its matrix with respect to the elements $$1, z, z^2, \dots \in \mathbf C[z]$$ and $$z^{-1}, z^{-2}, \dots \in z^{-1}\mathbf C[[z^{-1}]]$$ is the Hankel matrix

$$\begin{bmatrix} a_{-1} & a_{-2} & \ldots \\ a_{-2} & a_{-3} & \ldots \\ a_{-3} & a_{-4} & \ldots \\ \vdots & \vdots & \ddots \end{bmatrix}.$$

Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite precisely when $$f$$ is a rational function, that is, a fraction of two polynomials

$$ f(z) = \frac{p(z)}{q(z)}. $$
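
As a numerical sketch of Kronecker's theorem (the rational function and the truncation size are illustrative choices): for $$f(z) = 1/(z - \tfrac{1}{2})$$ one has $$a_{-n} = (1/2)^{n-1}$$ for $$n \ge 1$$, so every finite truncation of the corresponding Hankel matrix has rank 1.

```python
import numpy as np

# f(z) = 1/(z - 1/2) has Laurent coefficients a_{-n} = (1/2)^(n-1), n >= 1.
# The (i, j) entry of the Hankel matrix is a_{-(i+j+1)} (0-indexed).
m = 8
a = lambda n: 0.5 ** (n - 1)                       # a_{-n} for n >= 1
H = np.array([[a(i + j + 1) for j in range(m)] for i in range(m)])
print(np.linalg.matrix_rank(H))                    # 1, independent of m
```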

Approximations
We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.

Note that the matrix $$A$$ does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation itself be a Hankel matrix; AAK theory (after Adamyan, Arov, and Krein) shows that this constraint costs nothing, in the sense that a best approximation of a given rank in the spectral norm can be chosen to be a Hankel operator.
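
For finite Hankel matrices, a common practical recipe (a Cadzow-style alternating projection, sketched here with illustrative data; it is not the AAK construction) alternates SVD truncation with re-Hankelization by averaging each anti-diagonal:

```python
import numpy as np
from scipy.linalg import hankel

def hankelize(M):
    """Project onto Hankel structure by averaging each anti-diagonal."""
    m, n = M.shape
    s = np.array([M[::-1].diagonal(d).mean() for d in range(-(m - 1), n)])
    return hankel(s[:m], s[m - 1:])

def truncate(M, k):
    """Best rank-k approximation in the spectral norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Illustrative data: a rank-2 Hankel matrix plus noise.
rng = np.random.default_rng(0)
t = np.arange(15)
sig = 0.9 ** t + 0.5 * (-0.7) ** t
H = hankel(sig[:8], sig[7:]) + 0.01 * rng.standard_normal((8, 8))

A = H
for _ in range(20):                       # Cadzow iterations
    A = hankelize(truncate(A, 2))

print(np.linalg.svd(A, compute_uv=False)[:3])  # third value ~0: numerical rank 2
print(np.allclose(A, hankelize(A)))            # True: A is Hankel
```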

Hankel matrix transform
The Hankel matrix transform, or simply Hankel transform, of a sequence $$b_k$$ is the sequence of the determinants of the Hankel matrices formed from $$b_k$$. Given an integer $$n > 0$$, define the corresponding $$(n \times n)$$-dimensional Hankel matrix $$B_n$$ as having the matrix elements $$[B_n]_{i,j} = b_{i+j}$$. Then the sequence given by

$$ h_n = \det B_n $$

is the Hankel transform of the sequence $$b_k$$. The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

$$ c_n = \sum_{k=0}^n {n \choose k} b_k $$

for the binomial transform of the sequence $$b_n$$, and $$C_n$$ denotes the Hankel matrix formed from the $$c_k$$, then one has $$\det B_n = \det C_n$$.
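
For example (a short sketch; the Catalan numbers are a standard test case whose Hankel transform is known to be the all-ones sequence):

```python
import numpy as np
from math import comb

def hankel_transform(b, N):
    """First N terms h_n = det B_n with [B_n]_{i,j} = b[i + j], 0 <= i, j < n."""
    return [round(np.linalg.det(np.array(
        [[b[i + j] for j in range(n)] for i in range(n)], float)))
        for n in range(1, N + 1)]

# Catalan numbers 1, 1, 2, 5, 14, ...
cat = [comb(2 * k, k) // (k + 1) for k in range(8)]
print(hankel_transform(cat, 4))                  # [1, 1, 1, 1]

# Invariance under the binomial transform c_n = sum_k C(n, k) b_k.
c = [sum(comb(n, k) * cat[k] for k in range(n + 1)) for n in range(8)]
print(hankel_transform(c, 4))                    # [1, 1, 1, 1]
```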

Applications of Hankel matrices
Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired. The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices that define the state-space realization. Hankel matrices formed from signals have also proven useful for decomposing non-stationary signals and for time-frequency representation.
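
This is the idea behind Ho-Kalman-style realization and the Eigensystem Realization Algorithm (ERA). A minimal sketch, assuming noise-free impulse-response data generated from an illustrative test system (not the only formulation of the algorithm):

```python
import numpy as np

# Illustrative discrete-time SISO system used to generate the data.
A_true = np.array([[0.8, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, -1.0]])

# Impulse response (Markov parameters) h_{k+1} = C A^k B.
m = 10
h = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
     for k in range(2 * m)]

# Hankel matrix H0 = [h_{i+j+1}] and its shift H1 = [h_{i+j+2}].
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

# Truncated SVD at the numerical rank gives the system order.
U, s, Vt = np.linalg.svd(H0)
n = int(np.sum(s > 1e-8 * s[0]))
Sh = np.diag(np.sqrt(s[:n]))
O, R = U[:, :n] @ Sh, Sh @ Vt[:n, :]    # H0 = O R (observability x controllability)

A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(R)   # H1 = O A R
B = R[:, :1]                                      # first column of R
C = O[:1, :]                                      # first row of O

# The realization reproduces the data (up to a similarity transform of the state).
h_hat = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(2 * m)]
print(np.allclose(h, h_hat))                      # True
```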

Method of moments for polynomial distributions
The method of moments applied to polynomial distributions results in a Hankel matrix that must be inverted to obtain the weight parameters of the polynomial distribution approximation.
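
A minimal sketch, assuming a density on $$[0, 1]$$ fitted by a polynomial $$p(x) = \sum_k a_k x^k$$ (the support, degree, and data are illustrative choices): matching moments gives $$m_j = \int_0^1 x^j p(x)\,dx = \sum_k a_k/(j+k+1)$$, so the Hankel matrix to invert is the Hilbert matrix $$H_{jk} = 1/(j+k+1)$$.

```python
import numpy as np

d = 2                                         # assumed polynomial degree
rng = np.random.default_rng(1)
samples = rng.beta(2.0, 2.0, size=500_000)    # stand-in data; true density 6x(1-x)

# Empirical moments m_0, ..., m_d.
m = np.array([np.mean(samples ** j) for j in range(d + 1)])

# Hankel (Hilbert) matrix of monomial moments; invert it for the weights a_k.
H = np.array([[1.0 / (j + k + 1) for k in range(d + 1)]
              for j in range(d + 1)])
a = np.linalg.solve(H, m)

print(a)                                      # approximately [0, 6, -6]
```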