Schur product theorem

In mathematics, particularly in linear algebra, the Schur product theorem states that the Hadamard product of two positive definite matrices is also a positive definite matrix. The result is named after Issai Schur (Schur 1911, p. 14, Theorem VII), who signed as J. Schur in Journal für die reine und angewandte Mathematik.

We remark that the converse of the theorem holds in the following sense. If $$M$$ is a symmetric matrix and the Hadamard product $$M \circ N$$ is positive definite for all positive definite matrices $$N$$, then $$M$$ itself is positive definite.
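The theorem itself is easy to check numerically. The following sketch (the random construction and seed are illustrative assumptions, not part of the theorem) builds two positive definite matrices, forms their Hadamard product, and confirms that all of its eigenvalues are strictly positive:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A @ A.T + I is symmetric positive definite for any real A.
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
M = A @ A.T + np.eye(n)
N = B @ B.T + np.eye(n)

# Hadamard (entrywise) product.
H = M * N

# The Schur product theorem predicts strictly positive eigenvalues.
print(np.linalg.eigvalsh(H).min() > 0)
```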

Proof using the trace formula
For any matrices $$M$$ and $$N$$, the Hadamard product $$M \circ N$$ considered as a bilinear form acts on vectors $$a, b$$ as
 * $$a^* (M \circ N) b = \operatorname{tr}\left(M^\textsf{T} \operatorname{diag}\left(a^*\right) N \operatorname{diag}(b)\right)$$

where $$\operatorname{tr}$$ is the matrix trace and $$\operatorname{diag}(a)$$ is the diagonal matrix having as diagonal entries the elements of $$a$$.
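Since the trace formula holds for arbitrary (even complex, non-Hermitian) matrices, it can be spot-checked directly; this sketch compares both sides on random complex data (the dimensions and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Left side: the Hadamard product as a bilinear form, a^* (M o N) b.
lhs = a.conj() @ (M * N) @ b
# Right side: tr(M^T diag(a^*) N diag(b)).
rhs = np.trace(M.T @ np.diag(a.conj()) @ N @ np.diag(b))

print(np.isclose(lhs, rhs))
```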

Suppose $$M$$ and $$N$$ are positive definite, and hence Hermitian. We can consider their square roots $$M^\frac{1}{2}$$ and $$N^\frac{1}{2}$$, which are also Hermitian, and write, using $$M^\textsf{T} = \overline{M}$$ and the cyclic property of the trace,

 * $$\operatorname{tr}\left(M^\textsf{T} \operatorname{diag}\left(a^*\right) N \operatorname{diag}(b)\right) = \operatorname{tr}\left(\overline{M}^\frac{1}{2} \overline{M}^\frac{1}{2} \operatorname{diag}\left(a^*\right) N^\frac{1}{2} N^\frac{1}{2} \operatorname{diag}(b)\right) = \operatorname{tr}\left(\overline{M}^\frac{1}{2} \operatorname{diag}\left(a^*\right) N^\frac{1}{2} N^\frac{1}{2} \operatorname{diag}(b) \overline{M}^\frac{1}{2}\right)$$

Then, for $$a = b$$, this is written as $$\operatorname{tr}\left(A^* A\right)$$ for $$A = N^\frac{1}{2} \operatorname{diag}(a) \overline{M}^\frac{1}{2}$$ and thus is strictly positive for $$A \neq 0$$, which occurs if and only if $$a \neq 0$$. This shows that $$(M \circ N)$$ is a positive definite matrix.
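The square-root step can be reproduced numerically. The sketch below (the random Hermitian matrices and the `herm_sqrt` helper are illustrative assumptions) forms $$A = N^\frac{1}{2} \operatorname{diag}(a) \overline{M}^\frac{1}{2}$$ and checks that $$a^* (M \circ N) a = \operatorname{tr}\left(A^* A\right) > 0$$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def random_pd(rng, n):
    # B B^* + I is Hermitian positive definite for any complex B.
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return B @ B.conj().T + np.eye(n)

def herm_sqrt(M):
    # Hermitian square root via the spectral decomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.conj().T

M = random_pd(rng, n)
N = random_pd(rng, n)
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# A = N^{1/2} diag(a) conj(M)^{1/2}; conj of the Hermitian root of M
# is the Hermitian root of conj(M).
A = herm_sqrt(N) @ np.diag(a) @ herm_sqrt(M).conj()

quad = a.conj() @ (M * N) @ a      # a^* (M o N) a
frob = np.trace(A.conj().T @ A)    # tr(A^* A), the squared Frobenius norm

print(np.isclose(quad, frob), frob.real > 0)
```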

Case of M = N
Let $$X$$ be an $$n$$-dimensional centered Gaussian random variable with covariance $$\langle X_i X_j \rangle = M_{ij}$$. Then the covariance matrix of $$X_i^2$$ and $$X_j^2$$ is


 * $$\operatorname{Cov}\left(X_i^2, X_j^2\right) = \left\langle X_i^2 X_j^2 \right\rangle - \left\langle X_i^2 \right\rangle \left\langle X_j^2 \right\rangle$$

Using Wick's theorem to develop $$\left\langle X_i^2 X_j^2 \right\rangle = 2 \left\langle X_i X_j \right\rangle^2 + \left\langle X_i^2 \right\rangle \left\langle X_j^2 \right\rangle$$ we have


 * $$\operatorname{Cov}\left(X_i^2, X_j^2\right) = 2 \left\langle X_i X_j \right\rangle^2 = 2 M_{ij}^2$$

Since a covariance matrix is positive definite, this proves that the matrix with elements $$M_{ij}^2$$ is a positive definite matrix.
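The Wick computation above can be checked by Monte Carlo simulation; the sketch below (the covariance matrix, seed, sample size, and tolerances are illustrative assumptions) compares the empirical covariance of the squared Gaussians with the prediction $$2 M_{ij}^2$$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, samples = 3, 400_000

# A positive definite covariance matrix M.
A = rng.standard_normal((n, n))
M = A @ A.T + np.eye(n)

# Draw centered Gaussians with covariance M and square them.
X = rng.multivariate_normal(np.zeros(n), M, size=samples)
S = X**2

# Empirical covariance of the squares vs. the Wick prediction 2 M_ij^2.
emp = np.cov(S, rowvar=False)
print(np.allclose(emp, 2 * M**2, rtol=0.05, atol=0.5))
```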

General case
Let $$X$$ and $$Y$$ be $$n$$-dimensional centered Gaussian random variables with covariances $$\left\langle X_i X_j \right\rangle = M_{ij}$$, $$\left\langle Y_i Y_j \right\rangle = N_{ij}$$ and independent from each other so that we have
 * $$\left\langle X_i Y_j \right\rangle = 0$$ for any $$i, j$$

Then the covariance matrix of $$X_i Y_i$$ and $$X_j Y_j$$ is
 * $$\operatorname{Cov}\left(X_i Y_i, X_j Y_j\right) = \left\langle X_i Y_i X_j Y_j \right\rangle - \left\langle X_i Y_i \right\rangle \left\langle X_j Y_j \right\rangle$$

Using Wick's theorem to develop
 * $$\left\langle X_i Y_i X_j Y_j \right\rangle = \left\langle X_i X_j \right\rangle \left\langle Y_i Y_j \right\rangle + \left\langle X_i Y_i \right\rangle \left\langle X_j Y_j \right\rangle + \left\langle X_i Y_j \right\rangle \left\langle X_j Y_i \right\rangle$$

and also using the independence of $$X$$ and $$Y$$, we have
 * $$\operatorname{Cov}\left(X_i Y_i, X_j Y_j\right) = \left\langle X_i X_j \right\rangle \left\langle Y_i Y_j \right\rangle = M_{ij} N_{ij}$$

Since a covariance matrix is positive definite, this proves that the matrix with elements $$M_{ij} N_{ij}$$ is a positive definite matrix.
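As in the M = N case, this identity lends itself to a Monte Carlo check; the sketch below (covariance matrices, seed, sample size, and tolerances are illustrative assumptions) compares the empirical covariance of the products $$X_i Y_i$$ with $$M_{ij} N_{ij}$$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, samples = 3, 400_000

def random_pd(rng, n):
    # A A^T + I is symmetric positive definite for any real A.
    A = rng.standard_normal((n, n))
    return A @ A.T + np.eye(n)

M = random_pd(rng, n)
N = random_pd(rng, n)

# Independent centered Gaussians with covariances M and N.
X = rng.multivariate_normal(np.zeros(n), M, size=samples)
Y = rng.multivariate_normal(np.zeros(n), N, size=samples)

# Covariance of the entrywise products X_i Y_i recovers M_ij N_ij.
emp = np.cov(X * Y, rowvar=False)
print(np.allclose(emp, M * N, rtol=0.05, atol=0.5))
```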

Proof of positive semidefiniteness
Let $$M = \sum \mu_i m_i m_i^\textsf{T}$$ and $$N = \sum \nu_i n_i n_i^\textsf{T}$$ be the spectral decompositions of $$M$$ and $$N$$, with eigenvalues $$\mu_i, \nu_i$$ and orthonormal eigenvectors $$m_i, n_i$$. Then
 * $$M \circ N = \sum_{ij} \mu_i \nu_j \left(m_i m_i^\textsf{T}\right) \circ \left(n_j n_j^\textsf{T}\right) = \sum_{ij} \mu_i \nu_j \left(m_i \circ n_j\right) \left(m_i \circ n_j\right)^\textsf{T}$$

Each $$\left(m_i \circ n_j\right) \left(m_i \circ n_j\right)^\textsf{T}$$ is positive semidefinite (but, except in the 1-dimensional case, not positive definite, since it is a rank-1 matrix). Also, $$\mu_i \nu_j > 0$$, and thus the sum $$M \circ N$$ is also positive semidefinite.
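The rank-one expansion is exact, so it can be verified directly from the spectral decompositions; in this sketch (the random matrices and seed are illustrative assumptions), rebuilding $$M \circ N$$ term by term reproduces the entrywise product:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4

def random_pd(rng, n):
    # A A^T + I is symmetric positive definite for any real A.
    A = rng.standard_normal((n, n))
    return A @ A.T + np.eye(n)

M = random_pd(rng, n)
N = random_pd(rng, n)

# Spectral decompositions M = sum mu_i m_i m_i^T, N = sum nu_j n_j n_j^T.
mu, mvecs = np.linalg.eigh(M)
nu, nvecs = np.linalg.eigh(N)

# Rebuild M o N as a double sum of rank-one terms (m_i o n_j)(m_i o n_j)^T.
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        v = mvecs[:, i] * nvecs[:, j]   # Hadamard product of eigenvectors
        H += mu[i] * nu[j] * np.outer(v, v)

print(np.allclose(H, M * N))  # True
```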

Proof of definiteness
To show that the result is positive definite requires a further argument. We shall show that for any vector $$a \neq 0$$, we have $$a^\textsf{T} (M \circ N) a > 0$$. Continuing as above, each $$a^\textsf{T} \left(m_i \circ n_j\right) \left(m_i \circ n_j\right)^\textsf{T} a \ge 0$$, so it remains to show that there exist $$i$$ and $$j$$ for which the corresponding term above is nonzero. For this we observe that
 * $$a^\textsf{T} (m_i \circ n_j) (m_i \circ n_j)^\textsf{T} a = \left(\sum_k m_{i,k} n_{j,k} a_k\right)^2$$

Since $$N$$ is positive definite, there is a $$j$$ for which $$n_j \circ a \neq 0$$ (otherwise $$n_j^\textsf{T} a = \sum_k (n_j \circ a)_k = 0$$ for all $$j$$, which would give $$a^\textsf{T} N a = 0$$ for $$a \neq 0$$), and likewise, since $$M$$ is positive definite, there exists an $$i$$ for which $$m_i^\textsf{T} (n_j \circ a) = \sum_k m_{i,k} (n_j \circ a)_k \neq 0.$$ However, this last sum is just $$\sum_k m_{i,k} n_{j,k} a_k$$, so its square is positive. This completes the proof.