Grothendieck inequality

In mathematics, the Grothendieck inequality states that there is a universal constant $$K_G$$ with the following property. If $$(M_{ij})$$ is an n × n (real or complex) matrix with
 * $$\Big| \sum_{i,j} M_{ij} s_i t_j \Big| \le 1$$

for all (real or complex) numbers $$s_i, t_j$$ of absolute value at most 1, then
 * $$\Big| \sum_{i,j} M_{ij} \langle S_i, T_j \rangle \Big| \le K_G$$

for all vectors $$S_i, T_j$$ in the unit ball $$B(H)$$ of a (real or complex) Hilbert space $$H$$, the constant $$K_G$$ being independent of n. For a fixed Hilbert space of dimension d, the smallest constant that satisfies this property for all n × n matrices is called a Grothendieck constant and denoted $$K_G(d)$$. In fact, there are two Grothendieck constants, $$K_G^{\mathbb R}(d)$$ and $$K_G^{\mathbb C}(d)$$, depending on whether one works with real or complex numbers, respectively.

The Grothendieck inequality and Grothendieck constants are named after Alexander Grothendieck, who proved the existence of the constants in a paper published in 1953.

Motivation and the operator formulation
Let $$A = (a_{ij})$$ be an $$m \times n$$ matrix. Then $$A$$ defines a linear operator between the normed spaces $$(\mathbb R^m, \| \cdot \|_p)$$ and $$(\mathbb R^n, \| \cdot \|_q)$$ for $$1 \leq p, q \leq \infty$$. The $$(p \to q)$$-norm of $$A$$ is the quantity

$$\| A \|_{p \to q} = \max_{x \in \mathbb R^n : \| x \|_p = 1} \| Ax \|_q.$$

If $$p = q$$, we denote the norm by $$\| A \|_p$$.
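For $$p = 1$$ these norms are easy to compute: a convex function on the $$\ell_1$$ unit ball is maximized at an extreme point $$\pm e_j$$, so $$\| A \|_{1 \to q}$$ is the largest column $$q$$-norm. A minimal numerical sketch (the matrix below is an illustrative toy example, not from the literature):

```python
import numpy as np

# ||A||_{1->q}: a convex norm on the l1 ball is maximized at an extreme
# point +-e_j, so the (1 -> q)-norm equals the largest column q-norm.
A = np.array([[1.0, -2.0], [3.0, 0.5]])   # illustrative matrix
q = 2
col_norms = np.linalg.norm(A, ord=q, axis=0)

# sanity check against random points on the l1 unit sphere
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(2)
    x /= np.abs(x).sum()                  # normalize so ||x||_1 = 1
    assert np.linalg.norm(A @ x, ord=q) <= col_norms.max() + 1e-9
```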

One can consider the following question: For what values of $$p$$ and $$q$$ is $$\| A \|_{p \to q}$$ maximized? Since the $$\ell_p$$ unit balls are nested — $$\| x \|_p \leq 1$$ implies $$\| x \|_\infty \leq 1$$, so the $$\ell_\infty$$ ball contains every other $$\ell_p$$ ball — and since $$\| Ax \|_1 \geq \| Ax \|_q$$ for all $$q \geq 1$$, one sees that $$\| A \|_{\infty \to 1} \geq \| A \|_{p \to q}$$ for all $$1 \leq p, q \leq \infty$$.

One way to compute $$\| A \|_{\infty \to 1}$$ is by solving the following quadratic integer program:

$$\begin{align} \max & \qquad \sum_{i, j} A_{ij} x_i y_j \\ \text{s.t.} & \qquad (x, y) \in \{ -1, 1 \}^{m + n} \end{align}$$

To see this, note that $$\sum_{i, j} A_{ij} x_i y_j = \sum_i (Ay)_i x_i$$, and taking the maximum over $$x \in \{ -1, 1 \}^m$$ gives $$\| Ay \|_1$$. Then taking the maximum over $$y \in \{ -1, 1 \}^n$$ gives $$\| A \|_{\infty \to 1}$$, since the convex function $$y \mapsto \| Ay \|_1$$ attains its maximum over the unit ball $$\{ y \in \mathbb R^n : \| y \|_\infty \leq 1 \}$$ at an extreme point, i.e. at a point of $$\{ -1, 1 \}^n$$. This quadratic integer program can be relaxed to the following semidefinite program:

$$\begin{align} \max & \qquad \sum_{i, j} A_{ij} \langle x^{(i)}, y^{(j)} \rangle \\ \text{s.t.} & \qquad x^{(1)}, \ldots, x^{(m)}, y^{(1)}, \ldots, y^{(n)} \text{ are unit vectors in } (\mathbb R^d, \| \cdot \|_2) \end{align}$$

It is known that exactly computing $$\| A \|_{p \to q}$$ for $$1 \leq q < p \leq \infty$$ is NP-hard, while exactly computing $$\| A \|_p$$ is NP-hard for $$p \not \in \{ 1, 2, \infty \}$$.
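For tiny matrices the quadratic integer program above can still be solved by brute force, using the observation that the inner maximum over $$x$$ collapses to $$\| Ay \|_1$$. A minimal sketch (the matrix is an illustrative toy example):

```python
import itertools
import numpy as np

def norm_inf_to_1(A):
    """Brute-force ||A||_{inf -> 1}: for each sign vector y the optimal
    x_i is sign((Ay)_i), which turns the inner maximum into ||Ay||_1."""
    n = A.shape[1]
    return max(np.abs(A @ np.array(y)).sum()
               for y in itertools.product([-1, 1], repeat=n))

A = np.array([[1.0, -2.0], [3.0, 0.5]])   # illustrative matrix
assert abs(norm_inf_to_1(A) - 5.5) < 1e-12  # attained at y = (1, -1)
```

The enumeration is exponential in $$n$$, consistent with the NP-hardness of the exact problem.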

One can then ask the following natural question: How well does an optimal solution to the semidefinite program approximate $$\| A \|_{\infty \to 1}$$? The Grothendieck inequality provides an answer to this question: There exists a fixed constant $$C > 0$$ such that, for any $$m, n \geq 1$$, for any $$m \times n$$ matrix $$A$$, and for any Hilbert space $$H$$,

$$\max_{x^{(i)}, y^{(i)} \in H \text{ unit vectors}} \sum_{i, j} A_{ij} \left\langle x^{(i)}, y^{(j)} \right\rangle_H \leq C \| A \|_{\infty \to 1}.$$
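The reverse direction — the semidefinite program's value is at least $$\| A \|_{\infty \to 1}$$ — can be checked directly: any $$\pm 1$$ assignment embeds into the vector program by sending each sign to a multiple of one fixed unit vector. A toy verification (matrix and ambient dimension are illustrative assumptions):

```python
import itertools
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 0.5]])   # illustrative matrix
m, n = A.shape
d = 3                                     # any ambient dimension works
e1 = np.eye(d)[0]

# Each sign pair (x, y) embeds as unit vectors x_i * e1, y_j * e1, and
# <x_i e1, y_j e1> = x_i y_j, so the SDP value dominates ||A||_{inf->1}.
best = -np.inf
for x in itertools.product([-1, 1], repeat=m):
    for y in itertools.product([-1, 1], repeat=n):
        X = [s * e1 for s in x]
        Y = [t * e1 for t in y]
        val = sum(A[i, j] * (X[i] @ Y[j]) for i in range(m) for j in range(n))
        assert abs(val - np.array(x) @ A @ np.array(y)) < 1e-12
        best = max(best, val)
```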

Bounds on the constants
The sequences $$K_G^{\mathbb R}(d)$$ and $$K_G^{\mathbb C}(d)$$ are easily seen to be increasing, and Grothendieck's result states that they are bounded, so they have limits.

Grothendieck proved that $$1.57 \approx \frac{\pi}{2} \leq K_G^{\mathbb R} \leq \operatorname{sinh}\frac{\pi}{2} \approx 2.3,$$ where $$K_G^{\mathbb R}$$ is defined to be $$\sup_d K_G^{\mathbb R}(d)$$.

Krivine (1979) improved the result by proving that $$K_G^{\mathbb R} \le \frac{\pi}{2 \ln(1 + \sqrt{2})} \approx 1.7822$$, conjecturing that the upper bound is tight. However, this conjecture was disproved by Braverman, Makarychev, Makarychev and Naor (2011).

Grothendieck constant of order d
Boris Tsirelson showed that the Grothendieck constants $$K_G^{\mathbb R}(d)$$ play an essential role in the problem of quantum nonlocality: the Tsirelson bound of any full correlation bipartite Bell inequality for a quantum system of dimension d is bounded above by $$K_G^{\mathbb R}(2d^2)$$.

Lower bounds
Some historical data on best known lower bounds of $$K_G^{\mathbb R}(d)$$ is summarized in the following table.

Upper bounds
Some historical data on best known upper bounds of $$K_G^{\mathbb R}(d)$$:

Cut norm estimation
Given an $$m \times n$$ real matrix $$A = (a_{ij})$$, the cut norm of $$A$$ is defined by


 * $$\| A \|_\square = \max_{S \subset [m], T \subset [n]}  \left| \sum_{i \in S, j \in T} a_{ij} \right|.$$
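On small matrices the cut norm can be evaluated directly from this definition by ranging over indicator vectors of $$S$$ and $$T$$; a minimal brute-force sketch (the matrix is an illustrative toy example):

```python
import itertools
import numpy as np

def cut_norm(A):
    """Brute-force cut norm: maximize |sum_{i in S, j in T} a_ij| over all
    row subsets S and column subsets T, encoded as 0/1 indicator vectors."""
    m, n = A.shape
    return max(abs(np.array(s) @ A @ np.array(t))
               for s in itertools.product([0, 1], repeat=m)
               for t in itertools.product([0, 1], repeat=n))

A = np.array([[1.0, -1.0], [-1.0, 1.0]])
assert cut_norm(A) == 1.0   # any single entry; larger rectangles cancel
```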

The notion of cut norm is essential in designing efficient approximation algorithms for dense graphs and matrices. More generally, the definition of cut norm can be generalized for symmetric measurable functions $$W : [0, 1]^2 \to \mathbb R $$ so that the cut norm of $$W $$ is defined by


 * $$\| W \|_\square = \sup_{S, T \subset [0, 1]} \left| \int_{S \times T} W \right|. $$

This generalized definition of cut norm is crucial in the study of the space of graphons, and the two definitions of cut norm can be linked via the adjacency matrix of a graph.

An application of the Grothendieck inequality is to give an efficient algorithm for approximating the cut norm of a given real matrix $$A$$; specifically, given an $$m \times n$$ real matrix, one can find a number $$\alpha$$ such that

$$\| A \|_\square \leq \alpha \leq C \| A \|_\square,$$

where $$C$$ is an absolute constant. This approximation algorithm uses semidefinite programming.

We give a sketch of this approximation algorithm. Let $$B = (b_{ij})$$ be the $$(m + 1) \times (n + 1)$$ matrix defined by

$$\begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n} & -\sum_{k = 1}^n a_{1k} \\ a_{21} & a_{22} & \ldots & a_{2n} & -\sum_{k = 1}^n a_{2k} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} & -\sum_{k = 1}^n a_{mk} \\ -\sum_{\ell = 1}^m a_{\ell 1} & -\sum_{\ell = 1}^m a_{\ell 2} & \ldots & -\sum_{\ell = 1}^m a_{\ell n} & \sum_{k = 1}^n \sum_{\ell = 1}^m a_{\ell k} \end{pmatrix}.$$
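A short sketch of this construction (the matrix is an illustrative toy example); the key structural property is that every row and column of $$B$$ sums to zero:

```python
import numpy as np

def augment(A):
    """Append the negated row sums as a last column, the negated column
    sums as a last row, and the grand total in the corner, as above."""
    col = -A.sum(axis=1, keepdims=True)
    row = -A.sum(axis=0, keepdims=True)
    return np.block([[A, col], [row, np.array([[A.sum()]])]])

A = np.array([[1.0, 2.0], [3.0, -4.0]])   # illustrative matrix
B = augment(A)
# every row and column of B sums to zero; this balancing is what lets a
# cut-norm maximizer of B be folded back into one of A
assert np.allclose(B.sum(axis=0), 0) and np.allclose(B.sum(axis=1), 0)
```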

One can verify that $$\| A \|_\square = \| B \|_\square$$ by observing that, if $$S \subset [m + 1], T \subset [n + 1]$$ form a maximizer for the cut norm of $$B$$, then

$$S^* = \begin{cases} S, & \text{if } m + 1 \not \in S, \\ {[m]} \setminus S, & \text{otherwise}, \end{cases} \qquad T^* = \begin{cases} T, & \text{if } n + 1 \not \in T, \\ {[n]} \setminus T, & \text{otherwise}, \end{cases}$$

form a maximizer for the cut norm of $$A$$. Next, one can verify that $$\| B \|_\square = \| B \|_{\infty \to 1}/4$$, where

$$\| B \|_{\infty \to 1} = \max \left\{ \sum_{i = 1}^{m + 1} \sum_{j = 1}^{n + 1} b_{ij} \varepsilon_i \delta_j : \varepsilon_1, \ldots, \varepsilon_{m + 1} \in \{ -1, 1 \}, \delta_1, \ldots, \delta_{n + 1} \in \{ -1, 1 \} \right\}.$$

Although not important in this proof, $$\| B \|_{\infty \to 1}$$ can be interpreted as the norm of $$B$$ when viewed as a linear operator from $$\ell_\infty^{n + 1}$$ to $$\ell_1^{m + 1}$$.
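Both identities can be confirmed by brute force on a small example (toy matrix; the exponential-time enumeration is for illustration only):

```python
import itertools
import numpy as np

def cut_norm(M):
    """Brute-force cut norm over 0/1 indicator vectors of S and T."""
    m, n = M.shape
    return max(abs(np.array(s) @ M @ np.array(t))
               for s in itertools.product([0, 1], repeat=m)
               for t in itertools.product([0, 1], repeat=n))

def norm_inf_to_1(M):
    """Brute-force ||M||_{inf -> 1}; the inner max over signs is ||My||_1."""
    n = M.shape[1]
    return max(np.abs(M @ np.array(y)).sum()
               for y in itertools.product([-1, 1], repeat=n))

A = np.array([[1.0, 2.0], [3.0, -4.0]])          # illustrative matrix
B = np.block([[A, -A.sum(axis=1, keepdims=True)],
              [-A.sum(axis=0, keepdims=True), np.array([[A.sum()]])]])
# rows and columns of B sum to zero, and for such matrices
# ||B||_square = ||B||_{inf -> 1} / 4
assert abs(cut_norm(B) - norm_inf_to_1(B) / 4) < 1e-12
assert abs(cut_norm(B) - cut_norm(A)) < 1e-12    # ||A||_square = ||B||_square
```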

Now it suffices to design an efficient algorithm for approximating $$\| A \|_{\infty \to 1}$$. We consider the following semidefinite program:

$$\text{SDP}(A) = \max \left\{ \sum_{i = 1}^m \sum_{j = 1}^n a_{ij} \left\langle x_i, y_j \right\rangle : x_1, \ldots, x_m, y_1, \ldots, y_n \in S^{n + m - 1} \right\}.$$

Then $$\text{SDP}(A) \geq \| A \|_{\infty \to 1}$$. The Grothendieck inequality implies that $$\text{SDP}(A) \leq K_G^{\mathbb R} \| A \|_{\infty \to 1}$$. Many algorithms (such as interior-point methods, first-order methods, the bundle method, the augmented Lagrangian method) are known to output the value of a semidefinite program up to an additive error $$\varepsilon$$ in time that is polynomial in the program description size and $$\log (1/\varepsilon)$$. Therefore, one can output $$\alpha = \text{SDP}(B)/4$$, which satisfies

$$\| A \|_\square \leq \alpha \leq C \| A \|_\square \qquad \text{with} \qquad C = K_G^{\mathbb R}. $$

Szemerédi's regularity lemma
Szemerédi's regularity lemma is a useful tool in graph theory, asserting (informally) that any graph can be partitioned into a controlled number of pieces that interact with each other in a pseudorandom way. Another application of the Grothendieck inequality is to produce a partition of the vertex set that satisfies the conclusion of Szemerédi's regularity lemma, via the cut norm estimation algorithm, in time that is polynomial in the upper bound of Szemerédi's regular partition size but independent of the number of vertices in the graph.

It turns out that the main "bottleneck" of constructing a Szemerédi regular partition in polynomial time is to determine in polynomial time whether or not a given pair $$(X, Y)$$ is close to being $$\varepsilon$$-regular, meaning that for all $$S \subset X, T \subset Y$$ with $$|S| \geq \varepsilon |X|, |T| \geq \varepsilon |Y|$$, we have

$$\left| \frac{e(S, T)}{|S||T|} - \frac{e(X, Y)}{|X||Y|} \right| \leq \varepsilon,$$

where $$e(X', Y') = |\{ (u, v) \in X' \times Y' : uv \in E \}|$$ for all $$X', Y' \subset V$$ and $$V, E$$ are the vertex and edge sets of the graph, respectively. To that end, we construct an $$n \times n$$ matrix $$A = (a_{xy})_{(x, y) \in X \times Y}$$, where $$n = |V|$$, defined by

$$a_{xy} = \begin{cases} 1 - \frac{e(X, Y)}{|X||Y|}, & \text{if } xy \in E, \\ -\frac{e(X, Y)}{|X||Y|}, & \text{otherwise}. \end{cases}$$

Then for all $$S \subset X, T \subset Y$$,

$$\left| \sum_{x \in S, y \in T} a_{xy} \right| = |S||T| \left| \frac{e(S, T)}{|S||T|} - \frac{e(X, Y)}{|X||Y|} \right|.$$
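This identity is immediate from the definition of $$a_{xy}$$ and can be checked on a toy bipartite pair (the graph, edge set, and subsets below are illustrative assumptions):

```python
import numpy as np

X, Y = [0, 1, 2], [0, 1]                 # toy bipartite pair (illustrative)
E = {(0, 0), (1, 1), (2, 0)}             # edges between X and Y
d = len(E) / (len(X) * len(Y))           # density e(X, Y) / (|X||Y|)
A = np.array([[1 - d if (x, y) in E else -d for y in Y] for x in X])

S, T = [0, 2], [0]                       # arbitrary subsets of X and Y
e_ST = sum((x, y) in E for x in S for y in T)
lhs = abs(sum(A[x, y] for x in S for y in T))
rhs = len(S) * len(T) * abs(e_ST / (len(S) * len(T)) - d)
assert abs(lhs - rhs) < 1e-12            # matches the displayed identity
```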

Hence, if $$(X, Y)$$ is not $$\varepsilon$$-regular, then $$\| A \|_\square \geq \varepsilon^3 n^2$$. It follows that using the cut norm approximation algorithm together with the rounding technique, one can find in polynomial time $$S \subset X, T \subset Y$$ such that

$$\min\left\{ n|S|, n|T|, n^2 \left| \frac{e(S, T)}{|S||T|} - \frac{e(X, Y)}{|X||Y|} \right| \right\} \geq \left|\sum_{x \in S, y \in T} a_{xy}\right| \geq \frac{1}{K_G^{\mathbb R}} \varepsilon^3 n^2 \geq \frac{1}{2} \varepsilon^3 n^2.$$

Then the algorithm for producing a Szemerédi's regular partition follows from the constructive argument of Alon et al.

Grothendieck inequality of a graph
The Grothendieck inequality of a graph states that for each $$n \in \mathbb N$$ and each graph $$G = (\{ 1, \ldots, n \}, E)$$ without self-loops, there exists a constant $$K > 0$$ such that every $$n \times n$$ matrix $$A = (a_{ij})$$ satisfies

$$\max_{x_1, \ldots, x_n \in S^{n - 1}} \sum_{ij \in E} a_{ij} \left\langle x_i, x_j \right\rangle \leq K \max_{\varepsilon_1, \ldots, \varepsilon_n \in \{ -1, 1 \}} \sum_{ij \in E} a_{ij} \varepsilon_i \varepsilon_j.$$

The Grothendieck constant of a graph $$G$$, denoted $$K(G)$$, is defined to be the smallest constant $$K$$ that satisfies the above property.

The Grothendieck inequality of a graph is an extension of the Grothendieck inequality because the former inequality is the special case of the latter inequality when $$G$$ is a bipartite graph with two copies of $$\{ 1, \ldots, n \}$$ as its bipartition classes. Thus,

$$K_G = \sup_{n \in \mathbb N} \{ K(G) : G \text{ is an } n \text{-vertex bipartite graph} \}.$$

For $$G = K_n$$, the $$n$$-vertex complete graph, the Grothendieck inequality of $$G$$ becomes

$$\max_{x_1, \ldots, x_n \in S^{n - 1}} \sum_{i, j \in \{ 1, \ldots, n \}, i \neq j} a_{ij} \left\langle x_i, x_j \right\rangle \leq K(K_n) \max_{\varepsilon_1, \ldots, \varepsilon_n \in \{ -1, 1 \}} \sum_{i, j \in \{ 1, \ldots, n \}, i \neq j} a_{ij} \varepsilon_i \varepsilon_j.$$

It turns out that $$K(K_n) \asymp \log n$$. On one hand, we have $$K(K_n) \lesssim \log n$$. Indeed, the following inequality is true for any $$n \times n$$ matrix $$A = (a_{ij})$$, which implies that $$K(K_n) \lesssim \log n$$ by the Cauchy-Schwarz inequality:

$$\max_{x_1, \ldots, x_n \in S^{n - 1}} \sum_{i, j \in \{ 1, \ldots, n \}, i \neq j} a_{ij} \left\langle x_i, x_j \right\rangle \leq \log\left(\frac{\sum_{i \in \{ 1, \ldots, n \}} \sum_{j \in \{ 1, \ldots, n \} \setminus \{ i \}} |a_{ij}|}{\sqrt{\sum_{i \in \{ 1, \ldots, n \}} \sum_{j \in \{ 1, \ldots, n \} \setminus \{ i \}} a_{ij}^2}}\right) \max_{\varepsilon_1, \ldots, \varepsilon_n \in \{ -1, 1 \}} \sum_{i, j \in \{ 1, \ldots, n \}, i \neq j} a_{ij} \varepsilon_i \varepsilon_j.$$

On the other hand, the matching lower bound $$K(K_n) \gtrsim \log n$$ is due to Alon, Makarychev, Makarychev and Naor in 2006.

The Grothendieck constant $$K(G)$$ of a graph $$G$$ depends upon the structure of $$G$$. It is known that

$$\log \omega \lesssim K(G) \lesssim \log \vartheta,$$

and

$$K(G) \leq \frac{\pi}{2\log\left(\frac{1 + \sqrt{(\vartheta - 1)^2 + 1}}{\vartheta - 1}\right)},$$

where $$\omega$$ is the clique number of $$G$$, i.e., the largest $$k \in \{ 2, \ldots, n \}$$ such that there exists $$S \subset \{ 1, \ldots, n \}$$ with $$|S| = k$$ such that $$ij \in E$$ for all distinct $$i, j \in S$$, and

$$\vartheta = \min \left\{ \max_{i \in \{ 1, \ldots, n \}} \frac{1}{\langle x_i, y \rangle} : x_1, \ldots, x_n, y \in S^n, \left\langle x_i, x_j \right\rangle = 0 \;\forall ij \in E \right\}.$$

The parameter $$\vartheta$$ is known as the Lovász theta function of the complement of $$G$$.

L^p Grothendieck inequality
In the application of the Grothendieck inequality for approximating the cut norm, we have seen that the Grothendieck inequality answers the following question: How well does an optimal solution to the semidefinite program $$\text{SDP}(A)$$ approximate $$\| A \|_{\infty \to 1}$$, which can be viewed as an optimization problem over the unit cube? More generally, we can ask similar questions over convex bodies other than the unit cube.

For instance, the following inequality is due to Naor and Schechtman and independently due to Guruswami et al.: For every $$n \times n$$ matrix $$A = (a_{ij})$$ and every $$p \geq 2$$,

$$\max_{x_1, \ldots, x_n \in \mathbb R^n, \sum_{k = 1}^n \| x_k \|_2^p \leq 1} \sum_{i = 1}^n \sum_{j = 1}^n a_{ij} \left\langle x_i, x_j \right\rangle \leq \gamma_p^2 \max_{t_1, \ldots, t_n \in \mathbb R, \sum_{k = 1}^n | t_k |^p \leq 1} \sum_{i = 1}^n \sum_{j = 1}^n a_{ij} t_i t_j,$$

where

$$\gamma_p = \sqrt{2} \left(\frac{\Gamma((p + 1)/2)}{\sqrt{\pi}}\right)^{1/p}.$$

The constant $$\gamma_p^2$$ is sharp in the inequality. Stirling's formula implies that $$\gamma_p^2 = p/e + O(1)$$ as $$p \to \infty$$.
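Both facts about $$\gamma_p$$ — the value $$\gamma_2 = 1$$ (the second moment of a standard Gaussian) and the Stirling asymptotic — are easy to check numerically; a minimal sketch using only the standard library:

```python
import math

def gamma_p(p):
    """gamma_p = sqrt(2) * (Gamma((p + 1) / 2) / sqrt(pi)) ** (1 / p),
    the p-th moment constant (L^p norm) of a standard Gaussian."""
    return math.sqrt(2) * (math.gamma((p + 1) / 2) / math.sqrt(math.pi)) ** (1 / p)

assert abs(gamma_p(2) - 1.0) < 1e-12          # gamma_2 = 1
# Stirling's formula: gamma_p**2 = p/e + O(1) as p grows
assert abs(gamma_p(200) ** 2 / (200 / math.e) - 1) < 0.05
```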