Farkas' lemma

In mathematics, Farkas' lemma is a solvability theorem for a finite system of linear inequalities. It was originally proven by the Hungarian mathematician Gyula Farkas. Farkas' lemma is the key result underpinning linear programming duality and has played a central role in the development of mathematical optimization (alternatively, mathematical programming). It is used, among other things, in the proof of the Karush–Kuhn–Tucker theorem in nonlinear programming. Remarkably, in the area of the foundations of quantum theory, the lemma also underlies the complete set of Bell inequalities in the form of necessary and sufficient conditions for the existence of a local hidden-variable theory, given data from any specific set of measurements.

Generalizations of Farkas' lemma concern solvability theorems for convex inequalities, i.e., infinite systems of linear inequalities. Farkas' lemma belongs to a class of statements called "theorems of the alternative": theorems stating that exactly one of two systems has a solution.

Statement of the lemma
There are a number of slightly different (but equivalent) formulations of the lemma in the literature. The one given here is due to Gale, Kuhn and Tucker (1951).

Farkas' lemma: Let $$\mathbf{A} \in \R^{m \times n}$$ and $$\mathbf{b} \in \R^m.$$ Then exactly one of the following two assertions is true:
 * 1) There exists an $$\mathbf{x} \in \R^n$$ such that $$\mathbf{Ax} = \mathbf{b}$$ and $$\mathbf{x} \geq 0.$$
 * 2) There exists a $$\mathbf{y} \in \R^m$$ such that $$\mathbf{A}^\top \mathbf{y} \geq 0$$ and $$\mathbf{b}^\top \mathbf{y} < 0.$$

Here, the notation $$\mathbf{x} \geq 0$$ means that all components of the vector $$\mathbf{x}$$ are nonnegative.

Example
Let $m, n = 2$, $$\mathbf{A} = \begin{bmatrix}6 & 4 \\3 & 0\end{bmatrix},$$ and $$\mathbf{b} = \begin{bmatrix}b_1 \\b_2\end{bmatrix}.$$ The lemma says that exactly one of the following two statements must be true (depending on $b_{1}$ and $b_{2}$):
 * 1) There exist $x_1 \geq 0$, $x_2 \geq 0$ such that $6x_1 + 4x_2 = b_1$ and $3x_1 = b_2$, or
 * 2) There exist $y_1, y_2$ such that $6y_1 + 3y_2 \geq 0$, $4y_1 \geq 0$, and $b_1 y_1 + b_2 y_2 < 0$.

Here is a proof of the lemma in this special case:
 * If $b_2 \geq 0$ and $b_1 - 2b_2 \geq 0$, then option 1 is true, since the solution of the linear equations is $$x_1 = \tfrac{b_2}{3}$$ and $$x_2 = \tfrac{b_1 - 2b_2}{4}.$$ Option 2 is false: any admissible $y_1, y_2$ satisfy $y_1 \geq 0$ (from $4y_1 \geq 0$), hence $$b_1 y_1 + b_2 y_2 \ge b_2 (2y_1 + y_2) = b_2 \frac{6y_1 + 3y_2}{3} \geq 0,$$ contradicting $b_1 y_1 + b_2 y_2 < 0$.
 * Otherwise, option 1 is false, since the unique solution of the linear equations has a negative component. In this case, option 2 is true:
 * If $b_{2} < 0$, then we can take e.g. $y_{1} = 0$ and $y_{2} = 1$.
 * If $b_{1} − 2b_{2} < 0$, then, for some number $B > 0$, $b_{1} = 2b_{2} − B$, so: $$b_1 y_1 + b_2 y_2 = 2b_2 y_1 + b_2 y_2 - By_1 = b_2 \frac{6y_1 + 3y_2}{3} - By_1.$$ Thus we can take, for example, $y_{1} = 1$, $y_{2} = −2$.
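The case analysis above can be checked mechanically. The following sketch (plain Python; the function name is ours, for illustration only) returns a certificate for whichever alternative holds:

```python
def farkas_certificate(b1, b2):
    """For A = [[6, 4], [3, 0]]: return ('x', (x1, x2)) with x1, x2 >= 0 and
    A x = b when option 1 holds, else ('y', (y1, y2)) with 6*y1 + 3*y2 >= 0,
    4*y1 >= 0 and b1*y1 + b2*y2 < 0, following the case split in the text."""
    if b2 >= 0 and b1 - 2 * b2 >= 0:
        # option 1: the unique solution of the linear system is nonnegative
        return ('x', (b2 / 3, (b1 - 2 * b2) / 4))
    if b2 < 0:
        # option 2 with y = (0, 1): b1*0 + b2*1 = b2 < 0
        return ('y', (0, 1))
    # remaining case b1 - 2*b2 < 0: option 2 with y = (1, -2), 6*1 + 3*(-2) = 0
    return ('y', (1, -2))

# b = (10, 3): option 1 holds with x = (1, 1)
kind, (x1, x2) = farkas_certificate(10, 3)
assert kind == 'x' and x1 >= 0 and x2 >= 0 and 6*x1 + 4*x2 == 10 and 3*x1 == 3

# b = (1, 3): b1 - 2*b2 < 0, so option 2 holds
kind, (y1, y2) = farkas_certificate(1, 3)
assert kind == 'y' and 6*y1 + 3*y2 >= 0 and 4*y1 >= 0 and 1*y1 + 3*y2 < 0
```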

Geometric interpretation
Consider the closed convex cone $$C(\mathbf{A})$$ spanned by the columns of $A$; that is,

$$C(\mathbf{A}) = \{\mathbf{A}\mathbf{x} \mid \mathbf{x} \geq 0 \}.$$

Observe that $$C(\mathbf{A})$$ is the set of the vectors $b$ for which the first assertion in the statement of Farkas' lemma holds. On the other hand, the vector $y$ in the second assertion is orthogonal to a hyperplane that separates $b$ and $$C(\mathbf{A}).$$ The lemma follows from the observation that $b$ belongs to $$C(\mathbf{A})$$ if and only if there is no hyperplane that separates it from $$C(\mathbf{A}).$$

More precisely, let $$\mathbf{a}_1, \dots, \mathbf{a}_n \in \R^m$$ denote the columns of $A$. In terms of these vectors, Farkas' lemma states that exactly one of the following two statements is true:
 * 1) There exist non-negative coefficients $$x_1, \dots, x_n \in \R$$ such that $$\mathbf{b} = x_1 \mathbf{a}_1 + \dots + x_n \mathbf{a}_n.$$
 * 2) There exists a vector $$\mathbf{y} \in \R^m$$ such that $$\mathbf{a}_i^\top \mathbf{y} \geq 0$$ for $$i = 1, \dots, n,$$ and $$\mathbf{b}^\top \mathbf{y} < 0.$$

The sums $$x_1 \mathbf{a}_1 + \dots + x_n \mathbf{a}_n$$ with nonnegative coefficients $$x_1, \dots, x_n$$ form the cone spanned by the columns of $A$. Therefore, the first statement says that $$\mathbf{b}$$ belongs to $$C(\mathbf{A}).$$

The second statement says that there exists a vector $$\mathbf{y}$$ such that the angle of $$\mathbf{y}$$ with each vector $$\mathbf{a}_i$$ is at most 90°, while the angle of $$\mathbf{y}$$ with the vector $$\mathbf{b}$$ is more than 90°. The hyperplane normal to this vector has the vectors $$\mathbf{a}_i$$ on one side and the vector $$\mathbf{b}$$ on the other side. Hence, this hyperplane separates the cone spanned by $$\mathbf{a}_1, \dots, \mathbf{a}_n$$ from the vector $$\mathbf{b}.$$

For example, let $n, m = 2$, $$\mathbf{a}_1 = (1, 0)^\top,$$ and $$\mathbf{a}_2 = (1, 1)^\top.$$ The convex cone spanned by $$\mathbf{a}_1$$ and $$\mathbf{a}_2$$ is a wedge-shaped slice of the first quadrant in the $xy$ plane. Now, suppose $$\mathbf{b} = (0, 1)^\top.$$ Certainly, $$\mathbf{b}$$ is not in the convex cone $$\{x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 \mid x_1, x_2 \geq 0\}.$$ Hence, there must be a separating hyperplane. Let $$\mathbf{y} = (1, -1)^\top.$$ We can see that $$\mathbf{a}_1 \cdot \mathbf{y} = 1,$$ $$\mathbf{a}_2 \cdot \mathbf{y} = 0,$$ and $$\mathbf{b} \cdot \mathbf{y} = -1.$$ Hence, the hyperplane with normal $$\mathbf{y}$$ indeed separates the convex cone from $$\mathbf{b}.$$
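The dot products in this example are easy to verify directly; the short check below (illustrative Python) confirms that $\mathbf{y}$ separates the cone from $\mathbf{b}$:

```python
def dot(u, v):
    """Standard inner product of two vectors given as tuples."""
    return sum(ui * vi for ui, vi in zip(u, v))

a1, a2 = (1, 0), (1, 1)   # generators of the convex cone
b = (0, 1)                # vector outside the cone
y = (1, -1)               # normal of the candidate separating hyperplane

# y makes an angle of at most 90 degrees with each generator ...
assert dot(a1, y) == 1 and dot(a2, y) == 0
# ... and an angle of more than 90 degrees with b, so the hyperplane
# {z : y . z = 0} separates the cone from b
assert dot(b, y) == -1
```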

Logic interpretation
A particularly suggestive and easy-to-remember version is the following: if a set of linear inequalities has no solution, then a contradiction can be produced from it by a linear combination with nonnegative coefficients. In formulas: if $$\mathbf{Ax} \le \mathbf{b}$$ is unsolvable, then the system $$\mathbf{y}^\top \mathbf{A} = 0,$$ $$\mathbf{y}^\top \mathbf{b} = -1,$$ $$\mathbf{y} \ge 0$$ has a solution. Note that $$\mathbf{y}^\top \mathbf{A}$$ is a combination of the left-hand sides and $$\mathbf{y}^\top \mathbf{b}$$ a combination of the right-hand sides of the inequalities. Since the nonnegative combination produces a zero vector on the left and a $-1$ on the right, the contradiction is apparent.

Thus, Farkas' lemma can be viewed as a theorem of logical completeness: $$\mathbf{Ax} \le \mathbf{b}$$ is a set of "axioms", the linear combinations are the "derivation rules", and the lemma says that, if the set of axioms is inconsistent, then it can be refuted using the derivation rules.
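As a concrete illustration (a hand-picked instance, not from the text above): the system $x \le 1$, $-x \le -2$ has no solution, and the nonnegative multipliers $y = (1, 1)$ combine its rows into the contradiction $0 \cdot x \le -1$:

```python
# rows of A and entries of b for the unsolvable system x <= 1, -x <= -2
A = [[1], [-1]]
b = [1, -2]
y = [1, 1]   # nonnegative multipliers combining the two rows

assert all(yi >= 0 for yi in y)
# y^T A: the same combination applied to the left-hand sides
lhs = [sum(y[i] * A[i][j] for i in range(2)) for j in range(1)]
# y^T b: the combination of the right-hand sides
rhs = sum(y[i] * b[i] for i in range(2))
# the combined inequality reads 0 * x <= -1, an apparent contradiction
assert lhs == [0] and rhs == -1
```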

Implications in complexity theory
Farkas' lemma implies that the decision problem "Given a system of linear equations, does it have a non-negative solution?" is in the intersection of NP and co-NP. This is because, according to the lemma, both a "yes" answer and a "no" answer have a proof that can be verified in polynomial time. The problems in the intersection $$NP \cap coNP$$ are also called well-characterized problems. It is a long-standing open question whether $$NP \cap coNP$$ is equal to P. In particular, the question of whether a system of linear equations has a non-negative solution was not known to be in P, until it was proved using the ellipsoid method.

Variants
Farkas' lemma has several variants with different sign constraints (the first one is the original version):
 * Either $$\mathbf{Ax} = \mathbf{b}$$ has a solution $$\mathbf{x} \geq 0,$$ or $$\mathbf{A}^\top\mathbf{y} \geq 0$$ has a solution $$\mathbf{y} \in \R^m$$ with $$\mathbf{b}^\top \mathbf{y} < 0.$$
 * Either $$\mathbf{Ax} \leq \mathbf{b}$$ has a solution $$\mathbf{x} \geq 0,$$ or $$\mathbf{A}^\top\mathbf{y} \geq 0$$ has a solution $$\mathbf{y} \geq 0$$ with $$\mathbf{b}^\top \mathbf{y} < 0.$$
 * Either $$\mathbf{Ax} \leq \mathbf{b}$$ has a solution $$\mathbf{x} \in \R^n,$$ or $$\mathbf{A}^\top\mathbf{y} = 0$$ has a solution $$\mathbf{y} \geq 0$$ with $$\mathbf{b}^\top \mathbf{y} < 0.$$
 * Either $$\mathbf{Ax} = \mathbf{b}$$ has a solution $$\mathbf{x} \in \R^n,$$ or $$\mathbf{A}^\top\mathbf{y} = 0$$ has a solution $$\mathbf{y} \in \R^m$$ with $$\mathbf{b}^\top \mathbf{y} \neq 0.$$

The last variant is mentioned for completeness; it is not actually a "Farkas lemma" since it contains only equalities. Its proof is a standard exercise in linear algebra.
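A small instance of the second variant (our own illustrative numbers): with $A = [1]$ and $\mathbf{b} = [-1]$, the system $x \le -1$, $x \ge 0$ is infeasible, and $y = 1$ certifies this:

```python
A = [[1]]     # one inequality in one variable: x <= -1
b = [-1]      # no x >= 0 can satisfy it

# the alternative must hold: y >= 0 with A^T y >= 0 and b^T y < 0
y = [1]
assert all(yi >= 0 for yi in y)
assert all(sum(A[i][j] * y[i] for i in range(1)) >= 0 for j in range(1))
assert sum(b[i] * y[i] for i in range(1)) < 0   # b^T y = -1
```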

There are also Farkas-like lemmas for integer programs. For systems of equations, the lemma is simple:
 * Assume that A and b have rational coefficients. Then either $$\mathbf{Ax} = \mathbf{b}$$ has an integral solution $$\mathbf{x} \in \Z^n,$$ or there exists $$\mathbf{y} \in \R^m$$ such that $$\mathbf{A}^\top\mathbf{y}$$ is integral and $$\mathbf{b}^\top \mathbf{y}$$ is not integral.

For systems of inequalities, the lemma is much more complicated. It is based on the following two rules of inference:
 * 1) Given inequalities $$a_1^\top x \leq b_1,\ldots, a_m^\top x \leq b_m$$ and nonnegative coefficients $$w_1,\ldots,w_m$$, infer the inequality $$\left(\sum_{i=1}^m w_i a_i^\top\right) x \leq \sum_{i=1}^m w_i b_i$$.
 * 2) Given an inequality $$a_1 x_1 + \cdots + a_m x_m \leq b$$, infer the inequality $$\lfloor a_1\rfloor x_1 + \cdots + \lfloor a_m \rfloor x_m \leq \lfloor b \rfloor$$ (valid for integral $$x \geq 0$$).

The lemma says that:
 * Assume that A and b have rational coefficients. Then either $$\mathbf{Ax} \leq \mathbf{b}$$ has an integral solution $$\mathbf{x} \in \Z^n, \mathbf{x} \geq 0,$$ or it is possible to infer from $$\mathbf{Ax} \leq \mathbf{b}$$, using finitely many applications of inference rules 1 and 2, the contradictory inequality $$0^\top x \leq -1$$.
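For example (a hand-picked instance, not from the text): the system $2x \le 1$, $-2x \le -1$ has the rational solution $x = \tfrac{1}{2}$ but no nonnegative integral one, and three applications of the rules derive $0^\top x \le -1$. A small Python sketch of the two rules (function names are ours):

```python
from fractions import Fraction
from math import floor

def combine(ineqs, weights):
    """Rule 1: nonnegative combination of inequalities, each given as a
    pair (a, b) representing a^T x <= b."""
    assert all(w >= 0 for w in weights)
    n = len(ineqs[0][0])
    a = [sum(w * ai[j] for w, (ai, _) in zip(weights, ineqs)) for j in range(n)]
    return (a, sum(w * bi for w, (_, bi) in zip(weights, ineqs)))

def round_down(ineq):
    """Rule 2: floor every coefficient and the right-hand side."""
    a, b = ineq
    return ([floor(ai) for ai in a], floor(b))

# the system 2x <= 1, -2x <= -1: solvable over the rationals (x = 1/2),
# but not in nonnegative integers
r1 = ([2], Fraction(1))
r2 = ([-2], Fraction(-1))
half = Fraction(1, 2)

u = round_down(combine([r1], [half]))    # x <= 1/2   becomes   x <= 0
v = round_down(combine([r2], [half]))    # -x <= -1/2 becomes  -x <= -1
contradiction = combine([u, v], [1, 1])  # adds up to 0*x <= -1
assert contradiction == ([0], -1)
```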

Generalizations
Generalized Farkas' lemma: Let $$\mathbf{A} \in \R^{m \times n},$$ $$\mathbf{b} \in \R^m,$$ let $$\mathbf{S} \subseteq \R^n$$ be a closed convex cone, and let $$\mathbf{S}^* = \{\mathbf{z} \in \R^n \mid \mathbf{z}^\top \mathbf{x} \geq 0 \text{ for all } \mathbf{x} \in \mathbf{S}\}$$ denote its dual cone. If the convex cone $$C(\mathbf{A}) = \{\mathbf{A}\mathbf{x} \mid \mathbf{x} \in \mathbf{S}\}$$ is closed, then exactly one of the following two statements is true:
 * 1) There exists an $$\mathbf{x} \in \mathbf{S}$$ such that $$\mathbf{Ax} = \mathbf{b}.$$
 * 2) There exists a $$\mathbf{y} \in \R^m$$ such that $$\mathbf{A}^\top \mathbf{y} \in \mathbf{S}^*$$ and $$\mathbf{b}^\top \mathbf{y} < 0.$$

The generalized Farkas' lemma can be interpreted geometrically as follows: either a vector is in a given closed convex cone, or there exists a hyperplane separating the vector from the cone; there are no other possibilities. The closedness condition is necessary; see Separation theorem I in Hyperplane separation theorem. For the original Farkas' lemma, $$\mathbf{S}$$ is the nonnegative orthant $$\R_+^n,$$ in which case the closedness condition holds automatically. Indeed, whenever $$\mathbf{S}$$ is a polyhedral convex cone, i.e., there exists a $$\mathbf{B} \in \R^{n \times k}$$ such that $$\mathbf{S} = \{\mathbf{B}\mathbf{x} \mid \mathbf{x} \in \R_+^k \},$$ the closedness condition holds automatically. In convex optimization, various kinds of constraint qualification, e.g. Slater's condition, are responsible for the closedness of the underlying convex cone $$C(\mathbf{A}).$$

By setting $$\mathbf{S} = \R^n$$ and $$\mathbf{S}^* = \{0\}$$ in the generalized Farkas' lemma, we obtain the following corollary about the solvability of a finite system of linear equalities:
 * Either $$\mathbf{Ax} = \mathbf{b}$$ has a solution $$\mathbf{x} \in \R^n,$$ or $$\mathbf{A}^\top\mathbf{y} = 0$$ has a solution $$\mathbf{y} \in \R^m$$ with $$\mathbf{b}^\top \mathbf{y} \neq 0$$ (the last of the variants listed above).
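This corollary can be checked on a small singular system (our own illustrative numbers): the columns of $A$ below span only the line $\{(t, t)\}$, so $\mathbf{b} = (1, 2)^\top$ is unreachable, and $\mathbf{y} = (1, -1)^\top$ witnesses this:

```python
A = [[1, 0],
     [1, 0]]   # column space of A is the line {(t, t)}
b = [1, 2]     # not in the column space of A
y = [1, -1]    # certificate vector

# A^T y = 0: y is orthogonal to every column of A ...
assert all(sum(A[i][j] * y[i] for i in range(2)) == 0 for j in range(2))
# ... but b^T y = -1 != 0, so Ax = b has no solution
assert sum(b[i] * y[i] for i in range(2)) != 0
```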

Further implications
Farkas' lemma can be varied into many further theorems of the alternative by simple modifications, such as Gordan's theorem: either $$\mathbf{Ax} < 0$$ has a solution $$\mathbf{x},$$ or $$\mathbf{A}^\top \mathbf{y} = 0$$ has a nonzero solution $$\mathbf{y}$$ with $$\mathbf{y} \geq 0.$$

Common applications of Farkas' lemma include proving the strong duality theorem associated with linear programming and the Karush–Kuhn–Tucker conditions. An extension of Farkas' lemma can be used to analyze the strong duality conditions for, and construct the dual of, a semidefinite program. It is sufficient to prove the existence of the Karush–Kuhn–Tucker conditions using the Fredholm alternative, but for the conditions to be necessary, one must apply von Neumann's minimax theorem to show that the equations derived by Cauchy are not violated.