Duality gap

In optimization problems in applied mathematics, the duality gap is the difference between the optimal values of the primal problem and its dual. If $$d^*$$ is the optimal dual value and $$p^*$$ is the optimal primal value, then the duality gap is equal to $$p^* - d^*$$. For minimization problems this value is always greater than or equal to 0, a fact known as weak duality. The duality gap is zero if and only if strong duality holds; otherwise the gap is strictly positive and only weak duality holds.
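As a minimal sketch, consider a tiny linear program and its dual (the specific problem and function names below are illustrative assumptions, not taken from the text). For linear programs strong duality holds, so the gap closes at the optima:

```python
# Tiny LP (minimization): p* = min  x1 + 2*x2
#                         s.t.  x1 + x2 >= 1,  x1, x2 >= 0
# Its dual:               d* = max  y
#                         s.t.  y <= 1,  y <= 2,  y >= 0

def primal_value(x1, x2):
    """Objective value of a primal-feasible point."""
    assert x1 + x2 >= 1 and x1 >= 0 and x2 >= 0, "not primal-feasible"
    return x1 + 2 * x2

def dual_value(y):
    """Objective value of a dual-feasible point."""
    assert 0 <= y <= 1, "not dual-feasible"
    return y

# Weak duality: any primal/dual-feasible pair gives a nonnegative gap.
gap_suboptimal = primal_value(0.5, 0.5) - dual_value(0.5)  # 1.5 - 0.5 = 1.0

# At the optima x* = (1, 0) and y* = 1 the gap is zero (strong duality for LPs).
gap_optimal = primal_value(1, 0) - dual_value(1)           # 1 - 1 = 0
```

Evaluating the gap at feasible points in this way is exactly what makes it useful: it is computable without knowing $$p^*$$ in advance.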

In general, consider two dual pairs of separated locally convex spaces $$\left(X,X^*\right)$$ and $$\left(Y,Y^*\right)$$. Given a function $$f: X \to \mathbb{R} \cup \{+\infty\}$$, we can define the primal problem by
 * $$\inf_{x \in X} f(x). \,$$

If there are constraint conditions, these can be built into the function $$f$$ by replacing $$f$$ with $$f + I_\text{constraints}$$, where $$I_\text{constraints}$$ is the indicator function of the constraint set. Now let $$F: X \times Y \to \mathbb{R} \cup \{+\infty\}$$ be a perturbation function such that $$F(x,0) = f(x)$$. The duality gap is the difference given by
 * $$\inf_{x \in X} [F(x,0)] - \sup_{y^* \in Y^*} [-F^*(0,y^*)]$$

where $$F^*$$ is the convex conjugate in both variables.
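As a worked instance (the choice of perturbation function here is an illustrative assumption, the standard Fenchel setting), take $$f(x) = h(x) + g(Ax)$$ for a continuous linear map $$A: X \to Y$$ and choose the perturbation function

 * $$F(x,y) = h(x) + g(Ax + y),$$

so that $$F(x,0) = h(x) + g(Ax)$$. Computing the conjugate with the substitution $$z = Ax + y$$ gives

 * $$F^*(x^*,y^*) = h^*(x^* - A^*y^*) + g^*(y^*),$$

and hence $$F^*(0,y^*) = h^*(-A^*y^*) + g^*(y^*)$$. The duality gap above then becomes the classical Fenchel duality gap

 * $$\inf_{x \in X} \left[h(x) + g(Ax)\right] - \sup_{y^* \in Y^*} \left[-h^*(-A^*y^*) - g^*(y^*)\right].$$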

In computational optimization, another "duality gap" is often reported: the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal primal iterate and the value of the dual problem. Under regularity conditions, the value of the dual problem is equal to the value of the convex relaxation of the primal problem: the problem obtained by replacing a non-convex feasible set with its closed convex hull and a non-convex objective function with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original primal objective function.
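This computational use of the gap can be sketched on a one-dimensional problem solved via its Lagrangian dual (the problem and names below are illustrative assumptions). Since any dual value lower-bounds the primal optimum, the gap at a feasible iterate certifies how suboptimal that iterate can be at worst:

```python
# Illustrative problem: minimize f(x) = x**2 subject to x >= 1, so p* = 1 at x = 1.
# Lagrangian: L(x, lam) = x**2 - lam * (x - 1), with lam >= 0.
# Dual function: g(lam) = min over x of L(x, lam) = lam - lam**2 / 4
# (the inner minimizer is x = lam / 2).

def f(x):
    return x * x

def g(lam):
    assert lam >= 0, "dual variable must be nonnegative"
    return lam - lam ** 2 / 4

# Any feasible iterate x and any dual point lam give a computable certificate:
#   f(x) - p*  <=  f(x) - g(lam),   because g(lam) <= d* <= p*.
x_iterate = 1.5   # feasible (x >= 1) but suboptimal
lam = 2.0         # dual-optimal here: g(2) = 1 = d* = p*
gap = f(x_iterate) - g(lam)   # 2.25 - 1.0 = 1.25
```

A solver can stop once this gap falls below a tolerance, without ever knowing the true optimal value.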