Non-negative least squares

In mathematical optimization, the problem of non-negative least squares (NNLS) is a type of constrained least squares problem where the coefficients are not allowed to become negative. That is, given a matrix $A$ and a (column) vector of response variables $y$, the goal is to find


 * $$\operatorname{arg\,min}\limits_\mathbf{x} \|\mathbf{Ax} - \mathbf{y}\|_2^2$$ subject to $\mathbf{x} \ge 0$.

Here $\mathbf{x} \ge 0$ means that each component of the vector $\mathbf{x}$ should be non-negative, and $\|\cdot\|_2$ denotes the Euclidean norm.
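
For illustration, the problem can be solved numerically with an off-the-shelf routine; the following is a minimal sketch using SciPy's optimize.nnls (discussed below) on made-up data.

```python
import numpy as np
from scipy.optimize import nnls

# Small made-up example: 4 observations, 3 coefficients.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
y = np.array([1.0, 2.0, 0.5, 3.0])

x, residual_norm = nnls(A, y)   # x >= 0 component-wise
print(x)                        # non-negative coefficient vector
print(residual_norm)            # ||Ax - y||_2 at the solution
```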

Non-negative least squares problems turn up as subproblems in matrix decomposition, e.g. in algorithms for PARAFAC and non-negative matrix/tensor factorization. The latter can be considered a generalization of NNLS.

Another generalization of NNLS is bounded-variable least squares (BVLS), with simultaneous upper and lower bounds $\alpha_i \le x_i \le \beta_i$.
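
A minimal BVLS sketch, assuming SciPy's lsq_linear routine and arbitrarily chosen bounds (the data are the same made-up example as above):

```python
import numpy as np
from scipy.optimize import lsq_linear

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
y = np.array([1.0, 2.0, 0.5, 3.0])

lower = np.array([0.0, 0.1, -1.0])   # alpha_i, chosen arbitrarily
upper = np.array([2.0, 2.0,  1.5])   # beta_i, chosen arbitrarily

res = lsq_linear(A, y, bounds=(lower, upper), method='bvls')
print(res.x)   # satisfies lower <= x <= upper component-wise
```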

Quadratic programming version
The NNLS problem is equivalent to a quadratic programming problem


 * $$\operatorname{arg\,min}\limits_\mathbf{x \ge 0} \left(\frac{1}{2} \mathbf{x}^\mathsf{T} \mathbf{Q}\mathbf{x} + \mathbf{c}^\mathsf{T} \mathbf{x}\right),$$

where $\mathbf{Q} = \mathbf{A}^\mathsf{T}\mathbf{A}$ and $\mathbf{c} = -\mathbf{A}^\mathsf{T}\mathbf{y}$. This problem is convex, as $\mathbf{Q}$ is positive semidefinite and the non-negativity constraints form a convex feasible set.
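
The equivalence can be checked numerically; the sketch below forms $\mathbf{Q}$ and $\mathbf{c}$ and hands the bound-constrained quadratic program to a generic solver (L-BFGS-B), comparing the result with optimize.nnls. The data are again the made-up example from above.

```python
import numpy as np
from scipy.optimize import minimize, nnls

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
y = np.array([1.0, 2.0, 0.5, 3.0])

Q = A.T @ A          # positive semidefinite
c = -A.T @ y

def objective(x):
    # 1/2 x^T Q x + c^T x; differs from ||Ax - y||^2 only by a constant.
    return 0.5 * x @ Q @ x + c @ x

def gradient(x):
    return Q @ x + c

x0 = np.zeros(A.shape[1])
qp = minimize(objective, x0, jac=gradient, method='L-BFGS-B',
              bounds=[(0, None)] * A.shape[1])

x_nnls, _ = nnls(A, y)
print(np.allclose(qp.x, x_nnls, atol=1e-4))   # same minimizer, up to solver tolerance
```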

Algorithms
The first widely used algorithm for solving this problem is an active-set method published by Lawson and Hanson in their 1974 book Solving Least Squares Problems. In pseudocode, this algorithm looks as follows:


 * Inputs:
   * a real-valued matrix $A$ of dimension $m \times n$,
   * a real-valued vector $y$ of dimension $m$,
   * a real value $\varepsilon$, the tolerance for the stopping criterion.
 * Initialize:
   * Set $P = \emptyset$.
   * Set $R = \{1, \dots, n\}$.
   * Set $x$ to an all-zero vector of dimension $n$.
   * Set $w = A^\mathsf{T}(y - Ax)$.
   * Let $w^R$ denote the sub-vector with indexes from $R$.
 * Main loop: while $R \ne \emptyset$ and $\max(w^R) > \varepsilon$:
   * Let $j$ in $R$ be the index of $\max(w^R)$ in $w$.
   * Add $j$ to $P$.
   * Remove $j$ from $R$.
   * Let $A^P$ be $A$ restricted to the variables included in $P$.
   * Let $s$ be a vector of the same length as $x$. Let $s^P$ denote the sub-vector with indexes from $P$, and let $s^R$ denote the sub-vector with indexes from $R$.
   * Set $s^P = ((A^P)^\mathsf{T} A^P)^{-1} (A^P)^\mathsf{T} y$.
   * Set $s^R$ to zero.
   * Inner loop: while $\min(s^P) \le 0$:
     * Let $\alpha = \min_{i \in P,\ s_i \le 0} \dfrac{x_i}{x_i - s_i}$.
     * Set $x$ to $x + \alpha(s - x)$.
     * Move to $R$ all indices $j$ in $P$ such that $x_j \le 0$.
     * Set $s^P = ((A^P)^\mathsf{T} A^P)^{-1} (A^P)^\mathsf{T} y$.
     * Set $s^R$ to zero.
   * Set $x$ to $s$.
   * Set $w$ to $A^\mathsf{T}(y - Ax)$.
 * Output: $x$

This algorithm takes a finite number of steps to reach a solution and smoothly improves its candidate solution as it goes (so it can find good approximate solutions when cut off at a reasonable number of iterations), but is very slow in practice, owing largely to the repeated computation of the pseudoinverse $((A^P)^\mathsf{T} A^P)^{-1} (A^P)^\mathsf{T}$. Variants of this algorithm are available in MATLAB as the routine lsqnonneg and in SciPy as optimize.nnls.
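
The pseudocode above translates fairly directly into NumPy. The following is an illustrative sketch, not the lsqnonneg or optimize.nnls implementation; the function name, the iteration cap, and the small zero tolerance are choices made here for numerical robustness.

```python
import numpy as np

def lawson_hanson_nnls(A, y, eps=1e-10, max_iter=None):
    """Active-set NNLS following the pseudocode above (illustrative sketch)."""
    m, n = A.shape
    if max_iter is None:
        max_iter = 3 * n               # safeguard, not part of the pseudocode
    P = np.zeros(n, dtype=bool)        # True = index in P, False = index in R
    x = np.zeros(n)
    w = A.T @ (y - A @ x)
    it = 0
    # Main loop: while R is non-empty and max(w^R) > eps.
    while (~P).any() and w[~P].max() > eps and it < max_iter:
        # j = index in R attaining max(w^R); move it from R to P.
        j = np.argmax(np.where(~P, w, -np.inf))
        P[j] = True
        # s^P solves the unconstrained least squares problem on the columns in P.
        s = np.zeros(n)
        s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
        # Inner loop: back off toward the previous feasible x until s^P > 0.
        while P.any() and s[P].min() <= 0:
            mask = P & (s <= 0)
            alpha = np.min(x[mask] / (x[mask] - s[mask]))
            x = x + alpha * (s - x)
            P &= x > eps               # move indices with x_j ~ 0 back to R
            s = np.zeros(n)
            s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
        x = s
        w = A.T @ (y - A @ x)
        it += 1
    return x
```

On a well-conditioned problem such as the small example above, this should agree with optimize.nnls up to rounding.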

Many improved algorithms have been suggested since 1974. Fast NNLS (FNNLS) is an optimized version of the Lawson–Hanson algorithm. Other algorithms include variants of Landweber's gradient descent method and coordinate-wise optimization based on the quadratic programming problem above.
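
As one illustration of the gradient-based family, a projected-gradient (Landweber-type) iteration on the quadratic program above can be sketched as follows; the function name, step size, and iteration count are illustrative choices, not a reference implementation.

```python
import numpy as np

def nnls_projected_gradient(A, y, n_iter=5000):
    Q = A.T @ A
    c = -A.T @ y
    # Step size 1/L, with L = largest eigenvalue of Q (spectral norm).
    step = 1.0 / np.linalg.norm(Q, 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on 1/2 x^T Q x + c^T x, then projection onto x >= 0.
        x = np.maximum(0.0, x - step * (Q @ x + c))
    return x
```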