LP-type problem

In the study of algorithms, an LP-type problem (also called a generalized linear program) is an optimization problem that shares certain properties with low-dimensional linear programs and that may be solved by similar algorithms. LP-type problems include many important optimization problems that are not themselves linear programs, such as the problem of finding the smallest circle containing a given set of planar points. They may be solved by a combination of randomized algorithms in an amount of time that is linear in the number of elements defining the problem, and subexponential in the dimension of the problem.

Definition
LP-type problems were defined by Sharir and Welzl as problems in which one is given as input a finite set $S$ of elements, and a function $f$ that maps subsets of $S$ to values from a totally ordered set. The function is required to satisfy two key properties:
 * Monotonicity: for every two sets $A ⊆ B ⊆ S$, $f(A) ≤ f(B) ≤ f(S)$.
 * Locality: for every two sets $A ⊆ B ⊆ S$ and every element $x$ in $S$, if $f(A) = f(B) = f(A ∪ {x})$, then $f(A) = f(B ∪ {x})$.

A basis of an LP-type problem is a set $B ⊆ S$ with the property that every proper subset of $B$ has a smaller value of $f$ than $B$ itself, and the dimension (or combinatorial dimension) of an LP-type problem is defined to be the maximum cardinality of a basis.

It is assumed that an optimization algorithm may evaluate the function $f$ only on sets that are themselves bases or that are formed by adding a single element to a basis. Alternatively, the algorithm may be restricted to two primitive operations: a violation test that determines, for a basis $B$ and an element $x$, whether $f(B) = f(B ∪ {x})$, and a basis computation that (with the same inputs) finds a basis of $B ∪ {x}$. The task for the algorithm to perform is to evaluate $f(S)$ by only using these restricted evaluations or primitives.
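
These two primitives can be sketched concretely. The following Python fragment is an illustrative addition, not part of the original formulation: it uses the toy LP-type problem in which $f(A)$ is the maximum of a finite set of numbers (equivalently, the one-variable linear program of minimizing $x$ subject to $x ≥ a$ for each $a$ in $A$), which has combinatorial dimension one.

```python
def f(A):
    # objective of a toy LP-type problem: the maximum of a set of numbers;
    # the empty set gets the smallest possible value
    return max(A, default=float('-inf'))

def violation_test(B, x):
    # does adding the element x to the basis B change the objective value?
    return f(B | {x}) != f(B)

def basis_computation(B, x):
    # a basis of B ∪ {x}: a minimal subset with the same objective value
    candidate = set(B) | {x}
    for y in list(candidate):
        if f(candidate - {y}) == f(candidate):
            candidate.discard(y)
    return candidate
```

Here the basis of any nonempty set is the singleton containing its maximum, so the combinatorial dimension is one, as claimed.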

Examples and applications
A linear program may be defined by a system of $d$ non-negative real variables, subject to $n$ linear inequality constraints, together with a non-negative linear objective function to be minimized. This may be placed into the framework of LP-type problems by letting $S$ be the set of constraints, and defining $f(A)$ (for a subset $A$ of the constraints) to be the minimum objective function value of the smaller linear program defined by $A$. With suitable general position assumptions (in order to prevent multiple solution points having the same optimal objective function value), this satisfies the monotonicity and locality requirements of an LP-type problem, and has combinatorial dimension equal to the number $d$ of variables. Similarly, an integer program (consisting of a collection of linear constraints and a linear objective function, as in a linear program, but with the additional restriction that the variables must take on only integer values) satisfies both the monotonicity and locality properties of an LP-type problem, with the same general position assumptions as for linear programs. For an integer program with $d$ variables, the combinatorial dimension can be shown to be at most $2^{d}$.

Many natural optimization problems in computational geometry are LP-type:
 * The smallest circle problem is the problem of finding the minimum radius of a circle containing a given set of $n$ points in the plane. It satisfies monotonicity (adding more points can only make the circle larger) and locality (if the smallest circle for set $A$ contains $B$ and $x$, then the same circle also contains $B ∪ {x}$). Because the smallest circle is always determined by some three points, the smallest circle problem has combinatorial dimension three, even though it is defined using two-dimensional Euclidean geometry. More generally, the smallest enclosing ball of points in $d$ dimensions forms an LP-type problem of combinatorial dimension $d + 1$. The smallest circle problem can be generalized to the smallest ball enclosing a set of balls, to the smallest ball that touches or surrounds each of a set of balls, to the weighted 1-center problem, or to similar smaller enclosing ball problems in non-Euclidean spaces such as the space with distances defined by Bregman divergence. The related problem of finding the smallest enclosing ellipsoid is also an LP-type problem, but with a larger combinatorial dimension, $d(d + 3)/2$.
 * Let $K_{0}, K_{1}, ...$ be a sequence of $n$ convex sets in $d$-dimensional Euclidean space, and suppose that we wish to find the longest prefix of this sequence that has a common intersection point. This may be expressed as an LP-type problem in which $f(A) = -i$, where $K_{i}$ is the first member of $A$ that does not belong to an intersecting prefix of $A$, and where $f(A) = -n$ if there is no such member. The combinatorial dimension of this system is $d + 1$.
 * Suppose we are given a collection of axis-aligned rectangular boxes in three-dimensional space, and wish to find a line directed into the positive octant of space that cuts through all the boxes. This may be expressed as an LP-type problem with combinatorial dimension 4.
 * The problem of finding the closest distance between two convex polytopes, specified by their sets of vertices, may be represented as an LP-type problem. In this formulation, the set $S$ is the set of all vertices in both polytopes, and the function value $f(A)$ is the negation of the smallest distance between the convex hulls of the two subsets $A$ of vertices in the two polytopes. The combinatorial dimension of the problem is $d + 1$ if the two polytopes are disjoint, or $d + 2$ if they have a nonempty intersection.
 * Let $S = {f_{0}, f_{1}, ...}$ be a set of quasiconvex functions. Then the pointwise maximum $max_{i} f_{i}$ is itself quasiconvex, and the problem of finding the minimum value of $max_{i} f_{i}$ is an LP-type problem. It has combinatorial dimension at most $2d + 1$, where $d$ is the dimension of the domain of the functions, but for sufficiently smooth functions the combinatorial dimension is smaller, at most $d + 1$. Many other LP-type problems can also be expressed using quasiconvex functions in this way; for instance, the smallest enclosing circle problem is the problem of minimizing $max_{i} f_{i}$ where each of the functions $f_{i}$ measures the Euclidean distance from one of the given points.
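
As a toy instance of this quasiconvex formulation (an illustrative addition, not taken from the sources above), consider points $p_{i}$ on a line with $f_{i}(t) = |t - p_{i}|$. Minimizing the pointwise maximum of these functions is the one-dimensional smallest enclosing ball problem, whose bases are the pairs consisting of a minimum and a maximum point, matching the combinatorial dimension $d + 1 = 2$:

```python
def one_center_value(A):
    # f(A): the minimum over t of max_i |t - p_i| for p_i in A; the
    # optimal t is the midpoint of the two extreme points, so the value
    # is half the spread of A
    return (max(A) - min(A)) / 2

points = [3.0, -1.0, 7.0, 2.0]
# the optimum is determined entirely by the basis {min, max} = {-1.0, 7.0}
assert one_center_value([-1.0, 7.0]) == one_center_value(points)
# monotonicity: a subset never has a larger objective value
assert one_center_value([3.0, 2.0]) <= one_center_value(points)
```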

LP-type problems have also been used to determine the optimal outcomes of certain games in algorithmic game theory, improve vertex placement in finite element method meshes, solve facility location problems, analyze the time complexity of certain exponential-time search algorithms, and reconstruct the three-dimensional positions of objects from their two-dimensional images.

Seidel
Seidel gave an algorithm for low-dimensional linear programming that may be adapted to the LP-type problem framework. Seidel's algorithm takes as input the set $S$ and a separate set $X$ (initially empty) of elements known to belong to the optimal basis. It then considers the remaining elements one-by-one in a random order, performing violation tests for each one and, depending on the result, performing a recursive call to the same algorithm with a larger set of known basis elements. It may be expressed with the following pseudocode:

function seidel(S, f, X) is
    R := empty set
    B := X
    for x in a random permutation of S:
        if f(B) ≠ f(B ∪ {x}):
            B := seidel(R, f, X ∪ {x})
        R := R ∪ {x}
    return B
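
For concreteness, the pseudocode can be instantiated for the smallest circle problem. In the following Python sketch the circle-geometry helpers and the tolerance in the violation test are illustrative additions, and the points are assumed to be in general position (no four cocircular points):

```python
import math, random

def circle_of(B):
    # circle determined by a basis of at most three points, returned as
    # (center_x, center_y, radius); the empty basis gets a sentinel circle
    if not B:
        return (0.0, 0.0, 0.0)
    if len(B) == 1:
        return (B[0][0], B[0][1], 0.0)
    if len(B) == 2:
        (ax, ay), (bx, by) = B
        return ((ax + bx) / 2, (ay + by) / 2, math.dist(B[0], B[1]) / 2)
    (ax, ay), (bx, by), (cx, cy) = B        # circumcircle of three points
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), (ax, ay)))

def violates(x, B):
    # violation test: does point x lie outside the circle of basis B?
    cx, cy, r = circle_of(B)
    return math.dist(x, (cx, cy)) > r + 1e-9

def seidel(S, X=()):
    R, B = [], list(X)
    for x in random.sample(S, len(S)):      # a random permutation of S
        if violates(x, B):
            B = seidel(R, list(X) + [x])
        R.append(x)
    return B
```

For example, for the points $(1,0), (5,0), (3,1), (2,-1)$ the algorithm returns the diametral basis $(1,0), (5,0)$, whose circle has center $(3,0)$ and radius $2$.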

In a problem with combinatorial dimension $d$, the violation test in the $i$th iteration of the algorithm fails only when $x$ is one of the $d - |X|$ remaining basis elements, which happens with probability at most $(d - |X|)/i$. Based on this calculation, it can be shown that overall the expected number of violation tests performed by the algorithm is $O(d! n)$, linear in $n$ but worse than exponential in $d$.

Clarkson
Clarkson defines two algorithms, a recursive algorithm and an iterative algorithm, for linear programming based on random sampling techniques, and suggests a combination of the two that calls the iterative algorithm from the recursive algorithm. The recursive algorithm repeatedly chooses random samples whose size is approximately the square root of the input size, solves the sampled problem recursively, and then uses violation tests to find a subset of the remaining elements that must include at least one basis element:

function recursive(S, f) is
    X := empty set
    repeat
        R := a random subset of S with size d√n
        B := basis for R ∪ X, computed recursively
        V := {x | f(B) ≠ f(B ∪ {x})}
        X := X ∪ V
    until V is empty
    return B
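
A runnable sketch of this recursive scheme, again using the toy problem $f(A) = max(A)$ of combinatorial dimension $d = 1$; the base-case threshold and the exact sample size are illustrative choices:

```python
import math, random

def f(A):
    return max(A, default=float('-inf'))

def basis_of(A):
    # direct basis computation for small base cases; for f = max the
    # basis is just the largest element
    return {max(A)} if A else set()

def clarkson_recursive(S, d=1, base_size=9):
    S = list(S)
    if len(S) <= base_size:
        return basis_of(S)
    X = set()
    while True:
        R = set(random.sample(S, int(d * math.sqrt(len(S)))))
        B = clarkson_recursive(R | X, d, base_size)
        # violation tests against the basis of the sample
        V = {x for x in S if f(B | {x}) != f(B)}
        if not V:
            return B
        # V must contain at least one element of the eventual basis of S
        X |= V
```

On this toy problem the loop runs at most twice per level: as soon as $V$ is nonempty it contains the true maximum, which then enters $X$ and appears in every later basis.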

In each iteration, the expected size of $V$ is $O(\sqrt{n})$, and whenever $V$ is nonempty it includes at least one new element of the eventual basis of $S$. Therefore, the algorithm performs at most $d$ iterations, each of which performs $n$ violation tests and makes a single recursive call to a subproblem of size $O(d\sqrt{n})$.

Clarkson's iterative algorithm assigns weights to each element of $S$, initially all of them equal. It then chooses a set $R$ of $9d^{2}$ elements from $S$ at random, and computes the sets $B$ and $V$ as in the previous algorithm. If the total weight of $V$ is at most $2/(9d - 1)$ times the total weight of $S$ (as happens with constant probability) then the algorithm doubles the weights of every element of $V$, and as before it repeats this process until $V$ becomes empty. In each iteration, the weight of the optimal basis can be shown to increase at a greater rate than the total weight of $S$, from which it follows that the algorithm must terminate within $O(log n)$ iterations.
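
A sketch of the iterative algorithm on the same toy problem $f(A) = max(A)$; for simplicity the weighted sample here is drawn with replacement, a simplification of Clarkson's scheme:

```python
import random

def f(A):
    return max(A, default=float('-inf'))

def basis_of(A):
    return {max(A)} if A else set()

def clarkson_iterative(S, d=1):
    S = list(S)
    w = {x: 1 for x in S}                  # initially equal weights
    r = 9 * d * d
    while True:
        # draw ~9d^2 elements with probability proportional to weight
        # (with replacement, a simplification for illustration)
        R = set(random.choices(S, weights=[w[x] for x in S], k=r))
        B = basis_of(R)
        V = [x for x in S if f(B | {x}) != f(B)]
        if sum(w[x] for x in V) <= 2 * sum(w.values()) / (9 * d - 1):
            if not V:
                return B
            for x in V:                    # reweight the violators
                w[x] *= 2
```

Whenever $V$ is nonempty it contains the true maximum, so the maximum's weight doubles on every successful round and it is soon sampled, after which $V$ is empty and the basis is returned.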

By using the recursive algorithm to solve a given problem, switching to the iterative algorithm for its recursive calls, and then switching again to Seidel's algorithm for the calls made by the iterative algorithm, it is possible to solve a given LP-type problem using $O(dn + d! d^{O(1)} log n)$ violation tests.

When applied to a linear program, this algorithm can be interpreted as being a dual simplex method. With certain additional computational primitives beyond the violation test and basis computation primitives, this method can be made deterministic.

Matoušek, Sharir, and Welzl
Matoušek, Sharir, and Welzl describe an algorithm that uses an additional property of linear programs that is not always held by other LP-type problems, that all bases have the same cardinality as each other. If an LP-type problem does not have this property, it can be made to have it by adding $d$ new dummy elements and by modifying the function $f$ to return the ordered pair of its old value $f(A)$ and of the number $min(d,|A|)$, ordered lexicographically.

Rather than adding elements of $S$ one at a time, or finding samples of the elements, their algorithm removes elements one at a time. At each step it maintains a basis $C$ that may initially be the set of dummy elements. It may be described with the following pseudocode:

function msw(S, f, C) is
    if S = C then
        return C
    choose a random element x of S \ C
    B := msw(S \ {x}, f, C)
    if f(B) ≠ f(B ∪ {x}) then
        B := basis(B ∪ {x})
        B := msw(S, f, B)
    return B
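
On the toy problem $f(A) = max(A)$ the algorithm can be sketched compactly: the basis computation basis(B ∪ {x}) reduces to {x}, because a violating element is always the new maximum, and a single dummy element plays the role of the initial basis. This is an illustrative reduction, not the general algorithm:

```python
import random

DUMMY = float('-inf')        # dummy element forming the initial basis

def f(A):
    return max(A)

def msw(S, C):
    # S and C are frozensets with C ⊆ S; returns a basis of S
    if S == C:
        return C
    x = random.choice(sorted(S - C))
    B = msw(S - {x}, C)
    if f(B | {x}) != f(B):                 # violation test
        # basis(B ∪ {x}) = {x} for f = max; solve again with the new basis
        B = msw(S, frozenset({x}))
    return B
```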

In most of the recursive calls of the algorithm, the violation test finds no violation and the if statement is skipped. However, with a small probability the violation test does find a violation, and the algorithm makes an additional basis computation and then an additional recursive call. As the authors show, the expected time for the algorithm is linear in $n$ and exponential in the square root of $d log n$. By combining this method with Clarkson's recursive and iterative procedures, these two forms of time dependence can be separated out from each other, resulting in an algorithm that performs $O(dn)$ violation tests in the outer recursive algorithm and a number that is exponential in the square root of $d log d$ in the lower levels of the algorithm.

Optimization with outliers
Matoušek considers a variation of LP-type optimization problems in which one is given, together with the set $S$ and the objective function $f$, a number $k$; the task is to remove $k$ elements from $S$ in order to make the objective function on the remaining set as small as possible. For instance, when applied to the smallest circle problem, this would give the smallest circle that contains all but $k$ of a given set of planar points. He shows that, for all non-degenerate LP-type problems (that is, problems in which all bases have distinct values) this problem may be solved in time $O(nk^{d})$, by solving a set of $O(k^{d})$ LP-type problems defined by subsets of $S$.
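
For intuition only, the outlier variant can also be solved by brute force over all ways of discarding $k$ elements, which is exponential in $k$ rather than matching the $O(nk^{d})$ bound above. With the toy objective $f(A) = max(A)$, discarding the $k$ largest elements is optimal:

```python
from itertools import combinations

def f(A):
    # toy objective: the maximum of a set of numbers
    return max(A, default=float('-inf'))

def min_with_outliers(S, k):
    # try every way of discarding k elements and keep the best value;
    # exponential in k, for illustration only
    return min(f(set(S) - set(out)) for out in combinations(S, k))
```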

Implicit problems
Some geometric optimization problems may be expressed as LP-type problems in which the number of elements in the LP-type formulation is significantly greater than the number of input data values for the optimization problem. As an example, consider a collection of $n$ points in the plane, each moving with constant velocity. At any point in time, the diameter of this system is the maximum distance between two of its points. The problem of finding a time at which the diameter is minimized can be formulated as minimizing the pointwise maximum of $O(n^{2})$ quasiconvex functions, one for each pair of points, measuring the Euclidean distance between the pair as a function of time. Thus, it can be solved as an LP-type problem of combinatorial dimension two on a set of $O(n^{2})$ elements, but this set is significantly larger than the number of input points.
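
To make the implicit formulation concrete, the following sketch builds the $O(n^{2})$ pairwise distance functions explicitly. Because each pairwise distance is a convex function of $t$, the pointwise maximum is convex, and for illustration it is minimized here by ternary search rather than by an LP-type algorithm:

```python
import math
from itertools import combinations

def pair_distance(p, q, t):
    # p, q are (x0, y0, vx, vy): the position at time t is (x0+vx*t, y0+vy*t)
    (px, py, pvx, pvy), (qx, qy, qvx, qvy) = p, q
    return math.hypot((px + pvx * t) - (qx + qvx * t),
                      (py + pvy * t) - (qy + qvy * t))

def diameter(points, t):
    # pointwise maximum of the O(n^2) pairwise distance functions
    return max(pair_distance(p, q, t) for p, q in combinations(points, 2))

def argmin_convex(g, lo, hi, iters=200):
    # ternary search for the minimum of a convex function on [lo, hi]
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

For two points moving directly toward each other, the diameter $|10 - 2t|$ is minimized at $t = 5$, where the points coincide.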

Chan describes an algorithm for solving implicitly defined LP-type problems such as this one, in which each LP-type element is determined by a $k$-tuple of input values, for some constant $k$. In order to apply his approach, there must exist a decision algorithm that can determine, for a given LP-type basis $B$ and set $S$ of $n$ input values, whether $B$ is a basis for the LP-type problem determined by $S$.

Chan's algorithm performs the following steps:
 * If the number of input values is below some threshold value, find the set of LP-type elements that it determines and solve the resulting explicit LP-type problem.
 * Otherwise, partition the input values into a suitable number (larger than the combinatorial dimension of the problem) of equal-sized subsets $S_{i}$.
 * If $g$ is the objective function for the implicitly defined LP-type problem to be solved, then define a function $f$ that maps collections of subsets $S_{i}$ to the value of $g$ on the union of the collection. Then the collection of subsets $S_{i}$ and the objective function $f$ itself defines an LP-type problem, of the same dimension as the implicit problem to be solved.
 * Solve the (explicit) LP-type problem defined by $f$ using Clarkson's algorithm, which performs a linear number of violation tests and a polylogarithmic number of basis evaluations. The basis evaluations for $f$ may be performed by recursive calls to Chan's algorithm, and the violation tests may be performed by calls to the decision algorithm.
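
The wrapper objective in the third step above can be sketched generically; here the function names and the toy objective $g(A) = max(A)$ standing in for the implicit objective are illustrative assumptions:

```python
def make_wrapper(g, subsets):
    # f maps a collection of subset indices to g evaluated on the union
    # of those subsets, giving an explicit LP-type problem whose elements
    # are the subsets themselves
    def f(indices):
        union = set()
        for i in indices:
            union |= subsets[i]
        return g(union)
    return f

g = lambda A: max(A, default=float('-inf'))
subsets = [{1, 2}, {9, 3}, {5}]
f = make_wrapper(g, subsets)
```

For example, $f$ applied to the collection of the first and third subsets evaluates $g$ on their union $\{1, 2, 5\}$.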

With the assumption that the decision algorithm takes an amount of time $T(n)$ that grows at least polynomially as a function of the input size $n$, Chan shows that the threshold for switching to an explicit LP formulation and the number of subsets in the partition can be chosen in such a way that the implicit LP-type optimization algorithm also runs in time $O(T(n))$.

For instance, for the minimum diameter of moving points, the decision algorithm needs only to calculate the diameter of a set of points at a fixed time, a problem that can be solved in $O(n log n)$ time using the rotating calipers technique. Therefore, Chan's algorithm for finding the time at which the diameter is minimized also takes time $O(n log n)$. Chan uses this method to find a point of maximal Tukey depth among a given collection of $n$ points in $d$-dimensional Euclidean space, in time $O(n^{d - 1} + n log n)$. A similar technique has been used to find a point of maximal Tukey depth for the uniform distribution on a convex polygon.

History and related problems
The discovery of linear time algorithms for linear programming and the observation that the same algorithms could in many cases be used to solve geometric optimization problems that were not linear programs goes back at least to Megiddo, who gave a linear expected time algorithm for both three-variable linear programs and the smallest circle problem. However, Megiddo formulated the generalization of linear programming geometrically rather than combinatorially, as a convex optimization problem rather than as an abstract problem on systems of sets. Similarly, Dyer and Clarkson (in the 1988 conference version of his paper) observed that their methods could be applied to convex programs as well as linear programs. It was later shown that the minimum enclosing ellipsoid problem could also be formulated as a convex optimization problem by adding a small number of non-linear constraints. The use of randomization to improve the time bounds for low dimensional linear programming and related problems was pioneered by Clarkson and others.

The definition of LP-type problems in terms of functions satisfying the axioms of locality and monotonicity is due to Sharir and Welzl, but other authors in the same timeframe formulated alternative combinatorial generalizations of linear programs. For instance, in a framework developed by Gärtner, the function $f$ is replaced by a total ordering on the subsets of $S$. It is possible to break the ties in an LP-type problem to create a total order, but only at the expense of an increase in the combinatorial dimension. Additionally, as in LP-type problems, Gärtner defines certain primitives for performing computations on subsets of elements; however, his formalization does not have an analogue of the combinatorial dimension.

Another abstract generalization of both linear programs and linear complementarity problems, studied by several authors, concerns orientations of the edges of a hypercube with the property that every face of the hypercube (including the whole hypercube as a face) has a unique sink, a vertex with no outgoing edges. An orientation of this type may be formed from an LP-type problem by associating the subsets of $S$ with the vertices of a hypercube in such a way that two subsets differ by a single element if and only if the corresponding vertices are adjacent, and by orienting the edge between neighboring sets $A ⊆ B$ towards $A$ if $f(A) = f(B)$ and towards $B$ otherwise. The resulting orientation has the additional property that it forms a directed acyclic graph, from which it can be shown that a randomized algorithm can find the unique sink of the whole hypercube (the optimal basis of the LP-type problem) in a number of steps exponential in the square root of $n$.

The more recently developed framework of violator spaces generalizes LP-type problems, in the sense that every LP-type problem can be modeled by a violator space but not necessarily vice versa. Violator spaces are defined similarly to LP-type problems, by a function $f$ that maps sets to objective function values, but the values of $f$ are not ordered. Despite the lack of ordering, every set has a well-defined set of bases (the minimal sets with the same value as the whole set) that can be found by variations of Clarkson's algorithms for LP-type problems. Indeed, violator spaces have been shown to exactly characterize the systems that can be solved by Clarkson's algorithms.