Set cover problem

The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory.

Given a set of elements $\{1, 2, \ldots, n\}$ (called the universe) and a collection $S$ of $m$ subsets whose union equals the universe, the set cover problem is to identify the smallest sub-collection of $S$ whose union equals the universe. For example, consider the universe $U = \{1, 2, 3, 4, 5\}$ and the collection of sets $S = \{ \{1, 2, 3\}, \{2, 4\}, \{3, 4\}, \{4, 5\} \}.$ Clearly the union of $S$ is $U$. However, we can cover all elements with only two sets: $\{ \{1, 2, 3\}, \{4, 5\} \},$ see picture. Therefore, the solution to the set cover problem has size 2.
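For a toy instance like this one, the optimum can be found by brute force. The following Python sketch (the helper name `min_set_cover` is ours, not standard) tries subfamilies in order of increasing size:

```python
from itertools import combinations

def min_set_cover(universe, sets):
    """Exhaustive search: try subfamilies of increasing size and return the
    first one whose union equals the universe.  Exponential time, so only
    suitable for very small instances."""
    for k in range(1, len(sets) + 1):
        for combo in combinations(sets, k):
            if set().union(*combo) == universe:
                return list(combo)
    return None  # the family does not cover the universe

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
cover = min_set_cover(U, S)  # a smallest cover, here of size 2
```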

More formally, given a universe $$\mathcal{U}$$ and a family $$\mathcal{S}$$ of subsets of $$\mathcal{U}$$, a set cover is a subfamily $$\mathcal{C}\subseteq\mathcal{S}$$ of sets whose union is $$\mathcal{U}$$.


 * In the set cover decision problem, the input is a pair $$(\mathcal{U},\mathcal{S})$$ and an integer $$k$$; the question is whether there is a set cover of size $$k$$ or less.
 * In the set cover optimization problem, the input is a pair $$(\mathcal{U},\mathcal{S})$$, and the task is to find a set cover that uses the fewest sets.

The decision version of set covering is NP-complete. It is one of Karp's 21 NP-complete problems shown to be NP-complete in 1972. The optimization/search version of set cover is NP-hard. It is a problem "whose study has led to the development of fundamental techniques for the entire field" of approximation algorithms.

Variants
In the weighted set cover problem, each set is assigned a positive weight (representing its cost), and the goal is to find a set cover of minimum total weight. The usual (unweighted) set cover corresponds to all sets having weight 1.

In the fractional set cover problem, it is allowed to select fractions of sets, rather than entire sets. A fractional set cover is an assignment of a fraction (a number in $[0,1]$) to each set in $$\mathcal{S}$$, such that for each element $x$ in the universe, the sum of fractions of sets that contain $x$ is at least 1. The goal is to find a fractional set cover in which the sum of fractions is as small as possible. Note that a (usual) set cover is equivalent to a fractional set cover in which all fractions are either 0 or 1; therefore, the size of the smallest fractional cover is at most the size of the smallest cover, but may be smaller. For example, consider the universe $U = \{1, 2, 3\}$ and the collection of sets $S = \{ \{1, 2\}, \{2, 3\}, \{3, 1\} \}.$ The smallest set cover has a size of 2, e.g. $\{ \{1, 2\}, \{2, 3\} \}.$ But there is a fractional set cover of size 1.5, in which a 0.5 fraction of each set is taken.
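The fractional example above can be verified directly, without any LP machinery; a minimal feasibility check in plain Python:

```python
U = {1, 2, 3}
S = [{1, 2}, {2, 3}, {3, 1}]
x = [0.5, 0.5, 0.5]  # fraction taken of each set

# Feasibility: for every element, the fractions of the sets
# containing it must sum to at least 1.
feasible = all(
    sum(x_s for x_s, s in zip(x, S) if e in s) >= 1
    for e in U
)
total = sum(x)  # cost of this fractional cover: 1.5
```

Each element lies in exactly two sets, so its covering fractions sum to 0.5 + 0.5 = 1, and the total cost 1.5 beats the best integral cover of size 2.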

Linear program formulation
The set cover problem can be formulated as the following integer linear program (ILP): minimize $$\sum_{s \in \mathcal S} x_s$$ (the number of sets in the cover), subject to $$\sum_{s \ni e} x_s \geq 1$$ for every element $$e \in \mathcal U$$ (every element must be covered), with $$x_s \in \{0,1\}$$ for every $$s \in \mathcal S$$ (each set is either in the cover or not). For a more compact representation of the covering constraint, one can define an incidence matrix $$A$$, where each row corresponds to an element and each column corresponds to a set, and $$A_{e,s}=1$$ if element $e$ is in set $s$, and $$A_{e,s}=0$$ otherwise. Then, the covering constraint can be written as $$A x \geqslant 1 $$.
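As a concrete illustration of the covering constraint $$Ax \geq 1$$, the incidence matrix for the running example can be built and checked in a few lines (plain Python lists stand in for the matrix):

```python
U = [1, 2, 3, 4, 5]
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]

# Incidence matrix: one row per element, one column per set.
A = [[1 if e in s else 0 for s in S] for e in U]

# x is the 0/1 indicator vector of the cover { {1,2,3}, {4,5} }.
x = [1, 0, 0, 1]

# A x >= 1 componentwise: every element is covered at least once.
Ax = [sum(a_es * x_s for a_es, x_s in zip(row, x)) for row in A]
covered = all(v >= 1 for v in Ax)
```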

Weighted set cover is described by a program identical to the one given above, except that the objective function to minimize is $$\sum_{s \in \mathcal S} w_s x_s$$, where $$w_{s}$$ is the weight of set $$s\in \mathcal{S}$$.

Fractional set cover is described by a program identical to the one given above, except that $$x_s$$ can be non-integer, so the last constraint is replaced by $$0 \leq x_s\leq 1$$.

This linear program belongs to the more general class of LPs for covering problems, as all the coefficients in the objective function and both sides of the constraints are non-negative. The integrality gap of the ILP is at most $$\log n$$ (where $$n$$ is the size of the universe). It has been shown that its relaxation indeed gives a factor-$$\log n$$ approximation algorithm for the minimum set cover problem. See randomized rounding for a detailed explanation.

Hitting set formulation
Set covering is equivalent to the hitting set problem. That is seen by observing that an instance of set covering can be viewed as an arbitrary bipartite graph, with the universe represented by vertices on the left, the sets represented by vertices on the right, and edges representing the membership of elements to sets. The task is then to find a minimum cardinality subset of left-vertices that has a non-trivial intersection with each of the right-vertices, which is precisely the hitting set problem.
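The equivalence amounts to swapping the two sides of the bipartite graph. A small sketch (the helper name `dualize` is ours):

```python
def dualize(universe, sets):
    """Swap the roles of elements and sets: the dual universe consists of
    the set indices, and each element e becomes the dual set of indices of
    the sets that contain e.  A hitting set for the dual instance is exactly
    a set cover for the original instance, and vice versa."""
    dual_universe = set(range(len(sets)))
    dual_sets = [{i for i, s in enumerate(sets) if e in s} for e in universe]
    return dual_universe, dual_sets

U = [1, 2, 3, 4, 5]
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
dU, dS = dualize(U, S)

# Picking indices {0, 3}, i.e. the cover { {1,2,3}, {4,5} },
# intersects ("hits") every dual set.
hits = all({0, 3} & d for d in dS)
```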

In the field of computational geometry, a hitting set for a collection of geometrical objects is also called a stabbing set or piercing set.

Greedy algorithm
There is a greedy algorithm for polynomial time approximation of set covering that chooses sets according to one rule: at each stage, choose the set that contains the largest number of uncovered elements. This method can be implemented in time linear in the sum of sizes of the input sets, using a bucket queue to prioritize the sets. It achieves an approximation ratio of $$H(s)$$, where $$s$$ is the size of the largest set. In other words, it finds a covering that may be $$H(n)$$ times as large as the minimum one, where $$H(n)$$ is the $$n$$-th harmonic number: $$ H(n) = \sum_{k=1}^{n} \frac{1}{k} \le \ln{n} +1$$
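The rule translates directly into code. A straightforward version (without the bucket-queue optimization, so it runs in time proportional to the number of sets times the number of iterations rather than linear time):

```python
def greedy_set_cover(universe, sets):
    """At each step, take the set covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
cover = greedy_set_cover(U, S)  # picks {1,2,3} first, then {4,5}
```

On this instance the greedy choice happens to be optimal (two sets); the example in the next subsection shows it is not always so.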

This greedy algorithm actually achieves an approximation ratio of $$H(s^\prime)$$, where $$s^\prime$$ is the cardinality of the largest set in $$S$$. For $$\delta$$-dense instances, however, there exists a $$c \ln{m}$$-approximation algorithm for every $$c > 0$$.



There is a standard example on which the greedy algorithm achieves an approximation ratio of $$\log_2(n)/2$$. The universe consists of $$n=2^{(k+1)}-2$$ elements. The set system consists of $$k$$ pairwise disjoint sets $$S_1,\ldots,S_k$$ with sizes $$2,4,8,\ldots,2^k$$ respectively, as well as two additional disjoint sets $$T_0,T_1$$, each of which contains half of the elements from each $$S_i$$. On this input, the greedy algorithm takes the sets $$S_k,\ldots,S_1$$, in that order, while the optimal solution consists only of $$T_0$$ and $$T_1$$. An example of such an input for $$k=3$$ is pictured on the right.
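This family is easy to generate programmatically. The sketch below builds the instance and runs the greedy rule, which takes $$k$$ sets where two suffice:

```python
def hard_instance(k):
    """Universe of n = 2**(k+1) - 2 elements; disjoint sets S_1..S_k of
    sizes 2, 4, ..., 2**k; and T_0, T_1, each taking half of every S_i."""
    family, T0, T1, nxt = [], set(), set(), 0
    for i in range(1, k + 1):
        block = list(range(nxt, nxt + 2 ** i))
        nxt += 2 ** i
        family.append(set(block))
        T0 |= set(block[: len(block) // 2])
        T1 |= set(block[len(block) // 2 :])
    return set(range(nxt)), family + [T0, T1]

def greedy(universe, sets):
    uncovered, cover = set(universe), []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        cover.append(best)
        uncovered -= best
    return cover

U, fam = hard_instance(3)   # n = 2**4 - 2 = 14 elements
picked = greedy(U, fam)     # greedy takes S_3, S_2, S_1: three sets
# The optimum is {T_0, T_1}: only two sets.
```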

Under plausible complexity assumptions, inapproximability results show that the greedy algorithm is essentially the best possible polynomial-time approximation algorithm for set cover, up to lower-order terms (see the Inapproximability results section below). A tighter analysis shows that the greedy algorithm's approximation ratio is exactly $$\ln{n} - \ln{\ln{n}} + \Theta(1)$$.

Low-frequency systems
If each element occurs in at most f sets, then a solution can be found in polynomial time that approximates the optimum to within a factor of f using LP relaxation.

If the constraint $$x_s\in\{0,1\}$$ is replaced by $$x_s \geq 0$$ for all $$s$$ in $$\mathcal{S}$$ in the integer linear program shown above, then it becomes a (non-integer) linear program $$L$$. The algorithm can be described as follows:
 * 1) Find an optimal solution $$O$$ for the program $$L$$ using some polynomial-time method of solving linear programs.
 * 2) Pick all sets $$s$$ for which the corresponding variable $$x_s$$ has value at least $$1/f$$ in the solution $$O$$.
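Step 2 is a deterministic rounding: each element lies in at most $$f$$ sets whose fractions sum to at least 1, so at least one of those fractions is at least $$1/f$$, and the chosen sets therefore form a cover of cost at most $$f$$ times the LP optimum. A sketch, assuming the optimal fractional solution $$x$$ has already been computed by some LP solver:

```python
def round_fractional(sets, x, f):
    """Keep every set whose LP value is at least 1/f.  Since every element
    is in at most f sets and its covering fractions sum to >= 1, some set
    containing it has value >= 1/f, so the result is a valid cover."""
    return [s for s, x_s in zip(sets, x) if x_s >= 1 / f]

# Triangle instance: every element occurs in exactly f = 2 sets.
S = [{1, 2}, {2, 3}, {3, 1}]
x = [0.5, 0.5, 0.5]  # an optimal fractional solution (assumed given)
cover = round_fractional(S, x, f=2)  # keeps all three sets; a valid cover
```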

Inapproximability results
When $$n$$ refers to the size of the universe, Lund & Yannakakis (1994) showed that set covering cannot be approximated in polynomial time to within a factor of $$\tfrac{1}{2}\log_2{n} \approx 0.72\ln{n}$$, unless NP has quasi-polynomial time algorithms. Feige (1998) improved this lower bound to $$\bigl(1-o(1)\bigr)\cdot\ln{n}$$ under the same assumptions, which essentially matches the approximation ratio achieved by the greedy algorithm. Raz & Safra (1997) established a lower bound of $$c\cdot\ln{n}$$, where $$c$$ is a certain constant, under the weaker assumption that P$$\not=$$NP. A similar result with a higher value of $$c$$ was later proved by Alon, Moshkovitz & Safra (2006). Dinur & Steurer (2014) showed optimal inapproximability by proving that it cannot be approximated to $$\bigl(1 - o(1)\bigr) \cdot \ln{n}$$ unless P$$=$$NP.

In low-frequency systems, it is NP-hard to approximate set cover to a factor better than $$f-1-\epsilon$$. If the unique games conjecture is true, this bound improves to $$f-\epsilon$$.

Set cover instances with sets of size at most $$\Delta$$ cannot be approximated to a factor better than $$\ln \Delta - O(\ln \ln \Delta)$$ unless P$$=$$NP, making the approximation ratio of $$\ln \Delta + 1$$ achieved by the greedy algorithm essentially tight in this case.

Weighted set cover
Relaxing the integer linear program for weighted set cover stated above, one may use randomized rounding to get an $$O(\log n)$$-factor approximation. The greedy algorithm for unweighted set cover can also be adapted to the weighted case.
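Concretely, the adapted greedy rule picks, at each step, the set minimizing its weight divided by the number of elements it would newly cover; this is Chvátal's rule, and it retains a logarithmic approximation guarantee. A sketch:

```python
def weighted_greedy(universe, sets, weights):
    """Greedy for weighted set cover: repeatedly take the set with the
    smallest "price", i.e. weight / (number of newly covered elements)."""
    uncovered, chosen = set(universe), []
    while uncovered:
        i = min(
            (i for i, s in enumerate(sets) if uncovered & s),
            key=lambda i: weights[i] / len(uncovered & sets[i]),
        )
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
w = [5.0, 1.0, 1.0, 1.0]
idx = weighted_greedy(U, S, w)  # indices of the chosen sets
```

With uniform weights this reduces to the unweighted greedy rule, since the price is then simply the reciprocal of the number of newly covered elements.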

Related problems

 * Hitting set is an equivalent reformulation of Set Cover.
 * Vertex cover is a special case of Hitting Set.
 * Edge cover is a special case of Set Cover.
 * Geometric set cover is a special case of Set Cover when the universe is a set of points in $$\mathbb{R}^d$$ and the sets are induced by the intersection of the universe and geometric shapes (e.g., disks, rectangles).
 * Set packing
 * Maximum coverage problem is to choose at most k sets to cover as many elements as possible.
 * Dominating set is the problem of selecting a set of vertices (the dominating set) in a graph such that all other vertices are adjacent to at least one vertex in the dominating set. The dominating set problem was shown to be NP-complete through a reduction from set cover.
 * Exact cover problem is to choose a set cover with no element included in more than one covering set.
 * Red-blue set cover.
 * Set-cover abduction.
 * Monotone dualization is a computational problem equivalent to either listing all minimal hitting sets or listing all minimal set covers of a given set family.