Monotone dualization

In theoretical computer science, monotone dualization is a computational problem of constructing the dual of a monotone Boolean function. Equivalent problems can also be formulated as constructing the transversal hypergraph of a given hypergraph, of listing all minimal hitting sets of a family of sets, or of listing all minimal set covers of a family of sets. These problems can be solved in quasi-polynomial time in the combined size of the input and output, but whether they can be solved in polynomial time is an open problem.

Definitions
A Boolean function takes as input an assignment of truth values to its arguments, and produces as output another truth value. It is monotone when changing an argument from false to true cannot change the output from true to false. Every monotone Boolean function can be expressed as a Boolean expression using only logical disjunction ("or") and logical conjunction ("and"), without using logical negation ("not"). Such an expression is called a monotone Boolean expression. Every monotone Boolean expression describes a monotone Boolean function.
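The monotonicity condition can be checked by brute force on a small example (a hypothetical Python sketch; the function `f` below is an arbitrary example, not from the source):

```python
from itertools import product

# Example monotone function: f(x, y, z) = (x and y) or z,
# expressible with only "and" and "or", without "not".
def f(x, y, z):
    return (x and y) or z

def is_monotone(fn, arity):
    """Check that flipping any single argument from False to True
    never changes the output from True to False."""
    for args in product([False, True], repeat=arity):
        for i in range(arity):
            if not args[i]:
                flipped = args[:i] + (True,) + args[i + 1:]
                if fn(*args) and not fn(*flipped):
                    return False
    return True

print(is_monotone(f, 3))                 # → True
print(is_monotone(lambda x: not x, 1))   # → False (negation is not monotone)
```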

There may be many different expressions for the same function. Among them are two special expressions, the conjunctive normal form and disjunctive normal form. For monotone functions these two special forms can also be restricted to be monotone:
 * The conjunctive normal form of a monotone function expresses the function as a conjunction ("and") of clauses, each of which is a disjunction ("or") of some of the variables. A clause may appear in the conjunctive normal form if it is true whenever the overall function is true; in this case it is called an implicate, because the truth of the function implies the truth of the clause. This expression may be made canonical by restricting it to use only prime implicates, the implicates that use a minimal set of variables. The conjunctive normal form using only prime implicates is called the prime CNF.
 * The disjunctive normal form of a monotone function expresses the function as a disjunction ("or") of clauses, each of which is a conjunction ("and") of variables. A conjunction may appear in the disjunctive normal form if it is false whenever the overall function is false; in this case, it is called an implicant, because its truth implies the truth of the function. This expression may be made canonical by restricting it to use only prime implicants, the implicants that use a minimal set of variables. The disjunctive normal form using only prime implicants is called the prime DNF.

The dual of a Boolean function is obtained by negating all of its variables, applying the function, and then negating the result. The dual of the dual of any Boolean function is the original function. The dual of a monotone function is monotone. If one is given a monotone Boolean expression, then replacing all conjunctions by disjunctions produces another monotone Boolean expression for the dual function, following De Morgan's laws. However, this will transform the conjunctive normal form into disjunctive normal form, and vice versa, which may be undesired. Monotone dualization is the problem of finding an expression for the dual function without changing the form of the expression, or equivalently of converting a function in one normal form into the dual form.
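The definition of the dual, and the De Morgan swap of conjunctions and disjunctions, can be verified on a small example (a sketch; `f` is an arbitrary monotone function chosen for illustration):

```python
from itertools import product

def dual(fn, arity):
    """The dual of fn: negate all inputs, apply fn, negate the result."""
    return lambda *args: not fn(*(not a for a in args))

# f = (x and y) or z; swapping "and" with "or" gives (x or y) and z,
# which by De Morgan's laws should be an expression for the dual.
f = lambda x, y, z: (x and y) or z
f_swapped = lambda x, y, z: (x or y) and z

g = dual(f, 3)
assert all(g(*a) == f_swapped(*a) for a in product([False, True], repeat=3))
# The dual of the dual is the original function.
gg = dual(g, 3)
assert all(gg(*a) == f(*a) for a in product([False, True], repeat=3))
```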

As a functional problem, monotone dualization can be expressed in the following equivalent ways:
 * Given a (prime) CNF expression, construct a (prime) CNF expression for the dual function.
 * Convert the (prime) CNF expression of a function into the (prime) DNF expression for the same function, or vice versa.
 * Construct the transversal hypergraph of a given hypergraph. This is a hypergraph on the same vertex set that has a hyperedge for every minimal subset of vertices that touches all edges of the given hypergraph.
 * Given a family of sets, generate all minimal hitting sets of the family. These are sets of elements that include at least one element from each set, and have no proper subset with the same property. If the sets in the given family are interpreted as hyperedges in a hypergraph, their minimal hitting sets are the hyperedges of the transversal hypergraph.
 * Given a family of sets, generate all minimal set covers of the family. A set cover is a subfamily with the same union as the whole family. If the sets in the given family are interpreted as vertices in a hypergraph, with each element of the sets interpreted as a hyperedge incident to the sets containing that element, then the minimal set covers are the hyperedges of the transversal hypergraph.
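The hitting-set formulation can be illustrated by a brute-force enumeration (an exponential-time sketch, not an output-sensitive algorithm; the function name is hypothetical):

```python
from itertools import combinations

def minimal_hitting_sets(family):
    """Enumerate all minimal hitting sets of a family of sets by brute
    force: try subsets in order of increasing size, keeping those that
    hit every set and contain no previously found hitting set."""
    family = [set(s) for s in family]
    universe = set().union(*family)
    hits = lambda h: all(h & s for s in family)
    found = []
    for k in range(len(universe) + 1):
        for combo in combinations(sorted(universe), k):
            h = set(combo)
            if hits(h) and not any(f <= h for f in found):
                found.append(h)
    return found

# Hyperedges of a triangle; its minimal hitting sets are the three
# two-vertex covers, the hyperedges of the transversal hypergraph.
print(minimal_hitting_sets([{1, 2}, {2, 3}, {1, 3}]))
```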

Another version of the problem can be formulated as a problem of "exact learning" in computational learning theory: given access to a subroutine for evaluating a monotone Boolean function, reconstruct both the CNF and DNF representations of the function, using a small number of function evaluations. However, it is crucial in analyzing the complexity of this problem that both the CNF and DNF representations are output. If only the CNF representation of an unknown monotone function is output, it follows from information theory that the number of function evaluations must sometimes be exponential in the combined input and output sizes. This is because (to be sure of getting the correct answer) the algorithm must evaluate the function at least once for each prime implicate and at least once for each prime implicant, but this number of evaluations can be exponentially larger than the number of prime implicates alone.

It is also possible to express a variant of the monotone dualization problem as a decision problem, with a Boolean answer:
 * Test whether two prime CNF expressions represent dual functions.
 * Test whether a prime CNF expression and a prime DNF expression represent the same function.
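The second decision variant can be stated directly as an exhaustive check (a $$2^n$$-time brute-force sketch; the encoding of clauses as sets of variable indices is a choice made here for illustration, and the open question is whether the test can be done in polynomial time):

```python
from itertools import product

def eval_cnf(clauses, assignment):
    # A CNF clause (set of variable indices) is a disjunction of variables.
    return all(any(assignment[v] for v in c) for c in clauses)

def eval_dnf(terms, assignment):
    # A DNF clause is a conjunction of variables.
    return any(all(assignment[v] for v in t) for t in terms)

def same_function(cnf, dnf, n):
    """Do a monotone CNF and a monotone DNF represent the same function?"""
    return all(eval_cnf(cnf, a) == eval_dnf(dnf, a)
               for a in product([False, True], repeat=n))

# (x0 or x1) and x2  equals  (x0 and x2) or (x1 and x2)
print(same_function([{0, 1}, {2}], [{0, 2}, {1, 2}], 3))  # → True
```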

It is an open problem whether monotone dualization has a polynomial time algorithm (in any of these equivalent forms). The fastest algorithms known run in quasi-polynomial time. The size of the output of the dualization and exact learning problems can be exponentially large, as a function of the number of variables or the input size. For instance, an $$n$$-vertex graph consisting of $$n/3$$ disjoint triangles has $$3^{n/3}$$ hyperedges in its transversal hypergraph. Therefore, what is desired for these problems is an output-sensitive algorithm, one that takes a small amount of time per output clause. The decision, dualization, and exact learning formulations of the problem are all computationally equivalent, in the following sense: any one of them can be solved using a subroutine for any other of these problems, with a number of subroutine calls that is polynomial in the combined input and output sizes of the problems. Therefore, if any one of these problems could be solved in polynomial time, they all could. However, the best time bound that is known for these problems is quasi-polynomial time. It remains an open problem whether they can be solved in polynomial time.
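The disjoint-triangles example can be verified by brute force for a small case (a self-contained sketch with $$n = 9$$; vertex numbering is arbitrary):

```python
from itertools import combinations

# An n-vertex graph made of n/3 disjoint triangles has 3^(n/3) minimal
# vertex covers, i.e. minimal hitting sets of its edge set.
n = 9
edges = [frozenset({3 * t + i, 3 * t + j})
         for t in range(n // 3) for i, j in [(0, 1), (0, 2), (1, 2)]]
hits = lambda h: all(e & h for e in edges)
covers = [set(c) for k in range(n + 1)
          for c in combinations(range(n), k) if hits(set(c))]
# A cover is minimal if no proper subset is also a cover.
minimal = [h for h in covers if not any(g < h for g in covers)]
print(len(minimal))  # → 27 = 3^(9/3)
```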

Equivalence of decision, enumeration, and exact learning
The problem of finding the prime CNF expression for the dual function of a monotone function, given as a CNF formula, can be solved by finding the DNF expression for the given function and then dualizing it. Therefore, finding the dual CNF expression, and finding the DNF expression for the (primal) given function, have the same complexity. This problem can also be seen as a special case of the exact learning formulation of the problem. From a given CNF expression, it is straightforward to evaluate the function that it expresses. An exact learning algorithm will return both the starting CNF expression and the desired DNF expression. Therefore, dualization can be no harder than exact learning.

It is also straightforward to solve the decision problem given an algorithm for dualization: dualize the given CNF expression and then test whether it is equal to the given DNF expression. Therefore, research in this area has focused on the other direction of this equivalence: solving the exact learning problem (or the dualization problem) given a subroutine for the decision problem.

The following algorithm solves exact learning using a subroutine for the decision problem. Each iteration through the outer loop of the algorithm uses a linear number of calls to the decision problem to find an unforced truth assignment, uses a linear number of function evaluations to find a minimal true or maximal false function value, and adds one clause to the output. Therefore, the total number of calls to the decision problem and the total number of function evaluations is polynomial in the total output size.
 * Initialize sets of the prime CNF and prime DNF clauses that have been identified so far, initially empty.
 * Repeat the following steps:
 * Use the decision problem to test whether the current sets of prime CNF and prime DNF clauses are dual, and if so terminate the algorithm, returning the clauses that have been found.
 * Construct a truth assignment to the variables for which the function value is neither forced to be true by the known prime DNF clauses, nor forced to be false by the known prime CNF clauses. This construction may be performed by choosing values for the variables one at a time, at each step using the decision problem to preserve the property that the CNF and DNF clauses are non-dual when restricted to the chosen truth assignment.
 * Evaluate the function at this truth assignment. If it is true, then try changing variables one at a time from true to false to find a minimal truth assignment for which the function still evaluates as true. This minimal truth assignment corresponds to a prime DNF clause, not already known; add this to the set of known clauses.
 * Symmetrically, if the function evaluates to false, then try changing variables one at a time from false to true to find a maximal truth assignment for which the function still evaluates as false. This maximal truth assignment corresponds to a prime CNF clause, not already known; add this to the set of known clauses.
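The minimization and maximization steps above can be sketched as follows, assuming the monotone function is available as a hypothetical Python oracle `f` on tuples of truth values:

```python
def minimize_true(f, assignment):
    """Flip variables True->False one at a time while f stays true. The
    result is a minimal true assignment; its true variables form a prime
    DNF clause (prime implicant) of the monotone function f."""
    a = list(assignment)
    for i in range(len(a)):
        if a[i]:
            a[i] = False
            if not f(tuple(a)):
                a[i] = True  # flipping i would falsify f; keep it true
    return tuple(a)

def maximize_false(f, assignment):
    """Flip variables False->True one at a time while f stays false. The
    result is a maximal false assignment; its false variables form a
    prime CNF clause (prime implicate) of f."""
    a = list(assignment)
    for i in range(len(a)):
        if not a[i]:
            a[i] = True
            if f(tuple(a)):
                a[i] = False  # flipping i would satisfy f; keep it false
    return tuple(a)

# Example oracle: f = (x0 and x1) or x2.
f = lambda a: (a[0] and a[1]) or a[2]
print(minimize_true(f, (True, True, True)))      # prime implicant {x2}
print(maximize_false(f, (False, False, False)))  # prime implicate (x1 or x2)
```

The result can depend on the order in which variables are tried, but for a monotone function any result is a minimal true (or maximal false) assignment, so some prime clause is always produced.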

Quasi-polynomial time
A central result in the study of this problem, by Michael Fredman and Leonid Khachiyan, is that monotone dualization (in any of its equivalent forms) can be solved in quasi-polynomial time. Their algorithms directly solve the decision problem, but can be converted to the other forms of the monotone dualization problem as described above. Alternatively, in cases where the answer to the decision problem is no, the algorithms can be modified to return a witness: a truth assignment for which the input formulas fail to determine the function value. The main idea of their method is first to "clean" the decision problem instance, by removing redundant information and directly solving certain easy cases of the problem. In the remaining cases, it branches on a carefully chosen variable, recursively calling the same algorithm on two smaller subproblems: one for the restricted monotone function in which the variable has been set to true, and one in which it has been set to false. The cleaning step ensures the existence of a variable that belongs to many clauses, causing a significant reduction in the size of the recursive subproblems.

In more detail, the first and slower of the two algorithms of Fredman and Khachiyan performs the steps below. When this algorithm branches on a variable occurring in many clauses, those clauses are eliminated from one of the two recursive calls; using this fact, the running time of the algorithm can be bounded by an exponential function of $$(\log n)^3$$.
 * Remove any clause that is not minimal among the given set of clauses. (That is, the removed clause uses a set of variables that is a superset of the variables in another clause of the same type.)
 * If the two sets of clauses (CNF and DNF in one version of the decision problem, or sets of CNF clauses that are supposed to represent two dual functions in another version) do not cover the same sets of variables, return that they are not dual.
 * If two clauses from different sets of clauses use disjoint sets of variables, return that they are not dual. In this case, the clauses imply contradictory function values for any truth assignment that is consistent with both of them.
 * If any clause in one class uses a number of variables that is larger than the number of clauses in the other class, return that they are not dual. If such a clause is minimal, removing any one variable from it cannot produce a valid clause for the same function, so each removal must be blocked by a clause from the other class; but there are not enough clauses in the other class to block every removal.
 * For each clause, count the number of truth assignments whose function value is determined by the clause. This number is $$2^{n-k}$$, for a clause with $$k$$ variables in a problem with $$n$$ variables. If the sum of these numbers, added over all clauses of both types, is fewer than the $$2^n$$ truth assignments that exist in total, then return that the two sets of clauses are not dual: at least one truth assignment must have a value that they do not determine.
 * If either set of clauses is empty, or both consist of only one clause, handle the problem as a special case in constant time.
 * In the remaining cases there exists a variable which occurs in a large fraction of one of the two sets of clauses. Branch on that variable. More precisely, if there are $$m$$ total clauses, then (to cover all of the truth assignments) at least one of the clauses must have at most $$\log_2 m$$ variables. Each clause in the other set of clauses must have a non-empty intersection with this short clause, so one of the variables in the short clause occurs in at least a $$1/\log_2 m$$ fraction of the other set of clauses.
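The cleaning tests and the choice of branch variable can be sketched as follows (a simplified illustration; the empty and single-clause special cases, and the recursion itself, are omitted, and the clause encoding is a choice made here):

```python
def clean_and_check(cnf, dnf, n):
    """Sketch of the easy-case tests from the first Fredman-Khachiyan
    algorithm. Clauses are sets of variable indices from range(n).
    Returns "not dual" if an easy test fails; otherwise returns a
    variable occurring in a large fraction of one side's clauses, on
    which the full algorithm would branch."""
    # Keep only minimal clauses.
    cnf = [c for c in cnf if not any(d < c for d in cnf)]
    dnf = [t for t in dnf if not any(u < t for u in dnf)]
    # Both sides must cover the same set of variables.
    if set().union(*cnf) != set().union(*dnf):
        return "not dual"
    # A disjoint pair of clauses implies contradictory function values.
    if any(not (c & t) for c in cnf for t in dnf):
        return "not dual"
    # No clause may have more variables than the other class has clauses.
    if any(len(c) > len(dnf) for c in cnf) or any(len(t) > len(cnf) for t in dnf):
        return "not dual"
    # The clauses must jointly determine all 2^n truth assignments:
    # a clause with k variables determines 2^(n-k) of them.
    if sum(2 ** (n - len(cl)) for cl in cnf + dnf) < 2 ** n:
        return "not dual"
    # Some clause has at most log2(m) variables, so one of its variables
    # occurs in at least a 1/log2(m) fraction of the other class.
    short = min(cnf + dnf, key=len)
    other = dnf if short in cnf else cnf
    return max(short, key=lambda v: sum(v in cl for cl in other))

# (x0 or x1) and x2 versus (x0 and x2) or (x1 and x2): all easy tests
# pass, and x2, occurring in every DNF clause, is the branch variable.
print(clean_and_check([{0, 1}, {2}], [{0, 2}, {1, 2}], 3))  # → 2
```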

A second algorithm of Fredman and Khachiyan has a similar overall structure, but in the case where the branch variable occurs in many clauses of one set and few of the other, it chooses the first of the two recursive calls to be the one where setting the branch variable significantly reduces the number of clauses. If that recursive call fails to find an inconsistency, then, instead of performing a single recursive call for the other branch, it performs one call for each clause that contains the branch variable, on a restricted subproblem in which all the other variables of that clause have been assigned in the same way. Its running time is an exponential function of $$(\log n)^2$$.

Polynomial special cases
Many special cases of the monotone dualization problem have been shown to be solvable in polynomial time through the analysis of their parameterized complexity. These include:
 * Dualization of CNF or DNF formulas in which each variable appears in a bounded number of clauses, or exact learning of monotone functions that have formulas of this type.
 * Constructing transversal hypergraphs of uniformly sparse hypergraphs, in which every induced sub-hypergraph has bounded average degree, and of hypergraphs for which generalizations of the graph-theoretic concepts of treewidth or degeneracy are bounded.
 * Constructing transversal hypergraphs for which the complement (the hypergraph obtained by complementing each hyperedge) has low degree.

Applications
One application of monotone dualization involves group testing for fault detection and isolation in the model-based diagnosis of complex systems. From a collection of observations of faulty behavior of a system, each with some set of active components, one can surmise that the faulty components causing this misbehavior are likely to form a minimal hitting set of this family of sets.

In biochemical engineering, the enumeration of hitting sets has been used to identify subsets of metabolic reactions whose removal from a system adjusts the balance of the system in a desired direction. Analogous methods have also been applied to other biological interaction networks, for instance in the design of microarray experiments that can be used to infer protein interactions in biological systems.

In recreational mathematics, in the design of sudoku puzzles, the problem of designing a system of clues that has a given grid of numbers as its unique solution can be formulated as a minimal hitting set problem. The 81 candidate clues from the given grid are the elements to be selected in the hitting set, and the sets to be hit are the sets of candidate clues that can eliminate each alternative solution. Thus, the enumeration of minimal hitting sets can be used to find all systems of clues that have a given solution. This approach has been used as part of a computational proof that it is not possible to design a valid sudoku puzzle with only 16 clues.