Transitive closure

In mathematics, the transitive closure $R^+$ of a homogeneous binary relation $R$ on a set $X$ is the smallest relation on $X$ that contains $R$ and is transitive. For finite sets, "smallest" can be taken in its usual sense of having the fewest related pairs; for infinite sets, $R^+$ is the unique minimal transitive superset of $R$.

For example, if $X$ is a set of airports and $x R y$ means "there is a direct flight from airport $x$ to airport $y$" (for $x$ and $y$ in $X$), then the transitive closure of $R$ on $X$ is the relation $R^+$ such that $x R^+ y$ means "it is possible to fly from $x$ to $y$ in one or more flights".
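The flight example can be sketched in code: the relation is a set of ordered pairs, and a fixed-point loop adds a pair $(x, z)$ whenever $(x, y)$ and $(y, z)$ are already present. The airport codes are hypothetical.

```python
def transitive_closure(relation):
    """Naive fixed-point computation of the transitive closure of a
    finite relation, represented as a set of ordered pairs."""
    closure = set(relation)
    while True:
        # All pairs (x, w) obtainable by chaining two pairs in the closure.
        new_pairs = {(x, w) for x, y in closure for z, w in closure if y == z}
        if new_pairs <= closure:
            return closure
        closure |= new_pairs

# Hypothetical direct-flight relation.
flights = {("SEA", "SFO"), ("SFO", "LAX"), ("LAX", "JFK")}
reachable = transitive_closure(flights)
# ("SEA", "JFK") is in the closure: one can fly SEA -> SFO -> LAX -> JFK.
```

The loop terminates on any finite relation because the closure can only grow, and it is bounded by the finite Cartesian square of the underlying set.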

More formally, the transitive closure of a binary relation $R$ on a set $X$ is the smallest (with respect to ⊆) transitive relation $R^+$ on $X$ such that $R$ ⊆ $R^+$. We have $R^+$ = $R$ if, and only if, $R$ itself is transitive.

Conversely, transitive reduction determines a minimal relation $S$ from a given relation $R$ such that the two have the same closure, that is, $S^+ = R^+$; however, many different $S$ with this property may exist.

Both transitive closure and transitive reduction are also used in the closely related area of graph theory.

Transitive relations and examples
A relation R on a set X is transitive if, for all x, y, z in X, whenever x R y and y R z then x R z. Examples of transitive relations include the equality relation on any set, the "less than or equal" relation on any linearly ordered set, and the relation "x was born before y" on the set of all people. Symbolically, this can be denoted as: if x < y and y < z then x < z.

One example of a non-transitive relation is "city x can be reached via a direct flight from city y" on the set of all cities. The existence of a direct flight from one city to a second, and of a direct flight from the second city to a third, does not imply that there is a direct flight from the first city to the third. The transitive closure of this relation is a different relation, namely "there is a sequence of direct flights that begins at city x and ends at city y". Every relation can be extended in a similar way to a transitive relation.

An example of a non-transitive relation with a less meaningful transitive closure is "x is the day of the week after y". The transitive closure of this relation is "some day x comes after a day y on the calendar", which is trivially true for all days of the week x and y (and thus equivalent to the Cartesian square, which is "x and y are both days of the week").

Existence and description
For any relation R, the transitive closure of R always exists. To see this, note that the intersection of any family of transitive relations is again transitive. Furthermore, there exists at least one transitive relation containing R, namely the trivial one: X × X. The transitive closure of R is then given by the intersection of all transitive relations containing R.

For finite sets, we can construct the transitive closure step by step, starting from R and adding transitive edges. This gives the intuition for a general construction. For any set X, the transitive closure of R is given by the following expression:
 * $$R^{+}=\bigcup_{i = 1}^{\infty} R^i.$$

where $$R^i$$ is the i-th power of R, defined inductively by
 * $$R^1 = R$$

and, for $$i>0$$,
 * $$R^{i+1} = R \circ R^i$$

where $$\circ$$ denotes composition of relations.
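The union-of-powers construction above can be sketched directly: `compose` implements $R \circ S$, and successive powers of $R$ are unioned until the result stops growing (on a finite set the union stabilizes, since once a power is contained in the accumulated union, so are all later powers).

```python
def compose(r, s):
    """Composition of relations: (a, c) is related when some b has
    (a, b) in r and (b, c) in s."""
    return {(a, c) for a, b in r for b2, c in s if b == b2}

def closure_by_powers(r):
    """Transitive closure as the union R ∪ R² ∪ R³ ∪ …, truncated once
    the next power adds nothing new."""
    result, power = set(r), set(r)
    while True:
        power = compose(r, power)   # R^(i+1) = R ∘ R^i
        if power <= result:
            return result
        result |= power

R = {(1, 2), (2, 3), (3, 4)}
# closure_by_powers(R) adds the chained pairs (1, 3), (2, 4), and (1, 4).
```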

To show that the above definition of $R^+$ is the least transitive relation containing $R$, we show that it contains $R$, that it is transitive, and that it is the smallest set with both of those characteristics.


 * $$R \subseteq R^{+}$$: $$ R^+$$ contains all of the $$ R^i$$, so in particular $$ R^+$$ contains $$ R$$.
 * $$ R^{+}$$ is transitive: If $$(s_1, s_2), (s_2, s_3)\in R^+$$, then $$(s_1, s_2)\in R^j$$ and $$(s_2, s_3)\in R^k$$ for some $$j,k$$ by definition of $$R^+$$. Since composition is associative, $$R^{j+k} = R^j \circ R^k$$; hence $$(s_1, s_3)\in R^{j+k} \subseteq R^+$$ by definition of $$\circ$$ and $$R^+$$.
 * $$R^{+}$$ is minimal, that is, if $$T$$ is any transitive relation containing $$R$$, then $$R^{+} \subseteq T$$: Given any such $$T$$, induction on $$i$$ can be used to show $$R^i\subseteq T$$ for all $$i$$ as follows: Base: $$R^1 = R \subseteq T$$ by assumption. Step: If $$R^i\subseteq T$$ holds, and $$(s_1, s_3)\in R^{i+1} = R \circ R^i$$, then $$(s_1, s_2) \in R$$ and $$(s_2, s_3)\in R^i$$ for some $$s_2$$, by definition of $$\circ$$. Hence, $$(s_1, s_2), (s_2, s_3)\in T$$ by assumption and by induction hypothesis. Hence $$(s_1, s_3)\in T$$ by transitivity of $$T$$; this completes the induction. Finally, $$R^i\subseteq T$$ for all $$i$$ implies $$R^{+} \subseteq T$$ by definition of $$R^{+}$$.

Properties
The intersection of two transitive relations is transitive.

The union of two transitive relations need not be transitive; to preserve transitivity, one must take the transitive closure of the union. This occurs, for example, when taking the union of two equivalence relations or of two preorders: to obtain a new equivalence relation or preorder, one must take the transitive closure of the union (reflexivity and symmetry, in the case of equivalence relations, are automatic).
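A small self-contained demonstration of this remark, on a hypothetical 3-element set: the union of two equivalence relations fails transitivity, but its transitive closure is again an equivalence relation.

```python
def transitive_closure(relation):
    """Fixed-point computation of the transitive closure of a finite
    relation given as a set of ordered pairs."""
    closure = set(relation)
    while True:
        new_pairs = {(a, d) for a, b in closure for c, d in closure if b == c}
        if new_pairs <= closure:
            return closure
        closure |= new_pairs

identity = {(x, x) for x in (1, 2, 3)}
E1 = identity | {(1, 2), (2, 1)}   # equivalence with classes {1, 2} and {3}
E2 = identity | {(2, 3), (3, 2)}   # equivalence with classes {1} and {2, 3}

union = E1 | E2                    # contains (1, 2) and (2, 3) but not (1, 3)
closure = transitive_closure(union)
# The closure adds (1, 3) and (3, 1), giving the full square {1,2,3} × {1,2,3}.
```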

In graph theory
In computer science, the concept of transitive closure can be thought of as constructing a data structure that makes it possible to answer reachability questions: can one get from node a to node d in one or more hops? The binary relation itself tells you only that node a is connected to node b, that node b is connected to node c, and so on. After the transitive closure is constructed, one may determine in an O(1) operation that node d is reachable from node a. The data structure is typically stored as a Boolean matrix, so if matrix[1][4] = true, then node 1 can reach node 4 through one or more hops.
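The Boolean-matrix representation described above can be sketched with Warshall's algorithm; after the cubic preprocessing pass, each reachability query is a single O(1) array lookup. The 5-node graph is hypothetical.

```python
def warshall(adj):
    """Given an n×n Boolean adjacency matrix, return the matrix of the
    transitive closure: reach[i][j] is True when j is reachable from i
    in one or more hops."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Allow paths that pass through intermediate node k.
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# Hypothetical 5-node graph: 0 -> 1 -> 2 -> 3 -> 4.
adj = [[False] * 5 for _ in range(5)]
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[a][b] = True
reach = warshall(adj)
# reach[0][4] is True: node 4 is reachable from node 0 in several hops.
```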

The transitive closure of the adjacency relation of a directed acyclic graph (DAG) is the reachability relation of the DAG and a strict partial order.

The transitive closure of an undirected graph produces a cluster graph, a disjoint union of cliques. Constructing the transitive closure is an equivalent formulation of the problem of finding the components of the graph.
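The equivalence noted above can be illustrated with a breadth-first search: in an undirected graph, two vertices are related in the transitive closure exactly when a search from one reaches the other, i.e. when they lie in the same connected component. The adjacency lists below are hypothetical.

```python
from collections import deque

def same_component(adj, u, v):
    """Breadth-first search from u; returns True when v lies in the
    same connected component of the undirected graph."""
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj.get(x, ()):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False

# Hypothetical undirected graph with two components: {a, b, c} and {d, e}.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": ["e"], "e": ["d"]}
# a and c are related in the transitive closure; a and d are not.
```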

In logic and computational complexity
The transitive closure of a binary relation cannot, in general, be expressed in first-order logic (FO). This means that one cannot write a formula using predicate symbols R and T that will be satisfied in any model if and only if T is the transitive closure of R. In finite model theory, first-order logic (FO) extended with a transitive closure operator is usually called transitive closure logic, and abbreviated FO(TC) or just TC. TC is a sub-type of fixpoint logics. The fact that FO(TC) is strictly more expressive than FO was discovered by Ronald Fagin in 1974; the result was then rediscovered by Alfred Aho and Jeffrey Ullman in 1979, who proposed to use fixpoint logic as a database query language. With more recent concepts of finite model theory, proof that FO(TC) is strictly more expressive than FO follows immediately from the fact that FO(TC) is not Gaifman-local.

In computational complexity theory, the complexity class NL corresponds precisely to the set of logical sentences expressible in TC. This is because the transitive closure property has a close relationship with the NL-complete problem STCON of finding directed paths in a graph. Similarly, the class L corresponds to first-order logic with the commutative transitive closure. When transitive closure is added to second-order logic instead, we obtain PSPACE.

In database query languages
Since the 1980s Oracle Database has implemented a proprietary SQL extension that allows the computation of a transitive closure as part of a declarative query. The SQL 3 (1999) standard added a more general construct also allowing transitive closures to be computed inside the query processor; as of 2011 the latter is implemented in IBM Db2, Microsoft SQL Server, Oracle, PostgreSQL, and MySQL (v8.0+). SQLite released support for this in 2014.
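The standard construct mentioned above is a recursive common table expression, which can be exercised against an in-memory SQLite database through Python's stdlib `sqlite3` module (requires SQLite ≥ 3.8.3, the 2014 release noted above). The table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edge (src TEXT, dst TEXT)")
conn.executemany("INSERT INTO edge VALUES (?, ?)",
                 [("a", "b"), ("b", "c"), ("c", "d")])

# Recursive CTE: start from the edges themselves, then repeatedly extend
# each reachable pair by one more edge until a fixed point is reached.
rows = conn.execute("""
    WITH RECURSIVE closure(src, dst) AS (
        SELECT src, dst FROM edge
        UNION
        SELECT closure.src, edge.dst
        FROM closure JOIN edge ON closure.dst = edge.src
    )
    SELECT src, dst FROM closure ORDER BY src, dst
""").fetchall()
# rows now includes ("a", "d"), reachable via a -> b -> c -> d.
```

The `UNION` (rather than `UNION ALL`) deduplicates pairs, which also guarantees termination on cyclic edge tables.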

Datalog also implements transitive closure computations.

MariaDB implements Recursive Common Table Expressions, which can be used to compute transitive closures. This feature was introduced in release 10.2.2 of April 2016.

Algorithms
Efficient algorithms for computing the transitive closure of the adjacency relation of a graph are known. Reducing the problem to multiplications of adjacency matrices achieves the time complexity of matrix multiplication, $$O(n^{2.3728596})$$. However, this approach is not practical since both the constant factors and the memory consumption for sparse graphs are high. The problem can also be solved by the Floyd–Warshall algorithm in $$O(n^3)$$, or by repeated breadth-first search or depth-first search starting from each node of the graph.
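The repeated-search approach can be sketched as one depth-first search per node, which runs in O(n·(n + m)) time overall on a graph with n nodes and m edges:

```python
def closure_by_dfs(adj):
    """For each node of a directed graph (adjacency-list dict), run an
    iterative DFS and record the set of nodes reachable in >= 1 hop."""
    closure = {}
    for start in adj:
        stack, seen = list(adj[start]), set()
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj.get(v, ()))
        closure[start] = seen
    return closure

# Hypothetical chain 1 -> 2 -> 3; node 1 reaches {2, 3}.
adj = {1: [2], 2: [3], 3: []}
```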

For directed graphs, Purdom's algorithm solves the problem by first computing the condensation of the graph (a DAG) and the transitive closure of that condensation, then lifting the result to the original graph. Its runtime is $$O(m+\mu n)$$, where $$\mu$$ is the number of edges between the strongly connected components.

More recent research has explored efficient ways of computing transitive closure on distributed systems based on the MapReduce paradigm.