Syntax and semantics of logic programming

Logic programming is a programming paradigm that includes languages based on formal logic, including Datalog and Prolog. This article describes the syntax and semantics of the purely declarative subset of these languages. Confusingly, the name "logic programming" also refers to a programming language that roughly corresponds to the declarative subset of Prolog. Unfortunately, the term must be used in both senses in this article.

Declarative logic programs consist entirely of rules of the form

    H :- B1, ..., Bn.

Each such rule can be read as an implication:


$$B_1\land\ldots\land B_n\rightarrow H$$

meaning "If each $$B_i$$ is true, then $$H$$ is true". Logic programs compute the set of facts that are implied by their rules.

Many implementations of Datalog, Prolog, and related languages add procedural features such as Prolog's cut operator or extra-logical features such as a foreign function interface. The formal semantics of such extensions are beyond the scope of this article.

Datalog
Datalog is the simplest widely-studied logic programming language. There are three major definitions of the semantics of Datalog, and they are all equivalent. The syntax and semantics of other logic programming languages are extensions and generalizations of those of Datalog.

Syntax
A Datalog program consists of a list of rules (Horn clauses). If constant and variable are two countable sets of constants and variables respectively and relation is a countable set of predicate symbols, then the following BNF grammar expresses the structure of a Datalog program:

    <program>   ::= <rule> <program> | ""
    <rule>      ::= <atom> ":-" <atom-list> "."
    <atom>      ::= <relation> "(" <term-list> ")"
    <atom-list> ::= "" | <atom> | <atom> "," <atom-list>
    <term-list> ::= <term> | <term> "," <term-list>
    <term>      ::= <constant> | <variable>

Atoms are also referred to as literals. The atom to the left of the :- symbol is called the head of the rule; the atoms to the right are the body. Every Datalog program must satisfy the condition that every variable that appears in the head of a rule also appears in the body of that rule (this condition is sometimes called the range restriction).

Rules with empty bodies are called facts. For example, the following rule is a fact:

    r(x) :- .

Syntactic sugar
Many implementations of logic programming extend the above grammar to allow writing facts without the :-, like so:

    r(x).

Many also allow writing 0-ary relations without parentheses; for example, writing p rather than p().

These are merely abbreviations (syntactic sugar); they have no impact on the semantics of the program.

Example
The following program computes the relation path, which is the transitive closure of the relation edge.

    edge(x, y).
    edge(y, z).
    path(A, B) :- edge(A, B).
    path(A, C) :- path(A, B), edge(B, C).

Semantics
There are three widely-used approaches to the semantics of Datalog programs: model-theoretic, fixed-point, and proof-theoretic. These three approaches can be proven to be equivalent.

An atom is called ground if none of its subterms are variables. Intuitively, each of the semantics defines the meaning of a program to be the set of all ground atoms that can be deduced from the rules of the program, starting from the facts.

Model theoretic
[Figure: Hasse diagram of the Herbrand interpretations of a Datalog program. The interpretation $$M$$ is the minimal Herbrand model; all interpretations above it are also models, while all interpretations below it are not.]

A rule is called ground if all of its atoms (head and body) are ground. A ground rule R1 is a ground instance of another rule R2 if R1 is the result of a substitution of constants for all the variables in R2.

The Herbrand base of a Datalog program is the set of all ground atoms that can be made with the constants appearing in the program. An interpretation (also known as a database instance) is a subset of the Herbrand base. A ground atom is true in an interpretation $I$ if it is an element of $I$. A rule is true in an interpretation $I$ if for each ground instance of that rule, if all the clauses in the body are true in $I$, then the head of the rule is also true in $I$.

A model of a Datalog program P is an interpretation $I$ of P which contains all the ground facts of P, and makes all of the rules of P true in $I$. Model-theoretic semantics state that the meaning of a Datalog program is its minimal model (equivalently, the intersection of all its models).
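For a ground program, this definition can be checked mechanically. The following Python sketch tests whether an interpretation is a model; the two-rule ground program and its atom names are invented for illustration, with atoms encoded as tuples:

```python
# Sketch: checking whether an interpretation is a model of a *ground*
# Datalog program. A rule is a (head, body) pair of ground atoms.
rules = [
    (("r", "x"), []),            # fact: r(x).
    (("s", "x"), [("r", "x")]),  # ground rule: s(x) :- r(x).
]

def is_model(interp, rules):
    # I is a model iff every rule whose body atoms all lie in I
    # also has its head in I. Facts have empty bodies, so their
    # heads must always be present.
    return all(head in interp
               for head, body in rules
               if all(atom in interp for atom in body))

print(is_model({("r", "x"), ("s", "x")}, rules))  # True: the minimal model
print(is_model({("r", "x")}, rules))              # False: s(x) is missing
print(is_model(set(), rules))                     # False: the fact r(x) is missing
```

Note that the full Herbrand base is always a model; minimality is what singles out the intended one.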

For example, this program:

    r(x) :- .
    r(y) :- .
    q(z) :- .
    s(A, B) :- r(A), q(B).

has this Herbrand universe (the set of constants appearing in it): x, y, z

this Herbrand base: r(x), r(y), r(z), q(x), ..., s(x, x), ..., s(z, z)

and this minimal Herbrand model: r(x), r(y), q(z), s(x, z), s(y, z)

Fixed-point
Let $I$ be the set of interpretations of a Datalog program P, that is, $I = \mathcal{P}(H)$, where H is the Herbrand base of P and $\mathcal{P}$ is the powerset operator. The immediate consequence operator for P is the following map $T$ from $I$ to $I$: for each ground instance of each rule in P, if every atom in the body is in the input interpretation, then add the head of the ground instance to the output interpretation. This map $T$ is monotonic with respect to the partial order on $I$ given by subset inclusion. By the Knaster–Tarski theorem, this map has a least fixed point; by the Kleene fixed-point theorem, the least fixed point is the supremum of the chain $$T(\emptyset), T(T(\emptyset)), \ldots, T^n(\emptyset), \ldots$$ The least fixed point of $T$ coincides with the minimal Herbrand model of the program.

The fixpoint semantics suggest an algorithm for computing the minimal Herbrand model: Start with the set of ground facts in the program, then repeatedly add consequences of the rules until a fixpoint is reached. This algorithm is called naïve evaluation.
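As an illustration (not a production implementation), naïve evaluation can be sketched in Python. The encoding below is an assumption made for this example: ground atoms are tuples, strings beginning with an uppercase letter are variables, and the rules are the hypothetical edge/path transitive-closure example:

```python
# A sketch of naive bottom-up evaluation of a Datalog program.
rules = [
    (("path", "A", "B"), [("edge", "A", "B")]),
    (("path", "A", "C"), [("path", "A", "B"), ("edge", "B", "C")]),
]
facts = {("edge", "x", "y"), ("edge", "y", "z")}

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def match(atom, fact, subst):
    # Try to extend subst so that atom matches the ground fact.
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    s = dict(subst)
    for a, f in zip(atom[1:], fact[1:]):
        if is_var(a):
            if s.setdefault(a, f) != f:
                return None
        elif a != f:
            return None
    return s

def immediate_consequence(interp):
    # One application of T: add the head of every ground rule instance
    # whose body atoms are all present in the input interpretation.
    out = set(interp)
    for head, body in rules:
        substs = [{}]
        for atom in body:
            substs = [s2 for s in substs for f in interp
                      if (s2 := match(atom, f, s)) is not None]
        for s in substs:
            out.add(tuple(s.get(t, t) for t in head))
    return out

# Naive evaluation: iterate T starting from the facts until a fixed point.
model = facts
while (nxt := immediate_consequence(model)) != model:
    model = nxt
```

Here `model` ends up containing the two edge facts plus path(x, y), path(y, z), and path(x, z): the minimal Herbrand model of this example program.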

Proof-theoretic
[Image: Proof tree showing the derivation of a ground atom from a Datalog program.]

Given a program $P$, a proof tree of a ground atom $A$ is a tree with a root labeled by $A$, leaves labeled by ground atoms from the heads of facts in $P$, and internal nodes labeled by ground atoms $G$ with children labeled $$A_1, \ldots, A_n$$ such that there exists a ground instance

    G :- A1, ..., An.

of a rule in $P$. The proof-theoretic semantics defines the meaning of a Datalog program to be the set of ground atoms that can be derived from such proof trees. This set coincides with the minimal Herbrand model.

One might be interested in knowing whether or not a particular ground atom appears in the minimal Herbrand model of a Datalog program, without necessarily computing the rest of the model. A top-down reading of the proof trees described above suggests an algorithm for computing the results of such queries. Such a reading informs the SLD resolution algorithm, which forms the basis for the evaluation of Prolog.
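The contrast with bottom-up evaluation can be made concrete with a toy Python sketch of goal-directed evaluation for a propositional (0-ary) program; full SLD resolution additionally handles variables through unification. The program encoded below is a hypothetical example:

```python
# Sketch of top-down (goal-directed) evaluation for a propositional program.
# Each relation name maps to the list of rule bodies that can derive it.
rules = {
    "a": [["b", "c"]],   # a :- b, c.
    "b": [["d"]],        # b :- d.
    "c": [[]],           # c.  (fact)
    "d": [[]],           # d.  (fact)
}

def prove(goal):
    # Try each rule for the goal; succeed if some body is fully provable.
    # (No cycle detection: on a recursive program this depth-first search
    # may loop forever, just as Prolog's evaluation strategy can.)
    return any(all(prove(subgoal) for subgoal in body)
               for body in rules.get(goal, []))

print(prove("a"))  # True: a :- b, c; b :- d; and c, d are facts
print(prove("e"))  # False: no rule derives e
```

Unlike naïve evaluation, this explores only the atoms relevant to the query rather than the whole minimal model.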

Other approaches
The semantics of Datalog have also been studied in the context of fixpoints over more general semirings.

Logic programming
While the name "logic programming" is used to refer to the entire paradigm of programming languages including Datalog and Prolog, when discussing formal semantics it generally refers to an extension of Datalog with function symbols. Logic programs in this sense are also called Horn clause programs. Logic programming as discussed in this article is closely related to the "pure" or declarative subset of Prolog.

Syntax
The syntax of logic programming extends the syntax of Datalog with function symbols. Logic programming also drops the range restriction: a variable may appear in the head of a rule without appearing in its body.

Semantics
Due to the presence of function symbols, the Herbrand models of logic programs can be infinite. However, the semantics of a logic program is still defined to be its minimal Herbrand model. Relatedly, the fixpoint of the immediate consequence operator may not converge in a finite number of steps (or to a finite set). However, any ground atom in the minimal Herbrand model will have a finite proof tree. This is why Prolog is evaluated top-down. Just as in Datalog, the three semantics can be proven equivalent.

Negation
Logic programming has the desirable property that all three major definitions of the semantics of logic programs agree. In contrast, there are many conflicting proposals for the semantics of logic programs with negation. The source of the disagreement is that logic programs have a unique minimal Herbrand model, but in general, logic programming (or even Datalog) programs with negation do not.

Syntax
Negation is written not, and can appear in front of any atom in the body of a rule.

Stratified negation
A logic program with negation is stratified when it is possible to assign each relation to some stratum, such that if a relation $R$ appears negated in the body of a rule defining a relation $S$, then $R$ is in a lower stratum than $S$. The model-theoretic and fixed-point semantics of Datalog can be extended to handle stratified negation, and such extensions can be proved equivalent.

Many implementations of Datalog use a bottom-up evaluation model inspired by the fixed point semantics. Since this semantics can handle stratified negation, several implementations of Datalog implement stratified negation.
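Stratifiability is also easy to check mechanically: equivalently, a program is stratifiable exactly when its predicate dependency graph has no cycle that passes through a negative edge. A Python sketch of that check (the predicate names are hypothetical):

```python
# Sketch: a program with negation is stratifiable iff its predicate
# dependency graph has no cycle through a negative edge.
# Edges are (source, target, negated?) triples.
def stratifiable(edges):
    # Compute reachability by iterating to a fixed point, then reject
    # any negative edge that lies on a cycle.
    nodes = {n for e in edges for n in e[:2]}
    reach = {n: {n} for n in nodes}
    changed = True
    while changed:
        changed = False
        for src, dst, _ in edges:
            new = reach[dst] - reach[src]
            if new:
                reach[src] |= new
                changed = True
    # A negative edge src -> dst lies on a cycle iff dst can reach src.
    return not any(neg and src in reach[dst] for src, dst, neg in edges)

# path depends positively on edge and on itself: stratifiable.
print(stratifiable([("path", "edge", False), ("path", "path", False)]))  # True
# win depends negatively on itself (as in the game program below): not stratifiable.
print(stratifiable([("win", "move", False), ("win", "win", True)]))      # False
```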

While stratified negation is a common extension to Datalog, there are reasonable programs that cannot be stratified. The following program describes a two-player game in which a player wins if their opponent has no moves:

    move(a, b).
    win(X) :- move(X, Y), not win(Y).

This program is not stratified, since win appears negated in the body of a rule defining win. Yet it seems reasonable to think that a should win the game: a can move to b, and b has no moves.

Stable model semantics
The stable model semantics define a condition for calling certain Herbrand models of a program stable. Intuitively, stable models are the "possible sets of beliefs that a rational agent might hold, given [the program]" as premises.

A program with negation may have many stable models or no stable models. For instance, the program

    p :- not q.
    q :- not p.

has two stable models, $$\{p\}$$ and $$\{q\}$$. The one-rule program

    p :- not p.

has no stable models.

Every stable model is a minimal Herbrand model. A Datalog program without negation has a single stable model, which is exactly its minimal Herbrand model. The stable model semantics defines the meaning of a logic program with negation to be its stable model, if there is exactly one. However, it can be useful to investigate all (or at least several) of the stable models of a program; this is the goal of answer set programming.
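For small propositional programs, the stable-model condition can be tested directly via the Gelfond–Lifschitz reduct: delete every rule whose negative body intersects the candidate interpretation, drop the remaining negative literals, and compare the candidate with the least model of the resulting negation-free program. A Python sketch, using the two-rule program above:

```python
from itertools import chain, combinations

# Sketch: stable models of a propositional program via the
# Gelfond-Lifschitz reduct. A rule is (head, positive_body, negative_body).
def minimal_model(positive_rules):
    # Least model of a negation-free program: iterate to a fixed point.
    model = set()
    while True:
        new = {h for h, pos in positive_rules if pos <= model} - model
        if not new:
            return model
        model |= new

def is_stable(interp, program):
    # Reduct: drop rules whose negative body meets interp, then delete
    # the remaining negative literals and take the least model.
    reduct = [(h, pos) for h, pos, neg in program if not (neg & interp)]
    return minimal_model(reduct) == interp

program = [("p", set(), {"q"}),   # p :- not q.
           ("q", set(), {"p"})]   # q :- not p.
atoms = {"p", "q"}
subsets = chain.from_iterable(combinations(atoms, r) for r in range(3))
stable = [set(s) for s in subsets if is_stable(set(s), program)]
print(stable)  # the two stable models: {p} and {q}
```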

Further extensions
Several other extensions of Datalog have been proposed and studied, including variants with support for integer constants and functions (including DatalogZ), inequality constraints in the bodies of rules, and aggregate functions.

Constraint logic programming allows for constraints over domains such as the reals or integers to appear in the bodies of rules.