Rule of inference

In logic and the philosophy of logic, a rule of inference, inference rule or transformation rule is a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion (or conclusions).

For example, the rule of inference called modus ponens takes two premises, one in the form "If p then q" and another in the form "p", and returns the conclusion "q". The rule is valid with respect to the semantics of classical logic (as well as the semantics of many other non-classical logics), in the sense that if the premises are true (under an interpretation), then so is the conclusion.

Typically, a rule of inference preserves truth, a semantic property. In many-valued logic, it preserves designation, a generalization of truth. But a rule of inference's action is purely syntactic, and it need not preserve any semantic property: any function from sets of formulae to formulae counts as a rule of inference. Usually only recursive rules are important, i.e. rules for which there is an effective procedure to determine whether any given formula is the conclusion of a given set of formulae according to the rule. An example of a rule that is not effective in this sense is the infinitary ω-rule.

Popular rules of inference in propositional logic include modus ponens, modus tollens, and contraposition. First-order predicate logic uses rules of inference to deal with logical quantifiers.
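The validity of these three rules can be confirmed by brute force over all classical valuations; the following sketch checks that none of them ever leads from true premises to a false conclusion:

```python
from itertools import product

def implies(p, q):
    """Classical material implication."""
    return (not p) or q

def rules_truth_preserving():
    """Check, over every classical valuation of p and q, that modus ponens,
    modus tollens, and contraposition preserve truth."""
    for p, q in product([True, False], repeat=2):
        if implies(p, q) and p and not q:                # modus ponens fails?
            return False
        if implies(p, q) and not q and p:                # modus tollens fails?
            return False
        if not implies(implies(p, q), implies(not q, not p)):  # contraposition
            return False
    return True

print(rules_truth_preserving())  # True
```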

Standard form
In formal logic (and many related areas), rules of inference are usually given in the following standard form:

$$\begin{array}{c} \text{Premise}\#1 \\ \text{Premise}\#2 \\ \vdots \\ \text{Premise}\#n \\ \hline \text{Conclusion} \end{array}$$

This expression states that whenever in the course of some logical derivation the given premises have been obtained, the specified conclusion can be taken for granted as well. The exact formal language that is used to describe both premises and conclusions depends on the actual context of the derivations. In a simple case, one may use logical formulae, such as in:


$$\begin{array}{c} A \to B \\ A \\ \hline B \end{array}$$

This is the modus ponens rule of propositional logic. Rules of inference are often formulated as schemata employing metavariables. In the rule (schema) above, the metavariables A and B can be instantiated to any element of the universe (or sometimes, by convention, a restricted subset such as propositions) to form an infinite set of inference rules.

A proof system is formed from a set of rules chained together to form proofs, also called derivations. Any derivation has only one final conclusion, which is the statement proved or derived. If premises are left unsatisfied in the derivation, then the derivation is a proof of a hypothetical statement: "if the premises hold, then the conclusion holds."
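Chaining can be illustrated with a minimal sketch (reusing the hypothetical tuple encoding of implications): repeatedly applying modus ponens closes a set of premises under the rule, and every formula added along the way is derived:

```python
# Hypothetical encoding: "p implies q" is the tuple ("->", p, q).
def derive(premises):
    """Close a set of formulas under modus ponens, returning everything
    derivable from the premises by chained applications of the rule."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(known):
            if isinstance(f, tuple) and f[0] == "->" and f[1] in known and f[2] not in known:
                known.add(f[2])
                changed = True
    return known

# From A, A -> B and B -> C, two chained uses of modus ponens derive C.
result = derive({"A", ("->", "A", "B"), ("->", "B", "C")})
print("C" in result)  # True
```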

Example: Hilbert systems for two propositional logics
In a Hilbert system, the premises and conclusion of the inference rules are simply formulae of some language, usually employing metavariables. For graphical compactness of the presentation and to emphasize the distinction between axioms and rules of inference, this section uses the sequent notation ($$\vdash$$) instead of a vertical presentation of rules. In this notation,

$$\begin{array}{c} \text{Premise } 1 \\ \text{Premise } 2 \\ \hline \text{Conclusion} \end{array}$$

is written as $$(\text{Premise } 1), (\text{Premise } 2) \vdash (\text{Conclusion})$$.

The formal language for classical propositional logic can be expressed using just negation (¬), implication (→) and propositional symbols. A well-known axiomatization, comprising three axiom schemata and one inference rule (modus ponens), is:

(CA1) ⊢ A → (B → A)
(CA2) ⊢ (A → (B → C)) → ((A → B) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(MP) A, A → B ⊢ B
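Each of these axiom schemata is a classical tautology, and modus ponens preserves truth; a brute-force truth-table check (a sketch) confirms both facts:

```python
from itertools import product

def imp(p, q):
    """Classical material implication."""
    return (not p) or q

def axioms_are_tautologies():
    """Check CA1-CA3 under every valuation, and that MP never leads
    from true premises to a false conclusion."""
    for a, b, c in product([True, False], repeat=3):
        if not imp(a, imp(b, a)):                                   # (CA1)
            return False
        if not imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c))):   # (CA2)
            return False
        if not imp(imp(not a, not b), imp(b, a)):                   # (CA3)
            return False
        if a and imp(a, b) and not b:                               # (MP)
            return False
    return True

print(axioms_are_tautologies())  # True
```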

It may seem redundant to have two notions of inference in this case, ⊢ and →. In classical propositional logic, they indeed coincide: the deduction theorem states that A ⊢ B if and only if ⊢ A → B. There is, however, a distinction worth emphasizing even in this case: the first notation describes a deduction, that is, an activity of passing from sentences to sentences, whereas A → B is simply a formula made with a logical connective, implication in this case. Without an inference rule (like modus ponens in this case), there is no deduction or inference. This point is illustrated in Lewis Carroll's dialogue "What the Tortoise Said to Achilles", as well as in later attempts by Bertrand Russell and Peter Winch to resolve the paradox introduced in the dialogue. For some non-classical logics, the deduction theorem does not hold. For example, the three-valued logic of Łukasiewicz can be axiomatized as:

(CA1) ⊢ A → (B → A)
(LA2) ⊢ (A → B) → ((B → C) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(LA4) ⊢ ((A → ¬A) → A) → A
(MP) A, A → B ⊢ B

This axiomatization differs from the classical one by the change in axiom 2 (LA2 replaces CA2) and the addition of axiom 4 (LA4). The classical deduction theorem does not hold for this logic; however, a modified form does: A ⊢ B if and only if ⊢ A → (A → B).
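These axioms can be checked numerically. In the standard Łukasiewicz semantics, formulas take values in {0, ½, 1}, implication is a → b = min(1, 1 − a + b), negation is ¬a = 1 − a, and 1 is the only designated value. The following sketch confirms that all four axioms always take the designated value, while the classical (CA2) does not:

```python
from itertools import product

VALUES = (0, 0.5, 1)

def imp(a, b):
    return min(1, 1 - a + b)   # Łukasiewicz implication

def neg(a):
    return 1 - a               # Łukasiewicz negation

def axioms_designated():
    """Check that every axiom takes the designated value 1 under every
    three-valued valuation."""
    for a, b, c in product(VALUES, repeat=3):
        if imp(a, imp(b, a)) != 1:                          # (CA1)
            return False
        if imp(imp(a, b), imp(imp(b, c), imp(a, c))) != 1:  # (LA2)
            return False
        if imp(imp(neg(a), neg(b)), imp(b, a)) != 1:        # (CA3)
            return False
        if imp(imp(imp(a, neg(a)), a), a) != 1:             # (LA4)
            return False
    return True

print(axioms_designated())  # True

# The classical (CA2) fails at a = b = 1/2, c = 0, which is why it is replaced:
a, b, c = 0.5, 0.5, 0
print(imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c))))  # 0.5
```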

Admissibility and derivability
In a set of rules, an inference rule could be redundant in the sense that it is admissible or derivable. A derivable rule is one whose conclusion can be derived from its premises using the other rules. An admissible rule is one whose conclusion holds whenever the premises hold. All derivable rules are admissible. To appreciate the difference, consider the following set of rules for defining the natural numbers (the judgment $$n\,\,\mathsf{nat}$$ asserts the fact that $$n$$ is a natural number):


$$\begin{matrix} \begin{array}{c}\\ \hline \mathbf{0}\,\,\mathsf{nat} \end{array} & \begin{array}{c} n\,\,\mathsf{nat} \\ \hline \mathbf{s(}n\mathbf{)}\,\,\mathsf{nat} \end{array} \end{matrix}$$

The first rule states that 0 is a natural number, and the second states that s(n) is a natural number if n is. In this proof system, the following rule, demonstrating that the second successor of a natural number is also a natural number, is derivable:


$$\begin{array}{c} n\,\,\mathsf{nat} \\ \hline \mathbf{s(s(}n\mathbf{))}\,\,\mathsf{nat} \end{array}$$

Its derivation is the composition of two uses of the successor rule above. The following rule for asserting the existence of a predecessor for any nonzero number is merely admissible:


$$\begin{array}{c} \mathbf{s(}n\mathbf{)}\,\,\mathsf{nat} \\ \hline n\,\,\mathsf{nat} \end{array}$$

This is a true fact of natural numbers, as can be proven by induction. (To prove that this rule is admissible, assume a derivation of the premise and induct on it to produce a derivation of $$n \,\,\mathsf{nat}$$.) However, it is not derivable, because it depends on the structure of the derivation of the premise. Because of this, derivability is stable under additions to the proof system, whereas admissibility is not. To see the difference, suppose the following nonsense rule were added to the proof system:


$$\begin{array}{c}\\ \hline \mathbf{s(-3)}\,\,\mathsf{nat} \end{array}$$

In this new system, the double-successor rule is still derivable. However, the rule for finding the predecessor is no longer admissible, because there is no way to derive $$\mathbf{-3} \,\,\mathsf{nat}$$. The brittleness of admissibility comes from the way it is proved: since the proof can induct on the structure of the derivations of the premises, extensions to the system add new cases to this proof, which may no longer hold.
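The distinction can be made concrete by representing derivations as data (a hypothetical encoding chosen for illustration): the double-successor rule is derivable because it is literally a composition of existing rules, while the predecessor rule must inspect the structure of the premise's derivation:

```python
# A derivation of "n nat" is a nested tuple built only from the two rules.
ZERO = ("zero",)            # the axiom: concludes 0 nat

def succ(d):
    """Successor rule: from a derivation of n nat, derive s(n) nat."""
    return ("succ", d)

def conclusion(d):
    """Read off the number whose nat-hood the derivation establishes."""
    return 0 if d[0] == "zero" else 1 + conclusion(d[1])

def double_succ(d):
    """Derivable rule: just two chained uses of the successor rule."""
    return succ(succ(d))

def pred(d):
    """Admissible rule: justified only by inspecting the premise's derivation,
    which must end in the successor rule."""
    assert d[0] == "succ", "premise must derive s(n) nat for some n"
    return d[1]

two = double_succ(ZERO)
print(conclusion(two))        # 2
print(conclusion(pred(two)))  # 1
```

Adding the nonsense rule above would amount to a new derivation constructor that `pred` does not handle, which is exactly why admissibility is not stable under extensions while derivability is.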

Admissible rules can be thought of as theorems of a proof system. For instance, in a sequent calculus where cut elimination holds, the cut rule is admissible.