Pure inductive logic

Pure inductive logic (PIL) is the area of mathematical logic concerned with the philosophical and mathematical foundations of probabilistic inductive reasoning. It combines classical predicate logic and probability theory (Bayesian inference). Probability values are assigned to sentences of a first-order relational language to represent degrees of belief that should be held by a rational agent. Conditional probability values represent degrees of belief based on the assumption of some received evidence.

PIL studies prior probability functions on the set of sentences and evaluates the rationality of such prior probability functions through principles that such functions should arguably satisfy. Each principle requires the function to assign probability values, and conditional probability values, to sentences in a way that is rational in some respect. Not all desirable principles of PIL are compatible, so no prior probability function exists that satisfies them all. Some prior probability functions, however, are distinguished by satisfying an important collection of principles.

History
Inductive logic started to take a clearer shape in the early 20th century in the work of William Ernest Johnson and John Maynard Keynes, and was further developed by Rudolf Carnap. Carnap introduced the distinction between pure and applied inductive logic, and the modern Pure Inductive Logic evolves along the lines of the pure, uninterpreted approach envisaged by Carnap.

General case
In its basic form, PIL uses first-order logic without equality, with the usual connectives $$\wedge, \vee, \neg, \to$$ (and, or, not and implies respectively), quantifiers $$\exists, \forall$$, finitely many predicate (relation) symbols, and countably many constant symbols $$a_1, a_2, a_3, \ldots \,$$.

There are no function symbols. The predicate symbols can be unary, binary or of higher arities. The finite set of predicate symbols may vary while the rest of the language is fixed. It is a convention to refer to the language as $$L$$ and write


 * $$L = \{R_1, R_2, \ldots, R_q\}$$

where the $$R_i$$ list the predicate symbols. The set of all sentences is denoted $$SL$$. When a sentence is written with a list of the constants appearing in it, it is assumed that the list includes at least all the constants that appear in the sentence. $${\cal T}L$$ is the set of structures for $$L$$ with universe $$\{a_1, a_2, a_3, \ldots\}$$ and with each constant symbol $$a_i$$ interpreted as itself.

A probability function for sentences of $$L$$ is a function $$w$$ with domain $$SL$$ and values in the unit interval $$[0,1]$$ satisfying the following conditions:


 * – any logically valid sentence $$\theta$$ has probability $$1\!:\,$$ $$w(\theta)=1$$


 * – if sentences $$\theta$$ and $$\phi$$ are mutually exclusive then $$w(\theta \vee \phi)= w(\theta) + w(\phi)$$


 * – for a formula $$\psi(x)$$ with one free variable the probability of $$\exists x \, \psi(x)$$ is the limit of probabilities of $$\psi(a_1) \vee \psi(a_2) \vee \ldots \vee \psi(a_n)$$ as $$n$$ tends to $$\infty$$.

This last condition, which goes beyond the standard Kolmogorov axioms (which require only finite additivity), is referred to as Gaifman's Axiom; it is intended to capture the idea that the $$a_i$$ exhaust the universe.
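For a concrete illustration (a sketch, not part of the formal development): under a probability function that treats the constants independently and assigns a single unary predicate $$R$$ probability $$p$$ at each constant, the disjunction $$R(a_1) \vee \ldots \vee R(a_n)$$ has probability $$1-(1-p)^n$$, so Gaifman's Axiom forces $$w(\exists x\, R(x)) = 1$$ whenever $$p > 0$$:

```python
# Gaifman's Axiom illustrated for one unary predicate R, under a
# function treating constants independently with w(R(a_i)) = p:
# w(R(a_1) v ... v R(a_n)) = 1 - (1-p)**n, which increases to 1,
# so the axiom forces w(Exists x R(x)) = 1 for any p > 0.
p = 0.3
vals = [1 - (1 - p) ** n for n in (1, 5, 50, 500)]

assert all(a < b for a, b in zip(vals, vals[1:]))  # monotone in n
assert abs(vals[-1] - 1.0) < 1e-12                 # limit is 1
```
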

For a probability function $$w$$ and a sentence $$\phi$$ with $$w(\phi)>0$$, the corresponding conditional probability function $$ w(\,. |\, \phi)$$ is defined by


 * $$w(\theta \mid \phi) = \frac{w(\theta \wedge \phi)}{w(\phi)} \quad\ (\theta \in SL).$$

Unlike belief functions in many-valued logics, the probability value of a compound sentence is not determined by the probability values of its components. Probability does, however, respect the classical semantics: logically equivalent sentences must be given the same probability, and hence logically equivalent sentences are often identified.

A state description for a finite set of constants is a conjunction of atomic sentences (predicates or their negations) instantiated exclusively by these constants, such that for any eligible atomic sentence either it or its negation (but not both) appears in the conjunction.

Any probability function is uniquely determined by its values on state descriptions. To define a probability function, it suffices to specify nonnegative values of all state descriptions for $$a_1, \ldots,a_n$$ (for all $$n$$) so that the values of all state descriptions for $$a_1, \ldots,a_n, a_{n+1}$$ extending a given state description for $$a_1, \ldots,a_n$$ sum to the value of the state description they all extend, with the convention that the (only) state description for no constants is a tautology and that has value $$1$$.
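The extension-sum condition can be checked concretely. A minimal sketch for a language with a single unary predicate, encoding a state description for $$a_1, \ldots, a_n$$ as an $$n$$-tuple of truth values; the uniform assignment $$2^{-n}$$ is used purely as an example of a legitimate choice of values:

```python
from itertools import product

# Toy unary language with one predicate R: a state description for
# a_1..a_n is a tuple of booleans (True = R(a_i), False = ~R(a_i)).
# Example assignment: give 2**-n to every state description for n
# constants (the unary c_infinity function of this article).

def w(state):
    return 0.5 ** len(state)

def extensions(state):
    """All state descriptions for a_1..a_{n+1} extending `state`."""
    return [state + (b,) for b in (False, True)]

# Extension-sum condition: the values of the state descriptions
# extending a given one must sum to the value of that description.
for state in product([False, True], repeat=3):
    total = sum(w(ext) for ext in extensions(state))
    assert abs(total - w(state)) < 1e-12

# The empty state description (a tautology) must receive value 1.
assert w(()) == 1.0
```
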

If $$\Theta$$ is a state description for a set of constants including $$a_i,a_j$$ then it is said that $$a_i,a_j$$ are indistinguishable in $$\Theta$$, $$a_i \sim_\Theta a_j$$, just when upon adding equality to the language (and axioms of equality to the logic) the sentence $$\Theta \wedge a_i=a_j$$ is consistent. $$\,\sim_\Theta$$ is an equivalence relation.

Unary case
In the special case of Unary PIL, all the predicates $$R_1, \ldots, R_q$$ are unary. Formulae of the form
 * $$\beta(x) = \pm R_1(x)\wedge \pm R_2(x) \wedge \ldots \wedge \pm R_q(x)$$

where $$\pm R $$ stands for one of $$R$$, $$\neg R$$, are called atoms. It is assumed that they are listed in some fixed order as $$\beta_1, \beta_2,\ldots, \beta_{2^q}$$.

A state description specifies an atom for each constant involved in it, and it can be written as a conjunction of these atoms instantiated by the corresponding constants. Two constants are indistinguishable in the state description if it specifies the same atom for both of them.
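The atoms and the unary indistinguishability relation can be enumerated directly. A small sketch for $$q = 2$$, with a hypothetical encoding of atoms as sign vectors:

```python
from itertools import product

q = 2  # two unary predicates R1, R2

# The 2**q atoms: each atom fixes a sign for every predicate.
atoms = list(product([1, -1], repeat=q))  # (1, -1) encodes R1(x) & ~R2(x)
assert len(atoms) == 2 ** q

def atom_str(atom):
    """Render an atom in the notation of the article."""
    return " & ".join(("" if s == 1 else "~") + f"R{i+1}(x)"
                      for i, s in enumerate(atom))

assert atom_str(atoms[0]) == "R1(x) & R2(x)"

# A state description for n constants picks one atom per constant;
# two constants are indistinguishable iff they receive the same atom.
state = {"a1": atoms[0], "a2": atoms[0], "a3": atoms[3]}
indistinguishable = [(c, d) for c in state for d in state
                     if c < d and state[c] == state[d]]
assert indistinguishable == [("a1", "a2")]
```
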

Central question
Assume a rational agent inhabits a structure in $${\cal T}L$$ but knows nothing about which one it is. What probability function $$w$$ should the agent adopt when $$w(\theta)$$ is to represent the agent's degree of belief that a sentence $$\theta$$ is true in this ambient structure?

General rational principles
The following principles have been proposed as desirable properties of a rational prior probability function $$w$$ for $$L$$.

The constant exchangeability principle, Ex. The probability of a sentence $$\theta(a_1,a_2, \ldots, a_m) $$ does not change when the $$a_1, a_2, \ldots, a_m$$ in it are replaced by any other $$m$$-tuple of (distinct) constants.

The principle of predicate exchangeability, Px. If $$R,R'$$ are predicates of the same arity then for a sentence $$\theta$$,
 * $$w(\theta)=w(\theta')$$

where $$\theta'$$ is the result of simultaneously replacing $$R$$ by $$R'$$ and $$R'$$ by $$R$$ throughout $$\theta$$.

The strong negation principle, SN. For a predicate $$R$$ and sentence $$\theta $$,
 * $$w(\theta)=w(\theta')$$

where $$\theta'$$ is the result of simultaneously replacing $$R$$ by $$\neg R$$ and $$\neg R$$ by $$R$$ throughout $$\theta$$.

The principle of regularity, Reg. If a quantifier-free sentence $$\theta $$ is satisfiable then $$w(\theta) >0$$. The principle of super regularity (universal certainty), SReg. If a sentence $$\theta $$ is satisfiable then $$w(\theta) >0$$.

The constant irrelevance principle, IP. If sentences $$\theta, \phi $$ have no constants in common then $$w(\theta \wedge \phi) = w(\theta) \cdot w(\phi)$$.

The weak irrelevance principle, WIP. If sentences $$\theta, \phi $$ have no constants nor predicates in common then $$w(\theta \wedge \phi) = w(\theta) \cdot w(\phi)$$.

Language invariance principle, Li. There is a family of probability functions $$w^{J}$$, one on each language $$J$$, all satisfying Px and Ex, and such that $$w^L=w$$ and if all predicates of $$J$$ belong also to $$K$$ then $$w^J$$ and $$w^K$$ agree on sentences of $$J$$.

The (strong) counterpart principle, CP. If $$\theta, \theta' $$ are sentences such that $$\theta'$$ is the result of replacing some constant/relation symbols in $$\theta$$ by new constant/relation symbols of the same arity not occurring in $$\theta$$ then
 * $$w(\theta \mid \theta') \geq w(\theta). $$

(SCP) If moreover $$\theta''$$ is the result of replacing the same and possibly also additional constant/relation symbols in $$\theta$$ by new constant/relation symbols of the same arity not occurring in $$\theta$$ then
 * $$w(\theta \mid \theta') \geq w(\theta \mid \theta'') \geq w(\theta). $$

The Invariance Principle, INV. If $$F$$ is an isomorphism of the Lindenbaum-Tarski algebra of sentences of $$L$$ supported by some permutation $$\mu$$ of $${\cal T} L$$ in the sense that for sentences $$\theta, \phi$$,
 * $$F([\theta]) = [\phi]~$$ just when $$~ M \models \theta \Longleftrightarrow \mu(M) \models \phi$$

then $$w(\theta) = w(\phi)$$.

The Permutation Invariance Principle, PIP. As INV except that $$F$$ is additionally required to map (equivalence classes of) state descriptions to (equivalence classes of) state descriptions.

The Spectrum Exchangeability Principle, Sx. The probability $$w(\Theta)$$ of a state description $$\Theta$$ depends only on the spectrum of $$\Theta$$, that is, on the multiset of sizes of equivalence classes with respect to the equivalence relation $$\sim_\Theta$$.
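In the unary case the equivalence classes of $$\sim_\Theta$$ are exactly the groups of constants receiving the same atom, so the spectrum is easy to compute. A sketch, using a hypothetical encoding of a unary state description as a tuple of atom indices:

```python
from collections import Counter

# Spectrum of a unary state description: two constants are
# ~_Theta-equivalent iff they receive the same atom, so the spectrum
# is the multiset of sizes of the atom-count classes.
def spectrum(theta):
    """theta: tuple of atom indices, one per constant."""
    return sorted(Counter(theta).values())

# These two state descriptions share the spectrum {1, 3}, so Sx
# requires any probability function satisfying it to assign them
# equal probability, whatever atoms are actually involved.
assert spectrum((0, 0, 0, 1)) == [1, 3]
assert spectrum((2, 1, 2, 2)) == [1, 3]
```
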

Li with Sx. As the Language Invariance Principle but all the probability functions in the family also satisfy Spectrum Exchangeability.

The Principle of Induction, PI. Let $$\Theta $$ be a state description and $$a_k$$ a constant not appearing in $$\Theta$$. Let $$\Phi$$, $$\Psi$$ be state descriptions extending $$\Theta$$ to include (just) $$a_k$$. If $$a_k$$ is $$\sim_\Phi$$-equivalent to at least one constant, and to at least as many constants as it is $$\sim_\Psi$$-equivalent to, then $$w(\Phi\mid \Theta) \geq w(\Psi \mid \Theta)$$.

Further rational principles for unary PIL
The Principle of Instantial Relevance, PIR. For a sentence $$\theta$$, atom $$\beta$$ and constants $$a_k,a_m$$ not appearing in $$\theta$$,
 * $$w(\beta(a_k) \mid \beta(a_m) \wedge \theta) \geq w(\beta(a_k) \mid \theta)$$.

The Generalized Principle of Instantial Relevance, GPIR. For quantifier-free sentences $$\psi(a_k), \phi(a_m), \theta $$ with constants $$a_k,a_m$$ not appearing in $$\theta$$, if $$\psi(x) \models \phi(x)$$ then
 * $$ w( \psi(a_{k}) \mid \phi(a_{m}) \wedge \theta) \geq w( \psi(a_{k}) \mid \theta).$$

Johnson Sufficientness Principle, JSP. For a state description $$\Theta$$ for $$n$$ constants, atom $$\beta$$ and constant $$a_k$$ not appearing in $$\Theta$$, the probability
 * $$w(\beta(a_k)\mid \Theta)$$

depends only on $$n$$ and on the number of constants for which $$\Theta$$ specifies $$\beta$$.

The Principle of Atom Exchangeability, Ax. If $$\tau$$ is a permutation of $$\{1,2, \ldots, 2^q\}$$ and $$\Theta$$ is a state description expressed as a conjunction of instantiated atoms then $$w(\Theta)=w(\Theta')$$ where $$\Theta'$$ obtains from $$\Theta$$ upon replacing each $$\beta_i$$ by $$\beta_{\tau(i)}$$.

Reichenbach's Axiom, RA. Let $$ \beta_{h_i}$$ for $$i=1,2,3,\ldots$$ be an infinite sequence of atoms and $$\beta$$ an atom. Then as $$n $$ tends to $$\infty$$, the difference between the conditional probability
 * $$w(\beta(a_{n+1}) \mid \beta_{h_1}(a_1) \wedge \beta_{h_2}(a_2) \wedge \ldots \wedge \beta_{h_n}(a_n))$$

and the proportion of occurrences of $$\beta$$ amongst the $$\beta_{h_1}, \beta_{h_2}, \ldots ,\beta_{h_n}$$ tends to $$0$$.
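The Carnap functions $$c_\lambda$$ (defined below, with conditional probability $$(m_j + \lambda 2^{-q})/(n + \lambda)$$) satisfy Reichenbach's Axiom for finite $$\lambda$$, since the gap to the observed proportion is at most $$\lambda/(n+\lambda)$$. A numerical sketch with illustrative values $$q = 2$$, $$\lambda = 3$$:

```python
# Reichenbach's Axiom for the Carnap function c_lambda: the gap
# between the conditional probability (m_j + lam*2**-q)/(n + lam)
# and the observed proportion m_j/n shrinks to 0 as n grows.
q, lam = 2, 3.0

def carnap_cond(m_j, n):
    return (m_j + lam * 2 ** -q) / (n + lam)

# Illustrative evidence sequence: atom beta_j occurs half the time.
gaps = []
for n in (10, 100, 1000, 10000):
    m_j = n // 2
    gaps.append(abs(carnap_cond(m_j, n) - m_j / n))

assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))  # gap shrinks
assert gaps[-1] < 1e-3                                 # and tends to 0
```
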

Principle of Induction for Unary languages, UPI. For a state description $$\Theta$$, atoms $$\beta_i, \beta_j$$ and constant $$a_k$$ not appearing in $$\Theta$$, if $$\Theta$$ specifies $$\beta_i$$ for at least as many constants as $$\beta_j$$ then
 * $$w(\beta_i(a_k)\mid \Theta) \geq w(\beta_j(a_k)\mid \Theta).$$

Recovery. For any state description $$\Psi(a_1,a_2, \ldots, a_n)$$ there is another state description $$\Phi(a_{n+1}, a_{n+2}, \ldots, a_{h})$$ such that $$w(\Phi \wedge \Psi) \neq 0$$ and for any quantifier-free sentence $$\theta(a_{h+1}, a_{h+2}, \ldots, a_{h+g})$$,
 * $$w(\theta(a_{h+1}, a_{h+2}, \ldots, a_{h+g})\,|\,\Phi \wedge \Psi) = w(\theta(a_{h+1}, a_{h+2}, \ldots, a_{h+g})).$$

Unary Language Invariance Principle, ULi. As Li, but with the languages restricted to the unary ones.

ULi with Ax. As ULi but with all the probability functions in the family also satisfying Atom Exchangeability.

Relations between the principles

General case
Sx implies Ex, Px and SN.

PIP + Ex implies Sx.

INV implies PIP and Ex.

Li implies CP and SCP.

Li with Sx implies PI.

Unary case
Ex implies PIR.

Ax is equivalent to PIP.

Ax+Ex implies UPI.

Ax+Ex is equivalent to Sx.

ULi with Ax implies Li with Sx.

General probability functions
Functions $$V_M$$. For a given structure $$M \in {\cal T} L$$ and $$\theta \in SL$$,
 * $$V_M(\theta)= \left\{ \begin{array}{ll} 1& {\rm if}~ M\models \theta,\\ 0&{\rm otherwise}.\end{array} \right.$$

Functions $$\omega^{\Psi}$$. For a given state description $$\Psi(a_1,a_2, \ldots, a_K)$$, $$\,\omega^{\Psi}$$ is defined via specifying its values for state descriptions as follows. $$\,\omega^{\Psi}(\Theta(a_1,a_2, \ldots, a_n))$$ is the probability that when $$a_{h_1},a_{h_2}, \ldots, a_{h_n}$$ are randomly picked from $$\{a_1, \ldots,a_K\}$$, with replacement and according to the uniform distribution, then $$ \Psi(a_1, \ldots, a_K) \models \Theta(a_{h_1}, a_{h_2}, \ldots, a_{h_n}).$$

Functions $$^\circ \! (\omega^\Psi)$$. As above but employing a non-standard universe (starting with a possibly non-standard state description $$\Psi$$) to obtain the standard $$^\circ \! (\omega^\Psi)$$.

$$\bullet $$ The $$^\circ \! (\omega^{\Psi})$$ are the only probability functions that satisfy Ex and IP.

Functions $$u^{\overline{p}}$$. For a given infinite sequence $$\overline{p} = \langle p_0,p_1,p_2,p_3, \ldots \rangle$$ of non-negative real numbers such that
 * $$p_1 \geq p_2 \geq p_3 \geq \ldots \geq 0\, \, $$ and $$~\sum_{i=0}^\infty p_i = 1$$,

$$u^{\overline{p}}$$ is defined via specifying its values for state descriptions as follows:

For a sequence $$\vec{c} = \langle c_1,c_2, \ldots, c_n\rangle$$ of natural numbers and a state description $$\Theta(a_{1}, a_{2}, \ldots, a_{n})$$, $$\Theta$$ is consistent with $$\vec{c}$$ if whenever $$c_s=c_t \neq 0$$ then $$a_{s} \sim_\Theta a_{t}$$. $$C(\vec{c})$$ is the number of state descriptions for $$a_{1}, a_{2}, \ldots, a_{n}$$ consistent with $$\vec{c}$$. $$\,u^{\overline{p}}(\Theta) $$ is the sum, over those $$\vec{c} $$ with which $$\Theta$$ is consistent, of
 * $$ C(\vec{c})^{-1} \prod_{s=1}^n p_{c_s}.$$

$$\bullet $$ The $$u^{\overline{p}}$$ are the only probability functions that satisfy WIP and Li with Sx. (The language invariant family witnessing Li with Sx consists of the functions $$u^{\overline{p}, J}$$ with fixed $$\overline{p}$$, where $$u^{\overline{p}, J}$$ is as $$u^{\overline{p}}$$ but defined with language $$J$$.)

Further probability functions (unary PIL)
Functions $$w_{\vec{c}}$$. For a vector $$\vec{c} = \langle c_1,c_2, \ldots, c_{2^q}\rangle$$ of non-negative real numbers summing to one, $$w_{\vec{c}}$$ is defined via specifying its values for state descriptions as follows:
 * $$w_{\vec{c}}(\Theta) = \prod_{j=1}^{2^q} c_{j}^{m_j}$$

where $$m_j$$ is the number of constants for which $$\Theta$$ specifies $$\beta_j$$.

$$\bullet $$ The $$w_{\vec{c}}$$ are the only probability functions that satisfy Ex and IP (they are also expressible as $$^\circ \! (\omega^{\Psi})$$).
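A minimal sketch of $$w_{\vec{c}}$$ for $$q = 1$$ (two atoms), with a hypothetical vector $$\vec{c}$$; the product form makes both the extension-sum condition and constant irrelevance (IP) immediate:

```python
from itertools import product
from math import prod

# w_vec_c for a unary language: w(Theta) = prod_j c_j**m_j, where
# m_j counts the constants to which Theta assigns atom beta_j.
c = [0.7, 0.3]  # hypothetical atom probabilities for q = 1

def w_vec_c(theta):
    """theta: tuple of atom indices, one per constant."""
    return prod(c[j] for j in theta)

# It is a probability function: the values of all state descriptions
# for n constants sum to 1 (here n = 4).
total = sum(w_vec_c(t) for t in product(range(len(c)), repeat=4))
assert abs(total - 1.0) < 1e-12

# Constant irrelevance (IP) follows at once from the product form:
assert abs(w_vec_c((0, 1)) - w_vec_c((0,)) * w_vec_c((1,))) < 1e-12
```
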

Carnap continuum functions $$c_{\lambda}.\,$$ For $$\lambda>0$$, the probability function $$c_\lambda$$ is uniquely determined by the values
 * $$c_\lambda(\beta_j(a_{n+1}) \mid \Theta) = \frac{m_j + \lambda2^{-q}}{n + \lambda}$$

where $$\Theta$$ is a state description for $$n$$ constants not including $$a_{n+1}$$ and $$m_j$$ is the number of constants for which $$\Theta$$ specifies $$\beta_j$$.

Furthermore, $$c_\infty$$ is the probability function that assigns $$2^{-nq}$$ to every state description for $$n$$ constants, and $$c_0$$ is the probability function that assigns $$2^{-q} $$ to any state description in which all constants are indistinguishable and $$0$$ to any other state description.

$$\bullet$$ The $$c_\lambda$$ are the only probability functions that satisfy Ex and JSP.

$$\bullet$$ They also satisfy Li – the functions $$c^{J}_\lambda$$ with fixed $$\lambda$$, where $$c^{J}_\lambda$$ is as $$c_\lambda$$ but defined with language $$J$$ provide the unary language-invariant family members.
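Since $$c_\lambda$$ is determined by its conditional values, the value of a state description follows by the chain rule. A sketch with illustrative values $$q = 2$$, $$\lambda = 2$$:

```python
from itertools import product

# Carnap continuum c_lambda via the chain rule, using
# c_lambda(beta_j(a_{n+1}) | Theta) = (m_j + lam*2**-q)/(n + lam).
q, lam = 2, 2.0
A = 2 ** q  # number of atoms

def c_lambda(theta):
    """theta: tuple of atom indices, one per constant a_1, a_2, ..."""
    counts = [0] * A
    p = 1.0
    for n, j in enumerate(theta):
        p *= (counts[j] + lam / A) / (n + lam)
        counts[j] += 1
    return p

# A probability function: values for n constants sum to 1 (n = 3).
total = sum(c_lambda(t) for t in product(range(A), repeat=3))
assert abs(total - 1.0) < 1e-12

# Ex holds: the value depends only on the atom counts, not on which
# constants receive which atom.
assert abs(c_lambda((0, 1, 0)) - c_lambda((0, 0, 1))) < 1e-12
```
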

Functions $$w^{\delta}$$. For $$-(2^q-1)^{-1} \leq \delta \leq 1$$, $$w^{\delta}$$ is the average of the $$2^q$$ functions $$w_{\vec{e}_i}$$, where each $$\vec{e}_i$$ has all but one coordinate equal to each other and the odd coordinate differing from them by $$\delta$$, so
 * $$w^\delta = 2^{-q} \sum_{i=1}^{2^q} w_{\vec{e}_i}$$

where $$\vec{e}_i = \langle \gamma, \gamma, \ldots, \gamma, \gamma + \delta, \gamma, \ldots, \gamma \rangle$$ ($$\gamma+\delta$$ in the $$i$$th place) and $$\gamma = 2^{-q}(1-\delta)$$.

For $$0\leq \delta \leq 1$$, the $$w^{\delta}$$ are equal to $$u^{\bar{p}}$$ for
 * $$\bar{p} = \langle 1-\delta, \delta, 0,0,0,\ldots \rangle$$

and as such they satisfy Li.

$$\bullet $$ The $$w^{\delta}$$ are the only functions that satisfy GPIR, Ex, Ax and Reg.

$$\bullet $$ The $$w^{\delta}$$ with $$0\leq\delta <1$$ are the only functions that satisfy Recovery, Reg and ULi with Ax.
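A sketch of $$w^\delta$$ for $$q = 1$$ with an illustrative $$\delta = 0.4$$, showing that averaging the two product functions $$w_{\vec{e}_i}$$ yields a probability function satisfying Atom Exchangeability (Ax):

```python
from itertools import product
from math import prod

# w^delta for q = 1 (two atoms): average of w_{e_1} and w_{e_2},
# where e_i has gamma + delta in the i-th place and gamma elsewhere,
# with gamma = 2**-q * (1 - delta).
q, delta = 1, 0.4
gamma = 2 ** -q * (1 - delta)
e = [(gamma + delta, gamma), (gamma, gamma + delta)]

def w_vec(c, theta):
    return prod(c[j] for j in theta)

def w_delta(theta):
    return sum(w_vec(ci, theta) for ci in e) / len(e)

# Ax: swapping the two atoms throughout a state description leaves
# its probability unchanged (each single w_{e_i} fails this).
theta = (0, 0, 1)
swapped = tuple(1 - j for j in theta)
assert abs(w_delta(theta) - w_delta(swapped)) < 1e-12

# Still a probability function: values for n = 3 constants sum to 1.
total = sum(w_delta(t) for t in product((0, 1), repeat=3))
assert abs(total - 1.0) < 1e-12
```
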

Representation theorems
A representation theorem for a class of probability functions provides means of expressing every probability function in the class in terms of generic, relatively simple probability functions from the same class.

Representation Theorem for all probability functions. Every probability function $$w$$ for $$L$$ can be represented as
 * $$w= \int_{{\cal T} L} V_M \,d\mu(M)$$

where $$\mu$$ is a $$\sigma$$-additive measure on the $$\sigma$$-algebra of subsets of $${\cal T} L$$ generated by the sets
 * $$\{\, M \in {\cal T} L \mid M \vDash \theta\,\} ~ (\theta \in SL).$$

Representation Theorem for Ex (employing non-standard analysis and Loeb integration theory). Every probability function $$w$$ for $$L$$ satisfying Ex can be represented as
 * $$w = \int_A \,^\circ\!(\omega^{\Psi}) \, d\mu(\Psi)$$

where $$A$$ is an internal set of state descriptions for $$a_1, a_2, \ldots, a_\nu$$ (with $$\nu$$ a fixed infinite natural number) and $$\mu$$ is a $$\sigma$$-additive measure on a $$\sigma$$-algebra of subsets of $$A$$.

Representation Theorem for Li with Sx. Every probability function $$w$$ for $$L$$ satisfying Li with Sx can be represented as
 * $$w = \int_{\mathbb B} \,u^{\overline{p}}\, d\mu(\overline{p}) $$

where $${\mathbb B}$$ is the set of sequences
 * $$\overline{p} = \langle p_0,p_1,p_2,p_3, \ldots \rangle$$

of non-negative reals summing to $$1$$ and such that $$p_1 \geq p_2 \geq p_3 \geq \ldots \,\geq 0 \,$$ and $$\mu$$ is a $$\sigma$$-additive measure on the Borel subsets of $${\mathbb B}$$ in the product topology.

de Finetti's Representation Theorem (unary). In the unary case (where $$L$$ is a language containing $$q$$ unary predicates), the representation theorem for Ex is equivalent to:

Every probability function $$w$$ for $$L$$ satisfying Ex can be represented as
 * $$ w= \int_{\mathbb D} w_{\vec{x}}\, d\mu(\vec{x})$$

where $${\mathbb D}$$ is the set of vectors $$\vec{x} = \langle x_1,x_2, \ldots, x_{2^q}\rangle$$ of non-negative real numbers summing to one and $$\mu$$ is a $$\sigma$$-additive measure on $${\mathbb D}$$.
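A discrete instance of this representation can be computed directly. In the sketch below (hypothetical weights and vectors, $$q = 1$$), $$\mu$$ puts weight on just two points of $${\mathbb D}$$; the resulting mixture satisfies Ex but, unlike each single $$w_{\vec{x}}$$, not IP:

```python
from math import prod

# Discrete de Finetti-style mixture for q = 1: mu puts weight 0.3 on
# x = (0.9, 0.1) and 0.7 on x = (0.2, 0.8) (illustrative values).
mixture = [(0.3, (0.9, 0.1)), (0.7, (0.2, 0.8))]

def w(theta):
    """theta: tuple of atom indices, one per constant."""
    return sum(m * prod(x[j] for j in theta) for m, x in mixture)

# Ex holds: the value depends only on the atom counts.
assert abs(w((0, 1, 1)) - w((1, 0, 1))) < 1e-12

# IP fails for a proper mixture, although each w_vec_x satisfies it:
assert abs(w((0, 1)) - w((0,)) * w((1,))) > 1e-3
```
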