General recursive function

In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function – the most famous example is the Ackermann function.

Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms.

The subset of all total recursive functions with values in $\{0,1\}$ is known in computational complexity theory as the complexity class R.

Definition
The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the minimization operator $μ$.

The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive, and to be non-primitive.

Primitive or "basic" functions:
 * 1) Constant functions $C_n^k$: For each natural number $n$ and every $k$:
 * $$C_n^k(x_1,\ldots,x_k) \ \stackrel{\mathrm{def}}{=}\ n$$
 * Alternative definitions use instead a zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator.
 * 2) Successor function S:
 * $$S(x) \ \stackrel{\mathrm{def}}{=}\ x + 1\,$$
 * 3) Projection function $$P_i^k$$ (also called the Identity function): For all natural numbers $$i, k$$ such that $$1\le i\le k$$:
 * $$P_i^k(x_1,\ldots,x_k) \ \stackrel{\mathrm{def}}{=}\ x_i \, .$$

Operators (the domain of a function defined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result):
 * 4) Composition operator $$\circ\,$$ (also called the substitution operator): Given an m-ary function $$h(x_1,\ldots,x_m)\,$$ and m k-ary functions $$g_1(x_1,\ldots,x_k),\ldots,g_m(x_1,\ldots, x_k)$$:
 * $$h \circ (g_1, \ldots, g_m) \ \stackrel{\mathrm{def}}{=}\ f, \quad\text{where}\quad f(x_1,\ldots,x_k) = h(g_1(x_1,\ldots,x_k),\ldots,g_m(x_1,\ldots,x_k)).$$
 * This means that $$f(x_1,\ldots,x_k)$$ is defined only if $$g_1(x_1,\ldots,x_k),\ldots, g_m(x_1,\ldots,x_k),$$ and $$h(g_1(x_1,\ldots,x_k),\ldots,g_m(x_1,\ldots,x_k))$$ are all defined.
 * 5) Primitive recursion operator $\rho$: Given the k-ary function $$g(x_1,\ldots,x_k)\,$$ and the (k+2)-ary function $$h(y,z,x_1,\ldots,x_k)\,$$:
 * $$\begin{align}\rho(g, h) &\ \stackrel{\mathrm{def}}{=}\ f \quad\text{where the (k+1)-ary function } f \text{ is defined by}\\ f(0,x_1,\ldots,x_k) &= g(x_1,\ldots,x_k) \\ f(S(y),x_1,\ldots,x_k) &= h(y,f(y,x_1,\ldots,x_k),x_1,\ldots,x_k)\,.\end{align}$$
 * This means that $$f(y,x_1,\ldots,x_k)$$ is defined only if $$g(x_1,\ldots,x_k)$$ and $$h(z,f(z,x_1,\ldots,x_k),x_1,\ldots,x_k)$$ are defined for all $$z<y.$$
 * 6) Minimization operator $\mu$: Given a (k+1)-ary function $$f(y, x_1, \ldots, x_k)\,$$, the k-ary function $$\mu(f)$$ is defined by:
 * $$\begin{align}\mu(f)(x_1, \ldots, x_k) = z \stackrel{\mathrm{def}}{\iff}\ f(i, x_1, \ldots, x_k)&>0 \quad \text{for}\quad i=0, \ldots, z-1 \quad\text{and}\\ f(z, x_1, \ldots, x_k)&=0\quad \end{align}$$
 * Intuitively, minimisation seeks, beginning the search from 0 and proceeding upwards, the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for which $f$ is not defined, then the search never terminates, and $$\mu(f)$$ is not defined for the argument $$(x_1, \ldots, x_k).$$
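The three operators have a natural reading as program constructors. The following Python sketch (function names are my own, not standard) mirrors the definitions above, with Python functions standing in for number-theoretic functions:

```python
def compose(h, *gs):
    """Composition h ∘ (g1, ..., gm): f(x...) = h(g1(x...), ..., gm(x...))."""
    def f(*xs):
        return h(*(g(*xs) for g in gs))
    return f

def primitive_recursion(g, h):
    """ρ(g, h): f(0, x...) = g(x...); f(y+1, x...) = h(y, f(y, x...), x...)."""
    def f(y, *xs):
        acc = g(*xs)
        for i in range(y):          # unfold the recursion iteratively
            acc = h(i, acc, *xs)
        return acc
    return f

def minimize(f):
    """μ(f): smallest z with f(z, x...) == 0; loops forever if none exists."""
    def mu(*xs):
        z = 0
        while f(z, *xs) != 0:
            z += 1
        return z
    return mu
```

For example, addition arises by primitive recursion from the identity and successor: `add = primitive_recursion(lambda a: a, lambda y, acc, a: acc + 1)`. Note that `minimize` is the only constructor that can fail to terminate, which is exactly where partiality enters the class.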

While some textbooks use the μ-operator as defined here, others demand that the μ-operator be applied to total functions $f$ only. Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's normal form theorem (see below). The only difference is that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total.

The strong equality operator $$\simeq$$ can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that
 * $$f(x_1,\ldots,x_k) \simeq g(x_1,\ldots,x_l)$$

holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined.
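Strong equality is not decidable in general, since undefinedness means non-termination. On a finite sample of arguments, and modelling "undefined" as a raised exception rather than a genuine infinite loop, the comparison can be sketched in Python (names are my own):

```python
UNDEF = object()  # sentinel standing for "undefined"

def evaluate(f, args):
    """Evaluate a partial function, mapping failure to UNDEF.

    In the theory, undefinedness is non-termination; here we model it
    as a raised exception so that the check itself always terminates.
    """
    try:
        return f(*args)
    except Exception:
        return UNDEF

def strongly_equal_on(f, g, samples):
    """Necessary (not sufficient) test for f ≃ g: on each sample tuple,
    either both sides are undefined, or both are defined and equal."""
    return all(evaluate(f, s) == evaluate(g, s) for s in samples)
```

For instance, `lambda x: 1 // x` and any other function undefined exactly at 0 and agreeing elsewhere pass this check, while a total function differing at 0 fails it.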

Examples
Examples not involving the minimization operator can be found at Primitive recursive function.

Some functions can be defined using the minimization operator merely for convenience; they could also be defined without it, albeit in a more complicated way, since they are primitive recursive.

Other general recursive functions, such as the Ackermann function, are not primitive recursive; their definitions within this formalism cannot avoid the minimization operator.
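As an illustration of dispensable minimization, the integer square root (a primitive recursive function) can be obtained by unbounded search. In this Python sketch (helper names are my own), the predicate "(z+1)² > n" is encoded as a function that returns 0 exactly when the predicate holds, matching the μ-operator's convention:

```python
def mu(f, *xs):
    """Unbounded search: smallest z with f(z, *xs) == 0.

    May fail to terminate if no such z exists; here the search always
    succeeds because (z+1)**2 eventually exceeds any fixed n.
    """
    z = 0
    while f(z, *xs) != 0:
        z += 1
    return z

def isqrt(n):
    """floor(sqrt(n)) as μz.[(z+1)^2 > n], the least z whose successor
    squared overshoots n."""
    return mu(lambda z, n: 0 if (z + 1) ** 2 > n else 1, n)
```

Here the search is guaranteed to terminate, which is precisely why the same function could also be defined by bounded (primitive recursive) means.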

Total recursive function
A general recursive function is called a total recursive function if it is defined for every input or, equivalently, if it can be computed by a total Turing machine. There is no computable way to tell whether a given general recursive function is total – see Halting problem.

Equivalence with other models of computability
In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion as those do not provide a mechanism for "infinite loops" (undefined values).

Normal form theorem
A normal form theorem due to Kleene says that for each k there are primitive recursive functions $$U(y)\!$$ and $$T(y,e,x_1,\ldots,x_k)\!$$ such that for any μ-recursive function $$f(x_1,\ldots,x_k)\!$$ with k free variables there is an e such that
 * $$f(x_1,\ldots,x_k) \simeq U(\mu(T)(e,x_1,\ldots,x_k))$$.

The number e is called an index or Gödel number for the function f. A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function.

Minsky observes that the $$U$$ defined above is in essence the μ-recursive equivalent of the universal Turing machine: "To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. To construct U directly would involve essentially the same amount of effort, and essentially the same ideas, as we have invested in constructing the universal Turing machine."

Symbolism
A number of different symbolisms are used in the literature. An advantage of using a symbolism is that a derivation of a function obtained by "nesting" the operators one inside another is easier to write in a compact form. In the following, the string of parameters x1, ..., xn is abbreviated as x:
 * Constant function: Kleene uses " $C_q^n(\mathbf{x}) = q$ " and Boolos-Burgess-Jeffrey (2002) (B-B-J) use the abbreviation " $\mathrm{const}_n(\mathbf{x}) = n$ ":
 * e.g. $C_{13}^7(r, s, t, u, v, w, x) = 13$
 * e.g. $\mathrm{const}_{13}(r, s, t, u, v, w, x) = 13$


 * Successor function: Kleene uses x' and S for "Successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows:
 * S(a) = a + 1 =def a', where 1 =def 0', 2 =def 0'', etc.


 * Identity function: Kleene (1952) uses " $U_i^n$ " to indicate the identity function over the variables $x_i$; B-B-J use the identity function $\mathrm{id}_i^n$ over the variables $x_1$ to $x_n$:
 * $U_i^n(\mathbf{x}) = \mathrm{id}_i^n(\mathbf{x}) = x_i$
 * e.g. $U_3^7 = \mathrm{id}_3^7(r, s, t, u, v, w, x) = t$


 * Composition (Substitution) operator: Kleene uses a bold-face $\mathbf{S}_n^m$ (not to be confused with his S for "successor"!). The superscript "m" refers to the mth function "$f_m$", whereas the subscript "n" refers to the nth variable "$x_n$":
 * If we are given h(**x**) = g($f_1$(**x**), ..., $f_m$(**x**))
 * h(**x**) = $\mathbf{S}_n^m$(g, $f_1$, ..., $f_m$)


 * In a similar manner, but without the sub- and superscripts, B-B-J write:
 * h(x') = Cn[g, $f_1$, ..., $f_m$](x')


 * Primitive Recursion: Kleene uses the symbol " $R^n$(base step, induction step) " where n indicates the number of variables; B-B-J use " Pr(base step, induction step)(x) ". Given:
 * base step: h( 0, x )= f( x ), and
 * induction step: h( y+1, x ) = g( y, h(y, x),x )


 * Example: primitive recursion definition of a + b:
 * base step: f( 0, a ) = a = $U_1^1$(a)
 * induction step: f( b', a ) = ( f( b, a ) )' = g( b, f( b, a ), a ) = g( b, c, a ) = c' = S($U_2^3$( b, c, a ))
 * $R^2$ { $U_1^1$(a), S[ $U_2^3$( b, c, a ) ] }
 * Pr{ $U_1^1$(a), S[ $U_2^3$( b, c, a ) ] }

Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice reversal of variables a and b). He starts with 3 initial functions
 * S(a) = a'
 * $U_1^1$(a) = a
 * $U_2^3$( b, c, a ) = c
 * g(b, c, a) = S($U_2^3$( b, c, a )) = c'
 * base step: h( 0, a ) = $U_1^1$(a)
 * induction step: h( b', a ) = g( b, h( b, a ), a )

He arrives at:
 * a + b = $R^2$[ $U_1^1$, $\mathbf{S}_1^3$(S, $U_2^3$) ]

See also

 * Fibonacci number
 * McCarthy 91 function