Implicit computational complexity

Implicit computational complexity (ICC) is a subfield of computational complexity theory that characterizes algorithms by constraints on the way in which they are constructed, without reference to a specific underlying machine model or to explicit bounds on computational resources, in contrast with conventional complexity theory. ICC was developed in the 1990s and employs the techniques of proof theory, substructural logic, model theory and recursion theory to prove bounds on the expressive power of high-level formal languages. ICC is also concerned with the practical realization of functional programming languages, language tools and type theories that can control the resource usage of programs in a formally verifiable sense.

Implicit representations of polynomial time
The important complexity classes of polynomial-time decidable sets (the class P) and polynomial-time computable functions (FP) have received much attention and possess several implicit representations. Two examples follow.

Cons-free programs
Jones defined a programming language that can solve decision problems where the input is a binary string, and showed that a problem can be decided in this language if and only if it is in P. The language can be briefly described as follows. It receives the input as a list of bits. It has variables that can point into the list and advance along it by applying the "tail" operation. It can define recursive functions, but has no higher-order functions. Crucially, it has no data-type constructors (hence the name cons-free): the input list is the one and only data structure throughout the program. The lack of constructors limits the computational power of the language, so it is unsurprising that it cannot decide all decidable problems; its interest for implicit computational complexity lies in the fact that it decides exactly P, and this holds independently of the running time of the programs, which can be exponential. Interestingly, Jones also showed that if non-determinism is added to the language (as in a nondeterministic Turing machine), the class of problems that can be accepted is still P.
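The cons-free discipline can be sketched in Python (an illustration of the restriction, not Jones's concrete syntax; the slice `rest[1:]` stands in for the pointer-advancing "tail" operation, and the Boolean flag is a convenience that the original language would encode differently):

```python
def parity(bits):
    """Decide whether the input bit list contains an odd number of 1s.

    Cons-free discipline: the input list is the only data structure.
    The program only tests the head bit and moves forward with "tail";
    it never constructs a new list.
    """
    def go(rest, odd):
        if not rest:                  # reached the end of the input
            return odd
        if rest[0] == 1:              # inspect the head bit
            return go(rest[1:], not odd)   # "tail": advance the pointer
        return go(rest[1:], odd)
    return go(list(bits), False)
```

The decision problem shown (parity) is in P, as Jones's theorem requires; programs in this style may still run for exponentially many steps without leaving P.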

Function algebras with recursion on notation
Bellantoni and Cook showed that a certain class of functions coincides with FP. These functions are defined, like the primitive recursive functions, from a set of base functions by operators that construct new functions from existing ones. A special recursion scheme is used instead of the primitive recursion scheme, as will be seen below, and in addition the functions have their arguments partitioned into two "sorts". This is denoted by separating the arguments with a semicolon: $$f(x_1,\dots,x_k; x_{k+1},\dots,x_{k+n})$$. The arguments following the semicolon are called safe (a more intuitive name might be "protected"). When a value is passed in a safe position, it is not allowed to grow too much; note the difference between clauses (3) and (4) below. Another important difference from the definition of primitive recursive functions is that here the arguments are treated as binary strings, and a value is increased by appending a bit (x0 or x1) rather than by the numeric successor function (x').

Here is the list of basic functions:
1. empty string: $$\varepsilon$$ (a zero-ary function)
2. projections: $$\pi_i^{k,n}(x_1, \ldots, x_k; x_{k+1}, \ldots, x_{k+n}) = x_i,$$ for each $$1 \le i \le k+n$$
3. normal binary successors: $$S_i(x;) = x i, \ i \in \{ 0, 1 \}$$
4. bounded safe binary successors: $$S_i(z; x) = \left\{\begin{array}{ll} x i & \text{if }|x|<|z|\\ x & \text{otherwise} \end{array} \right.$$
5. binary predecessor: $$P(\varepsilon) = \varepsilon,\ P(x i) = x$$
6. numerical predecessor: $$p(\varepsilon) = \varepsilon,\ p(x') = x$$
7. conditional: $$Q(\varepsilon, y, z_0, z_1) = y,\ Q(x i, y, z_0, z_1) = z_i$$
8. tally product: $$\times(x, y;) = 1^{|x|\times |y|}$$.
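As an illustration, most of the basic functions can be encoded in Python over strings of '0'/'1' characters (an encoding assumption of this sketch; the projections, which merely select an argument, and the numerical predecessor are omitted for brevity):

```python
# Illustrative Python encodings of the basic functions.
# Binary strings are represented as Python strings of '0'/'1'.

EPS = ""                                  # the empty string, epsilon

def S(i, x):                              # normal binary successor S_i(x;)
    return x + str(i)

def S_safe(i, z, x):                      # bounded safe successor S_i(z; x)
    return x + str(i) if len(x) < len(z) else x

def P(x):                                 # binary predecessor:
    return x[:-1]                         # P(eps) = eps, P(x i) = x

def Q(x, y, z0, z1):                      # conditional on the last bit of x
    if x == EPS:
        return y
    return z0 if x[-1] == "0" else z1

def tally_product(x, y):                  # 1^(|x| * |y|)
    return "1" * (len(x) * len(y))
```

Note how the two successor clauses differ: the normal one always appends a bit, while the safe one refuses to grow its safe argument beyond the length of the bound `z`.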

We can combine functions to form new ones using a composition scheme and a recursion scheme. Given $$g, \vec r, \vec s$$, their predicative composition, $$f = g\circ(\vec r; \vec s)$$ is defined by $$f(\vec x; \vec y) = g(\vec r(\vec x;); \vec s(\vec x; \vec y)).$$ Given $$g, h_0, h_1$$, the predicative recursion on notation scheme defines a function $$f$$ by $$\begin{align} f(\varepsilon, \vec x; \vec y) &= g(\vec x; \vec y),\\ f(z i, \vec x; \vec y) &= h_i(z, \vec x; \vec y, f(z, \vec x; \vec y)). \end{align}$$ It is called "recursion on notation" because each recursive call strips one bit off the recursion argument, in contrast with recursion "on value", which goes from $z$ to $z-1$.
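The two schemes can be sketched as Python higher-order functions, with binary strings as Python strings of '0'/'1' and the normal and safe arguments passed as two separate tuples (an encoding choice of this sketch):

```python
# Sketch of the Bellantoni-Cook schemes as Python combinators.
# A function takes (normal, safe): two tuples of binary strings.

def compose(g, rs, ss):
    """Predicative composition f = g o (r1..rm; s1..sn):
       f(x; y) = g(r1(x;), ..., rm(x;); s1(x; y), ..., sn(x; y)).
       The r's see only the normal arguments."""
    def f(normal, safe):
        return g(tuple(r(normal, ()) for r in rs),
                 tuple(s(normal, safe) for s in ss))
    return f

def rec_on_notation(g, h0, h1):
    """Predicative recursion on notation:
       f(eps, x; y) = g(x; y)
       f(z i, x; y) = h_i(z, x; y, f(z, x; y)).
       The recursion argument z is normal[0]; each call strips its last bit,
       and the recursive result is passed to h_i in a SAFE position."""
    def f(normal, safe):
        z, rest = normal[0], normal[1:]
        if z == "":
            return g(rest, safe)
        h = h0 if z[-1] == "0" else h1
        prev = f((z[:-1],) + rest, safe)
        return h((z[:-1],) + rest, safe + (prev,))
    return f
```

Placing the recursive result in a safe position is the heart of the restriction: `h_i` can only modify it with bounded safe successors, so the output cannot blow up across the recursion.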

Example. We define a function $$g(x)$$ that receives a binary string $x$ and returns a string of 0s of the same length as $x$. For readability we omit invocations of the projection functions which are technically necessary to retrieve function arguments, e.g., $$\pi^1_1(x)$$ to get $x$ in function $$g(x)$$. $$\begin{align} g(x) &= f(x, x;), \text{where}\\ f(\varepsilon, x; ) &= \varepsilon,\\ f(z i, x; ) &= S_0(x; f(z, x;)). \end{align}$$ Note how we need to initially keep a copy of $x$ in order to be able to apply the bounded binary successor operator in the recursion.
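A direct Python rendering of this example might look as follows (an illustrative sketch, with the recursion on notation unrolled by hand rather than built from the schemes, and binary strings encoded as Python strings of '0'/'1'):

```python
def S0_safe(z, x):
    # bounded safe binary successor S_0(z; x): append a 0 only
    # while x is shorter than the bound z
    return x + "0" if len(x) < len(z) else x

def f(z, x):
    # f(eps, x;) = eps;  f(z i, x;) = S_0(x; f(z, x;))
    if z == "":
        return ""
    return S0_safe(x, f(z[:-1], x))

def g(x):
    # g(x) = f(x, x;): the duplicated x supplies the length bound
    return f(x, x)
```

Running `g` on "101" yields "000": the recursion performs one step per bit of its first argument, and the copy of $x$ in the second argument caps the length of the result.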

Other classes
Implicit representations have been found for many complexity classes, including the hierarchy of time classes P, EXPTIME, 2-EXPTIME, ... and the space classes L, PSPACE, EXPSPACE, ...; as well as the classes of the hierarchy DTIME(O(n)), DSPACE(O(n)), DTIME($$O(2^n)$$), DSPACE($$O(2^n)$$), ... . For most classes, several alternative representations are known.