User:Likebox/Gödel modern proof

Early versions of the proof below were rejected from Gödel's incompleteness theorems. I hope they will find a home either there or on more technical pages.

Computational Proof
The theorems talk about a formal system S, which is sufficiently strong. The precise assumptions on S are as follows:

1. S is capable of stating any theorem about the memory state of a computer program. Each clock cycle of a computer takes a certain memory state M to M'=f(M), where the memory contents M can be viewed as a big integer and f is a simple computable function which performs one processor step. The theorems that S should be able to state are of the form "for all N, the k-th bit of f iterated N times on M is not zero".

2. S will prove that the value of f(f(f(...f(M))))=M', where M is any initial memory state and M' is the state after a finite number N of steps. For the conclusions of the corollary, it is also necessary to assume that S can prove that property 2 holds of S itself: S needs to prove that S proves these theorems for all N and some special M. In arithmetic this holds when S has at least enough mathematical induction to prove theorems with one (forall X) in front.

3. S is consistent, meaning it never proves a statement and its negation.
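As a toy illustration of the statement form in assumption 1, the bigint memory can be shrunk to a single machine word, with a hypothetical step function f standing in for one processor cycle. The statement "for all N, bit 0 of f iterated N times on 1 is not zero" is the kind of theorem S must be able to state (here it is true, since adding 2 preserves oddness):

```c
/* Toy model of assumption 1: memory is one unsigned long instead of a
   bigint, and f is a hypothetical one-step function (one clock cycle). */
static unsigned long f(unsigned long M) { return M + 2; }

/* bit(M, k) extracts the k-th bit of the memory word M. */
static int bit(unsigned long M, int k) { return (M >> k) & 1; }
```

Iterating f from M=1 and checking bit 0 at every step verifies finitely many instances of the universal statement; S must be able to state (though not necessarily prove) the full "for all N" version.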

Theorem 1: There are true theorems about the asymptotic behavior of computer programs that S cannot prove.


 * Proof: construct the computer program DEDUCE to print its own code into a variable R, then deduce all consequences of S, looking for the theorem "R does not halt". If it finds this theorem, it halts.


 * If S proves that DEDUCE does not halt, DEDUCE halts. By assumption, S eventually proves all theorems about the finite time behavior of computer programs, so S would also eventually prove that DEDUCE halts, so S would be inconsistent.


 * So DEDUCE does not halt and S does not prove it.

Corollary: S cannot prove its own consistency.


 * Proof: If S proves that DEDUCE does not halt, the previous argument shows that S is inconsistent. So if S proves itself consistent, and if the previous argument can be formalized in S, then S also proves that DEDUCE does not halt, which is impossible.

Theorem 2: S is incomplete, meaning there is a statement that it cannot prove or disprove.


 * Proof: construct the program ROSSER to print its own code into a variable R, then deduce all consequences of S, looking for either the theorem 1. "R does not print anything out" or 2. "R prints something out". If it finds 1, ROSSER prints "Hello" and halts. If it finds 2, ROSSER halts without printing anything out.

Earlier Version
Gödel proved that there is an explicit algorithm to find all the consequences of a set of axioms. This algorithm can, in modern language, be represented as a computer program running on an infinite memory computer. Using the modern notions of computability, it is then easy to prove Gödel's incompleteness theorem from basic ideas in computer science.

1. Quining-- any computer program can include a subroutine that prints out the entire program's code. This allows any program to write itself into a string variable, or a large enough bigint.

2. The Halting problem: There does not exist a computer program PREDICT(P) which takes the code to program P and predicts whether P eventually halts.


 * proof: Write program SPITE, which prints itself into variable R, then calculates PREDICT(R). If the answer is "R halts", SPITE goes into an infinite loop. If the answer is "R does not halt", SPITE halts. Since R is really SPITE in disguise, no matter what PREDICT says, the answer is wrong.

The incompleteness theorem: Suppose that an axiom system describes integers (or any other discrete structure), and that it has enough operations (addition and multiplication are sufficient) to describe the working of a computer. This means that the axioms will eventually prove all theorems of the form "Starting with memory contents X, after N steps of running, the memory of the computer will be Y", for any fixed integers X and N, and can state propositions of the form "For all N, the contents of the memory will have property P", where P is some fixed definite property, like "the first bit in memory is zero" or "the tenth bit is different from the third bit". Such an axiom system is either inconsistent or incomplete.
 * proof: Write DEDUCE. DEDUCE first prints its own code into the variable R, then it deduces all consequences of the axiom system. It searches for the theorem R never halts. Only if DEDUCE finds this theorem does it halt.


 * If the axiom system proves that DEDUCE doesn't halt, it is inconsistent, because DEDUCE then halts, and the axioms will also prove that it halts. So if the system is consistent, DEDUCE doesn't halt and the axioms cannot prove it.


 * This argument doesn't explicitly demonstrate the incompleteness, because a consistent axiom system could still prove the $\omega$-inconsistent theorem that DEDUCE halts (even though it doesn't) without contradiction.


 * So write ROSSER: ROSSER prints its code into R, and searches deductions for either 1. R prints something out or 2. R never prints anything out. If it finds 1, it halts without printing anything. If it finds 2, it prints "Hello World!" to the screen and halts.


 * If the axiom system is consistent, it can prove neither that ROSSER eventually prints something nor the negation, that ROSSER does not print anything. So whatever its conclusions about DEDUCE, the axiom system is incomplete.

To prove the second incompleteness theorem, note that if the axioms are consistent, it is easy to prove that DEDUCE does not halt. This means that if a consistent axiom system proves its own consistency then it will also prove that DEDUCE does not halt, which is impossible.

Recursion Theory
Below is a short, incomplete discussion of some essential results of recursion theory. The point is to contribute simple proofs of key results to the following pages, which are either non-existent or mathematically imprecise.


 * 1) Rosser-Gödel theorem
 * 2) Degree theory
 * 3) Halting Problem
 * 4) Priority method
 * 5) Lengths of Proofs
 * 6) Recursion theory
 * 7) Minimal degree

The language is that of computer scientists.

Recursion Theory
Recursion theory is the study of computer programs on an idealized computer with potentially infinite memory. The computer programs should be thought of as written in C on a Random Access Machine, with pointers to memory locations, as in a real computer. To describe a computer with potentially infinite memory, you can imagine that there is a primitive processor instruction MOORE, which doubles the size of pointers and enlarges the addressable memory on demand.

A Random Access Machine can be simulated by a Turing machine, at the cost of increasing the running time of a program. The crucial difference is that a Turing machine must scan its memory tape sequentially, while a Random Access Machine can jump from point to point.

Quines
One of the more important operations in recursion theory is printing your own code. A program that does this is called a quine. In C, a simple quine subroutine is

void print_me(void){char*s="void print_me(void){char*s=%c%s%c;printf(s,34,s,34);}";printf(s,34,s,34);}

This immediately implies that any C program can append a subroutine to print itself out. The proof is by construction:


 * OLD CODE
 * print_me{A = " OLD CODE ";
 * S = "print_me{A = %c%s%c;S=%c%s%c;puts(A);printf(S,34,A,34,34,S,34);}";
 * puts(A);printf(S,34,A,34,34,S,34);}

For simplicity, the old code was assumed quote-free. If it has quotes, replace the quotes with %c in the block, and replace the puts(A) with a printf(A,34...), with as many 34's as there are quotes.

Undecidable Problems
A decision problem is a question about computer programs. A computer program can also be viewed as a big integer, its code in binary, so a decision problem is equivalently a question about integers. The problem is to determine whether the computer program has some property.

For many properties, the question of whether the program has the property or does not have the property is undecidable, meaning that it is impossible to write a computer program to find the answer.

Halting Problem
The most famous example is when the property is "the computer program halts". If x is a bigint, the program


 * x=1; while(x>0) { x=x+1;};

doesn't halt; it goes on forever in an infinite loop, while the program


 * printf("hello world");

halts after one line.

It is impossible to write a computer program which tells you which programs halt and which ones do not.


 * Proof: For contradiction, assume PREDICT(R) is a program that takes any other program R and says whether or not it halts.


 * Write SPITE to print its own code into the variable R, and calculate PREDICT(R). If PREDICT says R doesn't halt, SPITE halts. If PREDICT says R halts, SPITE goes into an infinite loop. Either way, PREDICT is wrong about R, which is SPITE itself, a contradiction.
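The diagonal step can be sketched in C against any candidate predictor. Here predict() is a hypothetical stand-in whose particular guesses do not matter; the point is that spite, handed its own code, does the opposite of whatever predict says about that code:

```c
/* Toy version of the SPITE diagonal. predict() is ANY candidate halting
   predictor, returning 1 for "halts" and 0 for "does not halt"; the
   guess shown here is arbitrary. */
static int predict(int code) { return code % 2; }

/* spite_halts reports what SPITE actually does when handed its own code:
   the opposite of what predict() says about that code. */
static int spite_halts(int own_code) { return !predict(own_code); }
```

Whatever function is substituted for predict(), spite_halts disagrees with it on every input, which is the content of the proof.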

Similarly, it is impossible to write a program to decide any of the following questions:


 * 1) Determine if a program returns 0 or 1.
 * 2) Determine if a program examines its input.
 * 3) Determine if a program runs in polynomial or exponential time on its input.
 * 4) Determine if a program prints anything to the screen.

The halting set K is the set of all integers which are the ASCII for the computer programs that halt. To eventually determine which programs are in K, write a program HALT to list all programs and, as each one comes out, run it on a separate thread. Keep track of which threads halt. This will produce a list of the programs in K, but it cannot produce a list of programs not in K. To correctly list all the programs that do not halt, the program must run for an infinite amount of time.
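The dovetailing strategy behind HALT can be sketched with a toy machine standing in for real programs. Here "program n" is the Collatz iteration started at n, which "halts" when it reaches 1; the choice is purely illustrative. At stage t, every program n ≤ t is run for t steps, so each halting program is eventually listed while non-halting programs never block the enumeration:

```c
/* One clock cycle of the toy machine: the Collatz step. */
static unsigned long step(unsigned long m) { return (m % 2) ? 3*m + 1 : m/2; }

/* Does toy program n halt (reach 1) within t steps? */
static int halts_within(unsigned long n, int t) {
    unsigned long m = n;
    for (int i = 0; i < t; i++) {
        if (m == 1) return 1;
        m = step(m);
    }
    return m == 1;
}

/* Dovetail: at stage t, run every program n <= t for t steps, marking
   found[n] = 1 when program n is observed to halt. */
static void dovetail(int found[], int max_n, int max_t) {
    for (int n = 0; n <= max_n; n++) found[n] = 0;
    for (int t = 1; t <= max_t; t++)
        for (int n = 1; n <= max_n && n <= t; n++)
            if (halts_within(n, t)) found[n] = 1;
}
```

As in the text, this produces the list of halting programs but could never certify membership in the complement: a program absent from the list so far may still halt at a later stage.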

Computable Functions
A computable function is a subroutine which takes an integer and returns an integer, where an integer can be arbitrarily large, a bigint, not a finite width int. Every subroutine defines a map, or mathematical function, from the integers to the integers. The C declaration of a computable function F is


 * integer F(integer x);

A computable function is total if it returns a value for every argument. It is partial if it only returns a value for certain arguments and for others it goes into an infinite loop. The collection of all total functions is countable, since there are only countably many computer programs. Cantor's diagonal argument then explicitly produces a predicative description of a non-computable function.

The non-computable function described by Cantor is the function F(R) which is defined over all R which are the ASCII description of a total computable function. The value of F(R) is defined as:


 * F(R) = R(R)+1 when R is the ASCII for a total function.
 * F(R) = 0      otherwise

It is easy to see that F(R) is not computable, because if there were a computable function S which described F, whose ASCII code will also be called S, the value of F at every n would be S(n). The value of F at S would therefore have to be S(S), but it is S(S)+1 by construction.
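A program can carry out the diagonal construction over any finite table of total functions; diag differs from the j-th function at the argument j, so it lies outside the table. The obstruction to computability appears only when the table is required to enumerate all total computable functions. A sketch, with a hypothetical three-entry table:

```c
/* A finite table of total functions, standing in for an enumeration. */
typedef long (*total_fn)(long);
static long f0(long x) { return x; }
static long f1(long x) { return x * x; }
static long f2(long x) { return 2 * x + 7; }
static total_fn table[] = { f0, f1, f2 };

/* The diagonal F(R) = R(R) + 1: differs from table[j] at the input j. */
static long diag(long i) { return table[i](i) + 1; }
```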

This type of construction might naively seem to be translatable into computer code for calculating F. It isn't, precisely because determining when R is the ASCII code for a total function requires knowing whether R halts for every value of its argument, which is undecidable. The undecidability theorem was first discovered by A. Turing by a close examination of the effective content of Cantor's diagonal argument applied to all computable real numbers.

The values of the function F(R), which is defined by a predicative description, cannot be calculated by a computer program.

It is possible to express any computer program as a computable function, since the memory of the entire computer can be thought of as an integer, and the program itself a suitable modification of that integer. In principle then, none of these results require explicit code in C. All computable functions can be defined by iterating and composing more primitive functions. This is the traditional way to express recursion theory arguments.

Running Time
The proof of undecidability provides a running time bound on programs which halt. There are programs of length M bytes which only halt after running for a time which is longer than any computable function of M.


 * proof: Suppose that there were a computable function F(M) which has the property that F(M) is bigger than the running time of any program of length M. This allows a program to solve the halting problem. Modify HALT to run each program of length M for a time F(M) to see if it halts. Since F(M) is greater than the time to halting, this solves the halting problem and this is a contradiction.

This proof can be turned into an explicit construction of a program which violates any assumed running time bound.


 * construction: Given any computable function F, write a program which prints its own code into a variable R, computes its own length M from R, and runs for F(M)+1 steps before halting.
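A minimal sketch of this construction, with a hypothetical computable bound F; the self-printing step is elided, and the program is simply handed its own length M:

```c
/* Any computable running-time bound; F(m) = m*m is a stand-in. */
static long F(long m) { return m * m; }

/* The construction: count to F(M) + 1 before halting, so the running
   time exceeds the assumed bound F applied to the program's length M. */
static long run_past_bound(long own_length) {
    long steps = 0;
    for (long i = 0; i <= F(own_length); i++) steps++;  /* F(M)+1 iterations */
    return steps;
}
```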

This bound was first formulated by Gödel as a theorem about lengths of proofs in an axiomatic system.

Relation With Gödel's Incompleteness Theorem
Gödel's incompleteness theorem is about axiomatic systems in first-order logic. Since axiomatic systems can be identified with the programs that list all deductions of these systems, the theorems can be alternatively stated as theorems about deduction programs.

The precise assumptions on the axiom system S are the following:


 * 1) S proves every calculational theorem. If a computer is constructed which takes the memory contents M to F(M) at every clock cycle, where F is a simple function that takes the memory one step in time, S must prove all true theorems of the form $$F^N(M)=M'$$. Alternatively and equivalently, S eventually proves any theorem that declares the correct value of any computable function at any argument.
 * 2) S can state theorems about the memory contents of a computer program after arbitrarily many steps: theorems of the form $$\forall_N F^N(M)_k \ne 0 $$, where the subscript means k-th bit, asserting that the k-th bit of memory, starting from a specified initial memory state M, never becomes zero.
 * 3) S is consistent, meaning it never proves both a statement and its negation.


 * Gödel's incompleteness theorem: There is a true theorem about computer programs which S cannot prove.


 * proof: Construct DEDUCE. DEDUCE lists all consequences of the axioms one after the other, looking for the theorem that DEDUCE does not halt. If it finds this theorem, it halts.


 * S cannot prove that DEDUCE does not halt, since if it did, DEDUCE would halt, and S would prove that too by property 1.


 * Gödel's incompleteness theorem: (Gödel/Rosser) S is incomplete. There is a proposition P which S can neither prove nor disprove.


 * proof: Construct ROSSER to list all consequences of the axioms, looking for either P: "ROSSER prints something out" or notP: "ROSSER does not print anything out". If it finds P, it halts without printing anything. If it finds notP, it halts after printing "Hello World!" to the screen.


 * By construction, a consistent axiom system can prove neither P nor notP.

Assuming that the axioms are consistent, notP is the true option--- ROSSER is not going to halt, so it isn't going to print anything out. Likewise, if S is consistent DEDUCE does not halt.

This immediately suggests a corollary:


 * Corollary: S cannot prove its own consistency.


 * proof: If S is consistent, DEDUCE does not halt, by the previous argument. So if the previous argument can be formalized within S and S proved its own consistency, it would then prove that DEDUCE does not halt, which is impossible.

The corollary requires an additional assumption, which is that the argument can be formalized within S. This requires that:


 * 4. S can prove property 1 about itself.

Property 4 is satisfied in arithmetic with induction limited to statements with one quantifier.

Lengths of Proofs
The running time bound proved for programs translates into a statement about the lengths of proofs. Gödel established two theorems about proof-lengths in 1934.

For any theorem P, define L(P) as the length of the shortest proof of P. Overloading the notation, define L(m) to also be the largest value of L(P) over provable statements P whose ASCII is m bytes or less. L(m) is the length of the shortest proof of that statement of length at most m which is hardest to prove.
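In symbols, writing |P| for the length of the statement P:

```latex
L(m) \;=\; \max \{\, L(P) \;:\; P \text{ is provable},\ |P| \le m \,\}
```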

A theorem whose statement is short but whose shortest proof is long will be called hard. The growth of L(m) (or of L(m)-m, or L(m)/m, or any other minor computable modification) measures how hard a theorem of length m can be.


 * Theorem: (Gödel) L(m) grows faster than any computable function.


 * Proof: Assume that L(m)<F(m) where F is computable. Then the problem of deciding which propositions are provable is solved by enumerating all proofs of length <F(m). This also solves the halting problem, since among the provable theorems are the statements "P halts" for all halting programs P.

This proof can be turned into an explicit construction of a theorem of S with a very long proof.


 * Construction: Consider NDEDUCE. NDEDUCE searches all proofs of length <F(N), looking for a proof of the theorem "NDEDUCE returns 0". If it finds such a proof, it returns 1. If it doesn't, it returns 0. The value of N is the length of the statement "NDEDUCE returns 0", which NDEDUCE constructs by first printing its own code.

S will prove that NDEDUCE returns zero, but if S is consistent, S cannot prove NDEDUCE returns 0 with a proof which is shorter than F(N) bytes, where N is the length of the statement.

This theorem establishes rigorously that there are theorems which are easy to state but arbitrarily hard to prove in any mathematical system.


 * Corollary: (Gödel) There are theorems whose proofs in S+consis(S) are shorter than their proofs in S by an amount greater than any computable function.


 * Proof:The explicit construction shows that the theorem NDEDUCE returns 0 only has a very long proof in S, but it can be proven by a short proof if the consistency of S is assumed.

This corollary is a special case of the second Gödel theorem on the length of proofs.

Assume that S has an extension to S+A, where A is an additional axiom. Suppose that S+A is stronger than S, meaning that there is at least one program LOOP for which S+A proves LOOP does not halt but S does not.


 * Construction: Consider HALT_TEST(N), which emulates LOOP for N clock cycles, testing to see if it halts. If LOOP halts before N cycles elapse, HALT_TEST returns 0, otherwise it returns 1.

Note that:
 * 1) In system S, the length of the proof of the statement HALT_TEST(N) returns 1 grows with N.
 * 2) In S+A the length of the proof is almost constant, since S+A proves that HALT_TEST(N) returns 1 for all N.


 * proof: (this was faulty)


 * construction: The method of proof defines a computable function F(N) which has the property that the length of the proof of HALT_TEST(F(N)) returns 1 is greater than N.


 * Theorem: (Gödel) There are theorems of S whose proof is shorter in S+A by an amount greater than the value of any computable function.


 * Proof: Consider HALT_TEST(F(G(N))), which is HALT_TEST(F(N)) run for a computable G(N) steps, where N is the length of HALT_TEST. This program returns 1, and the length of the proof in S is of order G(N). The length of the proof in S+A is the length of the code of G plus a constant.

The interpretation of this theorem is that adding stronger axioms can simplify proofs of already proven theorems by an arbitrarily great amount. With better axioms, many hard theorems become easy.

Turing Degrees
A Turing degree is defined by adding an idealized infinite CD-ROM to the computer. Such a CD-ROM is called an oracle. No matter what CD-ROM is given, the halting problem with the CD-ROM is still undecidable, as long as SPITE has the same access to the CD-ROM as PREDICT.

The blank oracle, which contains no data, is called 0. The oracle which contains a list of all the halting programs is called the halting oracle, and it is called 0'. When given access to 0', the halting problem for programs with oracle 0 is solved, but the halting problem for programs with oracle 0' is not.

Each oracle defines a Turing degree. Two oracles are called Turing equivalent if the entire information on one CD-ROM can be printed out using the second one, and vice versa. If the Turing degree A can be printed out from the Turing degree B, then $$\scriptstyle A\le B$$: A is weaker than B, or A is Turing reducible to B; the inequality is strict when B cannot in turn be printed out from A.

The set of all oracles under Turing equivalence forms the collection of Turing degrees, and these are a partially ordered set of cardinality equal to that of the continuum. Given two Turing degrees A and B, one can interleave the CD-ROMs of the two to form the join $$\scriptstyle A\oplus B$$.
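The join can be sketched directly on oracles represented as bit arrays: bit n of A goes to position 2n of the joined oracle, and bit n of B to position 2n+1, so either CD-ROM can be printed back out from the joined one.

```c
/* The join of two oracles A and B: even positions hold A's bits, odd
   positions hold B's bits, so both oracles are recoverable from the join. */
static int join_bit(const int *A, const int *B, int k) {
    return (k % 2 == 0) ? A[k / 2] : B[k / 2];
}
```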

Despite the fact that the Turing degrees are essentially just the set of real numbers, it is traditional to refer to the Turing degrees as a class. This distinction is entirely philosophical. Since the set of all undecidable problems is included in the set of all real numbers, no finite axiom system could ever properly describe all the properties of an arbitrary real number, let alone the set of all real numbers. Even though it is traditional in other branches of mathematics to treat the real numbers as a set, the real numbers in recursion theory are respectfully elevated to a proper class.

Reducibility
Sometimes the undecidable problem B can be solved given the answer to another undecidable problem A. When this happens, B is reducible to A. The most general notion of reducibility is Turing reducibility, which means that the problem B can be solved once an unbounded number of answers to questions of type A are given. But there are stronger, more restrictive notions of reducibility.


 * 1) If the answer to B is the A oracle's answer to exactly one computed question, returned unchanged, B is many-one (m-) reducible to A; if distinct inputs always produce distinct questions, B is one-one reducible to A.


 * 2) If the answer to B can be computed with a bounded number of questions to the A oracle, B is said to be bounded-Turing reducible to A.