User:Likebox/Sandbox

=Proposed Continuum Hypothesis Page=

In mathematics, the continuum hypothesis (CH) is the assertion that there is no infinite set whose size lies strictly between the size of the integers and the size of the real numbers: any infinite set which is too big to be matched one-to-one with the integers has at least as many elements as the set of all real numbers. It is often written formally as

$$2^{\aleph_0} = \aleph_1.$$

The generalized continuum hypothesis (GCH) asserts that, for any infinite set S, the smallest set which is too big to be matched one-to-one with S can be matched one-to-one with the set of all subsets of S. It is written

$$2^{\aleph_\alpha} = \aleph_{\alpha+1}$$ for all ordinals $$\alpha$$.

The continuum hypothesis was formulated by Georg Cantor as a conjecture in his theory of ordinal and cardinal infinities. David Hilbert listed it as the first of his 23 problems for 20th century mathematics.

In 1940, Kurt Gödel, extending earlier ideas of Hilbert, proved that it is consistent with standard set theory that GCH is true.

In 1963, Paul Cohen proved that it is also consistent with standard set theory that CH is false. Cohen's method made it clear that CH and GCH remain unprovable in any extension of set theory obtained by adding only axioms which assert the consistency of previous axioms.

Because CH is independent of the standard axiom systems, some mathematicians consider it an undecidable proposition with no definite truth value, and probabilistic intuition leads many of them to regard it as more likely false than true. Mathematicians with a more Platonist view consider it an open question, to be settled by new axioms.

In set theory and logic, Hilbert's problem remains an active topic of contemporary research (see Woodin 2001a).

==Introduction==
The continuum hypothesis was a conjecture of Georg Cantor about the possible sizes of infinite sets.

Cantor introduced two different measures to quantify how large infinite sets could be. The first is the ordinal number, which measures the size of a linearly ordered sequence by the number and type of limit points that it contains. The second is the cardinal number, which compares the size of sets using one-to-one maps.

Ordinals naturally form a linear hierarchy of ever increasing size, but all the infinite ordinals which can be reached by discrete steps of incrementing and accumulating have the same size considered as cardinal numbers; they are all countable. But it is possible to consider all of the countable ordinals together as one big ordinal, which Cantor called $$\aleph_1$$. Cantor noted that this set, the set of all countable ordinals, could not itself be countable.

When considering cardinal numbers, Cantor proved that the set of real numbers is uncountable too, but by a completely different argument.

So it was natural for him to ask whether the two uncountable sets are the same. Cantor asked whether the cardinality of the real numbers $$c$$ was equal to the cardinality of the set of all countable ordinals $$\aleph_1$$.

This is the continuum hypothesis. Cantor believed that it was true, and spent many years fruitlessly trying to prove it.

==Formal statement==
The cardinality of the integers is $$\aleph_0$$, and the cardinality of the real numbers is $$2^{\aleph_0}$$.

The continuum hypothesis says that there is no $$S$$ with
 * $$ \aleph_0 < |S| < 2^{\aleph_0}.$$

Assuming the axiom of choice, there is a smallest cardinal number $$\aleph_1$$ greater than $$\aleph_0$$, and the continuum hypothesis is in turn equivalent to the equality
 * $$2^{\aleph_0} = \aleph_1.$$

The generalized continuum hypothesis (GCH) says that $$2^{\aleph_\alpha} = \aleph_{\alpha+1}$$ for all ordinals $$\alpha$$.

Sierpiński proved that ZF + GCH implies the axiom of choice (AC), so choice and GCH are not entirely independent in ZF; there are no models of ZF in which GCH holds and AC fails.

Easton used forcing to prove Easton's theorem: it is consistent with ZFC for arbitrarily large cardinals $$\aleph_\alpha$$ to fail to satisfy $$2^{\aleph_\alpha} = \aleph_{\alpha + 1}$$.

==Implications of GCH for cardinal exponentiation==
For any infinite sets A and B, if there is an injection from A to B then there is an injection from subsets of A to subsets of B. Thus for any infinite cardinals A and B,
 * $$A < B \to 2^A \le 2^B$$.

If A and B are finite, the stronger inequality
 * $$A < B \to 2^A < 2^B \!$$

holds. GCH implies that this strict inequality holds for infinite cardinals as well as finite cardinals.

Although the Generalized Continuum Hypothesis refers to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation in all cases. It implies that $$\aleph_{\alpha}^{\aleph_{\beta}}$$ is:
 * $$\aleph_{\beta+1}$$ when $$\alpha \le \beta+1$$;
 * $$\aleph_{\alpha}$$ when $$\beta+1 < \alpha$$ and the exponent is less than the cofinality of the base; and
 * $$\aleph_{\alpha+1}$$ when $$\beta+1 < \alpha$$ and the exponent is greater than or equal to the cofinality of the base.
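These three cases can be condensed into a small calculator. The function below is an illustrative sketch written for this page, not anything standard: ordinal indices are restricted to non-negative integers, and the index of the cofinality of the base is supplied by the caller, since it cannot be computed from the index alone. (Every $$\aleph_n$$ with finite $$n \ge 1$$ is regular, so the third case only genuinely arises at limit indices such as $$\omega$$, which this integer encoding cannot express; it is exercised below with an artificial cofinality value.)

```python
def gch_power_index(alpha, beta, cf_alpha):
    """Return gamma with aleph_alpha ** aleph_beta == aleph_gamma, assuming GCH.

    alpha, beta: ordinal indices, restricted here to non-negative integers.
    cf_alpha: index of the cofinality of aleph_alpha, supplied by the caller.
    """
    if alpha <= beta + 1:
        return beta + 1        # exponent dominates the base
    elif beta < cf_alpha:
        return alpha           # exponent below the cofinality of the base
    else:
        return alpha + 1       # exponent at or above the cofinality

# aleph_1 ** aleph_1 = aleph_2 under GCH
print(gch_power_index(1, 1, 1))   # -> 2
# aleph_5 ** aleph_0 = aleph_5 (aleph_5 is regular, so its cf index is 5)
print(gch_power_index(5, 0, 5))   # -> 5
```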

==Impossibility of proof and disproof (in ZFC)==
The continuum hypothesis was the first problem on David Hilbert's list of important open questions, presented at the International Congress of Mathematicians in Paris in 1900. Axiomatic set theory had not yet been formulated at that point.

Kurt Gödel showed in 1940 that the continuum hypothesis (CH for short) cannot be disproved from Zermelo–Fraenkel set theory, even when the axiom of choice is adopted (ZFC). Paul Cohen showed in 1963 that CH cannot be proved from those same axioms either. Hence, CH is independent of ZFC.

The continuum hypothesis was not the first statement shown to be independent of ZFC. An immediate consequence of Gödel's incompleteness theorem, published in 1931, is that there is a formal statement expressing the consistency of ZFC which is independent of ZFC. This consistency statement is of a metamathematical, rather than purely mathematical, character. Gödel later advocated large cardinal axioms, set-theoretic statements which imply the consistency of ZFC; the weaker of these are widely regarded as plausible extensions of ZFC.

The continuum hypothesis and the axiom of choice were the first independently interesting mathematical statements shown to be independent of ZF set theory. These independence proofs were not completed until Paul Cohen developed forcing in the 1960s.

The continuum hypothesis is equivalent to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well.

==Arguments for and against CH==
Gödel eventually came to believe that CH is false, and that his proof that CH is consistent only shows that the Zermelo–Fraenkel axioms are defective. Gödel was a Platonist and therefore had no problem asserting the truth or falsehood of statements independently of their provability. He believed for many years that the continuum was exactly $$\aleph_2$$.

Cohen, though a formalist, also rejected CH. Since forcing showed that collections of randomly chosen real numbers could consistently be as large as any given ordinal, he came to regard the continuum as an extraordinarily rich set, as large as any ordinal that could be described from below.

In Gödel's universe L, every real number is given by a rule of construction, although the algorithms might run for any ordinal number of steps. In universes where the reals are constructed by rules, the continuum hypothesis holds. In Cohen's universes, none of the new real numbers are produced by any predicative rule, so denying the continuum hypothesis amounts to imagining that there are many real numbers which are random or arbitrary. Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH.

Recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH (Maddy 1988, p. 500).

Another viewpoint is that the naive conception of set is not specific enough to determine whether CH is true or false. This viewpoint is supported by the independence of CH from the axioms of ZFC, since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false.

At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement about probabilities. Freiling believes this axiom is "intuitively true", but others have disagreed. A more recent argument against CH, developed by W. Hugh Woodin, has attracted considerable attention since the year 2000 (Woodin 2001a, 2001b). It builds on determinacy, an infinitary principle about games which do not terminate.

==Philosophical Significance==
The continuum hypothesis is philosophically important because, if it is true, it implies that set theory is more powerful than geometry.

When proving certain theorems, a geometric argument can quickly prove an arithmetic result which is difficult or impossible to establish directly by induction. For example, using analytic continuation, de la Vallée Poussin and Hadamard proved that the average density of prime numbers near N is 1/ln(N).

It is difficult to take the geometry out of this proof. If the theorem is unpacked into its component pieces, it relies on Cauchy's theorem. Cauchy's theorem in turn is proven by chopping a geometric region into many pieces of ever smaller sizes. Chopping a geometrical region is a procedure which can produce structures of very high complexity, since squares can be chopped into smaller squares which can be chopped further. To reproduce this type of proof in a discrete setting requires discrete objects with comparable complexity.

Cantor himself was analyzing dissections of the interval when he began his investigations. He felt that he had isolated the essential new construction which the real number line provides, the key which allows more powerful theorems: the sequence of ordinals. Any collection of squares in the plane, or of intervals in the line, is described by an ordinal of some type.

Cantor was convinced that there was no impediment to going further, to producing extensions of geometry based on ever more infinite sets, and that this would prove theorems of greater complexity with greater ease. This was the seductive idea which made the hypothesis so important. If the continuum hypothesis is true, geometry is only $$\aleph_1$$. Who knows what new unexplored wonders lie at $$\aleph_{28}$$?

This intuition, rarely explicitly stated, was a central motivation for much of twentieth century mathematics.

==Set Theory==
In set theory, there are two different notions of infinite size.

===Ordinal numbers===
The sequence of ordinal numbers is defined by counting in two qualitatively different ways:

 * 1) Increase an ordinal by one unit.
 * 2) For any infinite increasing collection of ordinals, jump to the least ordinal which is bigger than all of its elements.

To start the induction, there must be a zero ordinal: 0. The first rule then forms the finite ordinals 1, 2, 3, 4, and so on. Once the whole numbers are constructed, the second rule allows infinite ordinals to form. The first infinite ordinal, bigger than all the integers, is called $$\omega$$; it is followed by $$\omega+1, \omega+2$$, and so on.
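The two rules can be modeled on the small initial segment of ordinals below $$\omega^2$$, writing each ordinal as $$\omega\cdot a + b$$. This is a toy sketch whose class and encoding are invented for illustration:

```python
from functools import total_ordering

@total_ordering
class Ordinal:
    """Ordinals below omega^2, encoded as omega*a + b with a, b >= 0."""
    def __init__(self, a=0, b=0):
        self.a, self.b = a, b
    def successor(self):
        # Rule 1: increase the ordinal by one unit.
        return Ordinal(self.a, self.b + 1)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)
    def __lt__(self, other):
        # omega*a + b < omega*a' + b' when a < a', or a == a' and b < b'.
        return (self.a, self.b) < (other.a, other.b)
    def __repr__(self):
        return f"omega*{self.a} + {self.b}"

# Rule 2: omega is the least ordinal above all the finite ones.
OMEGA = Ordinal(1, 0)

print(Ordinal(0, 1000000) < OMEGA)   # True: every finite ordinal is below omega
print(OMEGA.successor())             # omega*1 + 1, i.e. omega + 1
```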

Cantor's intuition about ordinals came from analyzing increasing discrete collections of real numbers. An increasing sequence of reals can accumulate at a limit point P. If there are further real numbers past the limit point, these numbers are bigger than all the numbers that precede them. The next numbers in the ordered sequence along the line might continue increasing to a second limit point P', and the numbers further along might accumulate at another limit point P''. The sequence of limit points might themselves have a limit point. The theory of ordinal numbers systematized the kinds of linear orderings that could occur.

The most important thing about ordinals is that they obey the principle of induction. If a statement S about ordinals has the property that:

 * 1) it is true for the ordinal 0, and
 * 2) whenever it is true for all ordinals less than $$\alpha$$, it is true for $$\alpha$$,

then S is true for every ordinal. This property makes ordinals a powerful generalization of the integers.

===Cardinal numbers===
The cardinal numbers came from comparing sets of different sizes. The notion of cardinal number was defined by a procedure of matching items one-to-one.

Two sets are defined to be equinumerous or of equal cardinality if each element of one set can be matched with a unique partner in the other set with no elements left over. Two finite sets are equinumerous when they have the same number of elements.

For infinite sets, however, cardinality is more subtle. The set of integers seems to be bigger than the set of even integers, because only half of all the integers are even. But the map
 * $$ 1\rightarrow 2 $$
 * $$ 2\rightarrow 4 $$
 * $$ 3\rightarrow 6 $$
 * $$ \vdots $$
 * $$ n\rightarrow 2n $$

matches each integer with a unique even integer, so the two sets are equinumerous. Any set which can be written as a list can be matched with the integers, since a list itself is such a matching. The set of all pairs of integers, which seems to be much bigger than the set of integers, can be made into a list:
 * $$(1,1)(1,2)(2,1)(1,3)(2,2)(3,1)(1,4)(2,3)(3,2)(4,1) \ldots \,$$

by listing each pair $$(m,n)$$ with smaller value of $$(m+n)$$ earlier.
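This listing is easy to realize as a program; a short sketch:

```python
from itertools import count, islice

def pairs():
    """List every pair (m, n) of positive integers, pairs with a smaller
    sum m + n coming earlier; within one sum, m increases."""
    for s in count(2):              # s = m + n
        for m in range(1, s):
            yield (m, s - m)

print(list(islice(pairs(), 10)))
# -> [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1)]
```

The output reproduces the listing above, confirming that the pairs are exhausted by a single countable enumeration.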

In this way Cantor proved that the integers were equinumerous with the rational numbers. He could also prove that the set of pairs of real numbers (the geometric plane) is equinumerous with the set of real numbers (the geometric line). Given any pair of real numbers (x,y), he could form a single real number z by interleaving the decimal sequences of x and y.
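The interleaving construction can be sketched on finite digit strings. (Making it a genuine bijection requires care with the two decimal expansions of numbers such as 0.4999... = 0.5000..., a subtlety Cantor himself encountered; this sketch ignores it.)

```python
def interleave(x_digits, y_digits):
    """Merge two equally long digit strings by alternating their digits,
    turning the pair (x, y) into a single number z."""
    return "".join(dx + dy for dx, dy in zip(x_digits, y_digits))

def split(z_digits):
    """Invert interleave: even positions give x, odd positions give y."""
    return z_digits[0::2], z_digits[1::2]

z = interleave("141", "718")    # x = 0.141..., y = 0.718...
print(z)                        # -> 174118
print(split(z))                 # -> ('141', '718')
```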

But Cantor also established that there were more real numbers than integers. Any discrete listing of real numbers will always be incomplete.
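The argument behind this claim is Cantor's diagonal construction: flip the k-th digit of the k-th listed number, and the result differs from every entry in the list. A sketch on binary digit strings:

```python
def diagonal_escape(listing):
    """Given n binary digit-strings (row k holding at least n digits of
    the k-th listed real), return n digits of a real absent from the
    listing: it differs from the k-th entry in the k-th digit."""
    return "".join("1" if row[k] == "0" else "0"
                   for k, row in enumerate(listing))

listing = ["0000", "0101", "1110", "0011"]
d = diagonal_escape(listing)
print(d)   # -> 1000, which differs from entry k at digit k for every k
```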

===Two Different Uncountable Infinities===
All the ordinals which are formed by Cantor's two procedures are countable, but Cantor considered the following ordinal:

$$\aleph_1$$: the limit of all countable ordinals.

This set is unreachable by countable induction from the two steps, but there is no reason to assume that it does not exist. Cantor felt that this set was philosophically necessary, because otherwise the system of ordinal counting would be too limited to reach sets which are as big as the set of real numbers.

So is $$\aleph_1$$ the same as the real numbers?

Real numbers can be thought of as infinite sequences of digits, and the digits can be taken to be binary. So the set of real numbers can be identified with the set of all infinite sequences of 0's and 1's. Since there are two choices at each integer position, this set is denoted $$2^{\aleph_0}$$.

And the continuum hypothesis is stated as:

 * $$ \aleph_1 = 2^{\aleph_0}$$

===Well ordering===
It is not clear that a set as large as the real numbers has an ordinal description at all. Cantor's conception of infinite sets included the bold assumption that every set can be matched one-to-one with some ordinal.

In modern set theory, this is a theorem whose proof requires the axiom of choice. The proof is as follows:

Consider an arbitrary set R, and choose exactly one element $$\phi(S)$$ from each nonempty subset S of R. Inductively define the following sequence:
 * $$X_0 = \phi(R)$$
 * $$X_\alpha = \phi(R - \bigcup_{\beta<\alpha} X_\beta)$$

This defines a sequence by ordinal induction, producing a one-to-one map from an initial segment of the ordinals into R; it can only terminate when R is exhausted.

To prove that R is exhausted requires a collection step: consider all the partial maps produced in this way and gather them into a set. The union of these maps must cover the entire set R, since otherwise the union could easily be extended by one more induction step.

Two essential components of this proof are:
 * 1) The axiom of choice to choose the arbitrary element to produce the ordering
 * 2) The axiom of powerset to be able to produce a set big enough to collect together all the partial maps.

This theorem, proven by Zermelo following Cantor's intuition, is extraordinarily controversial, because well-ordering the real numbers in this way contradicts the following intuition:

 * Every subset S of the interval [0,1] has a Lebesgue measure.

The intuition for this non-theorem comes from picking real numbers at random. Intuitively, one could flip a coin for each successive digit to pick a real number between zero and one. The probability that this randomly chosen number lands in S defines the measure of S. It is extraordinarily counterintuitive to imagine that there are sets for which this probability is undefined, since that would mean the notion of choosing real numbers at random is fundamentally inconsistent.

By using the axiom of choice over the powerset of R, it is easy to produce sets which are not measurable. Each such set would have to have probability zero, and yet countably many disjoint translated copies of it cover the whole interval.

Philosophically, this is very troubling. Forcing largely resolved these contradictions, since Cohen's method, as extended by Solovay, formalizes a notion of random picking which is free of contradiction.

==Cohen's Undecidability Proof==
Since Cohen's theorem is metamathematical, it requires a careful formalization of the intuitive notion of proof and theorem. Historically, this required a formalization of logical deduction and set theory. But the main ideas in Cohen's proof are largely independent of the particular logical system, and of the particular axioms for set theory.

Although classical mathematics developed from an intuition about infinite sets of ever larger cardinality, thinking about these sets as real mathematical objects when discussing metamathematics leads to intuitive paradoxes. For a metamathematical discussion, it helps to take a point of view which is entirely formalist and mostly finitist: the infinite sets in mathematics are symbols which can be manipulated according to textual rules, and do not refer to infinite collections of any size.

But in order to prove theorems, some notion of infinity is convenient. The notion of a countably infinite collection can be thought of as absolute. The only sets whose properties will change will have cardinality larger than the integers.

===Löwenheim–Skolem/Gödel Finiteness===
A system of formal logic can be thought of as a computer program which lists all deductions from a collection of axioms. The number of axioms is either finite or countably infinite. The program is largely arbitrary, but every deduction must be correct and every deduction must eventually come out. An explicit deduction program was first established to have these properties by Kurt Gödel.

In the course of running the program, every once in a while the program will produce a statement of the form

(there exists X) P(X)

At this point, introduce a new symbol for this quantity X.

The program sometimes will prove the theorem

(for all Y) (there exists X) P(X,Y)

In this case, introduce a new symbol X for each element Y listed so far, and run a subroutine which adds a new symbol whenever a new Y appears. These new symbols define a function F, and by construction they have the property P(F(Y),Y).

When the axiom system proves that

X=Y

equating two previously introduced symbols, delete one of the two. This procedure generates a list of symbols which alternately grows and shrinks, and it eventually produces a countable collection of symbols which form a model for the axiom system; that is, every deduction from the axiom system is true in the model. A statement such as:

(there exists Y) (for all X) (there exists Z) P(X,Y,Z)

is true of the symbols, in the sense that

(there exists a symbol Y) (for all symbols X) (there exists a symbol Z) P(X,Y,Z)

This is important, because symbolic models for a given axiom system are always countable, even if the axiom system describes ostensibly uncountable sets. This is an intuitive paradox when the axioms in question are set-theoretical, because it is difficult to imagine a countable list of all real numbers. But the list generated by this program is the list of all speakable real numbers: the predicatively defined reals, which are all the real numbers that will ever be defined by finite sentences.

Whether this list is a complete list of all the real numbers, or an incomplete list is a philosophical question, which it is best not to ask. In the standard mathematical philosophy, the answer is that it is not the set of all real numbers, since most real numbers are not predicatively defined.
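The countability of the speakable reals comes down to the fact that the finite sentences over a fixed alphabet can be enumerated, shortest first. A sketch with a two-letter alphabet (any finite alphabet works the same way):

```python
from itertools import count, islice, product

def all_strings(alphabet="ab"):
    """Enumerate every finite string over the alphabet, shortest first:
    a single countable list containing every possible finite definition."""
    for length in count(1):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

print(list(islice(all_strings(), 6)))
# -> ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Every finite string appears at some finite position in this list, so the definitions, and hence the reals they define, form a countable collection.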

===Forcing===
Suppose now that the axiom system describes set theory, and therefore includes the real numbers. It is possible to add axioms which assert the existence of new real numbers between 0 and 1:

 * there exists a real R1
 * there exists a real R2
 * there exists a real R3

and this cannot lead to contradiction, even with infinitely many axioms. It cannot, because there really are real numbers between 0 and 1, and for all we know the new symbols are equal to some of those. As the computer program runs, and more symbols representing elements of various infinite sets are generated, it is possible to associate the symbols which have the property of being in (say) aleph-17 with these new real numbers:

$$A_1 \leftrightarrow R_1, \quad A_2 \leftrightarrow R_2, \quad \ldots$$

If there are infinitely many R's, it is then easy to see that the statement

(for all X in aleph-17) (there exists Y among the R's) X is matched with Y

can be made true in the model, in the sense that there is a map, defined by induction over the deduction program, which associates the k-th $$R_k$$ with the k-th deduced element of aleph-17 (when that element does not disappear later by being equated to something else).

But the properties of these real numbers R are still vague, so there are many undecidable questions in this axiom system. For example, if the new real numbers all happen to be equal, this map sends all the members of aleph-17 to a single point, which is not much progress towards disproving the continuum hypothesis!

But suppose that these numbers are intuitively picked at random between 0 and 1, by flipping a coin to determine each binary digit. Then the probability that two of these numbers are exactly equal is zero. Since only countably many elements of aleph-17 are ever generated in any axiom system, with probability 1 the map from aleph-17 into the interval is one-to-one.

When the computer program runs forever, this map becomes a one-to-one map from aleph-17 into the real numbers. This is the first central idea of the proof: the collection of symbols which model the axioms is always countable, so maps can be arranged between any infinite set and the real numbers by extending the axiom system with new symbols representing randomly chosen real numbers.

But the notion of choosing a real number at random is impossible to make precise in an axiomatic set theory with the axiom of choice. So Cohen's proof does not directly rely on randomness. Instead, Cohen formalized the logical properties which are true of a real number chosen at random, and used this notion instead.

Cohen suggested that the digits of the real numbers should be specified one by one, just as would happen if a coin is flipped to determine the next digit. To specify the digits of the numbers, add very restricted axioms. The axioms are all of the form:

the first digit of R_1 is 1, the third digit of R_3 is 8, and so on.

He called each such finite collection of axioms a forcing condition. The conditions restrict the real numbers to ever more precisely defined intervals as more digits are added, but they never specify any of the numbers exactly. So nothing more is known than what can be computed from the finitely many known digits. Such a system can never lead to a contradiction, since there always are real numbers which extend any finite digit specification.

A condition A is weaker than a condition B when A specifies fewer digits than B but A and B agree on all the digits they both specify; B is then stronger than A, or an extension of A. If A and B specify some digit differently, they are incompatible. Weaker and stronger define a partial order on the conditions, and this partially ordered set is the formal definition of a notion of forcing.
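This partial order can be modeled directly. In the sketch below (the class and its methods are invented for illustration), a condition stores finitely many digits of the generic reals, keyed by which real and which position:

```python
class Condition:
    """A finite specification of digits of the generic reals, stored as
    {(real_name, digit_position): digit_value}."""
    def __init__(self, digits=None):
        self.digits = dict(digits or {})
    def extends(self, other):
        # Stronger condition: specifies every digit `other` specifies,
        # with the same value, and possibly more.
        return all(self.digits.get(k) == v for k, v in other.digits.items())
    def compatible(self, other):
        # Two conditions are compatible when they agree on every digit
        # that both of them specify.
        return all(other.digits.get(k, v) == v for k, v in self.digits.items())

a = Condition({("R1", 0): 1})
b = Condition({("R1", 0): 1, ("R3", 2): 8})
c = Condition({("R1", 0): 0})

print(b.extends(a))      # True: b is an extension of a
print(a.compatible(c))   # False: they specify digit 0 of R1 differently
```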

A statement about the numbers R is forced true by a condition A when it can be proven from the axioms in A, that is, when it can be proven from the finitely many digits that A specifies. A statement is forced false by A when no extension of A forces it true.

This second idea is very important, because it defines a notion of falsehood which is very close to the intuitive notion of falsehood for randomly chosen numbers. But it is not the same as logical falsehood. For example, if a number is random and someone asks "is this number rational?", the answer is no. Since no number will ever be proven rational from finitely many of its digits, this definition of falsehood forces every R to be not rational.

This notion of falsehood obeys most of the laws of ordinary logic, with one exception: just because the statement (not A) is forced false does not mean that A is forced true. But it will never happen that A and (not A) are both forced true, or both forced false.

An infinite sequence of extensions of a forcing condition can be chosen to eventually converge on an answer to every question. The reason is simple: if a statement is not forced false, that is because some extension forces it true. Extending the condition statement by statement produces a sequence which decides every question. This is called a complete sequence.

The main forcing theorem is this: in any finite system of axioms describing the real numbers, it is consistent to add axioms which describe infinitely many new real numbers R, and to take as true all those statements which are forced true along a complete sequence. All statements which are forced false are false. Then every statement is decided.

The notion of forcing formalizes the idea of randomness without introducing probability concepts. A real number specified in this way is called a generic real number.

===Countable Chain Condition===
The notion of forcing allows one to introduce an infinite set of real numbers in one-to-one correspondence with aleph-17. Introduce a new real-number symbol for each of the (countably many) elements of aleph-17, then choose them to be generic. By construction, there is now a one-to-one map from aleph-17 to the real numbers.

But this is a second system of axioms. The question is whether the aleph-17 of the new system is the same as the aleph-17 of the old system, or whether it has become a different aleph. This is the main difficulty of the proof: ensuring that the new real numbers never collapse aleph-17 to a smaller cardinal.

To see that this can be a problem, consider introducing generic integers by the same method. Add a list of axioms
 * there exists an integer R1
 * there exists an integer R2
 * there exists an integer R3

and define a map from $$\aleph_{17}$$ into the integers. The forcing conditions will specify the digits of the integers one by one, now growing upward, and will force them all to be different. Then in this new model, $$\aleph_{17}$$ has been mapped one-to-one into the integers.

Is this a contradiction in mathematics? Of course not! It just means that in this model the set which used to be $$\aleph_{17}$$ is now countable. The map which is constructed by the forcing procedure contains enough information to show that $$\aleph_{17}$$ in the old model is countable.

How can you prevent this from happening?

To prevent this, Cohen introduced a condition on the partial order of forcing conditions: the countable chain condition, or ccc. The ccc states that every antichain of conditions is countable, meaning that every collection of pairwise incompatible conditions is countable. When a ccc forcing relates uncountable sets, it does so, in effect, only by introducing a new map from the integers to the integers.

But then the notion of cardinality does not change. The reason is that every symbol in the model of the new axiom system can be considered as an object of the old model together with one additional map, the one defining the complete sequence. The complete sequence can be thought of as a choice of one element from each antichain, and since each antichain is countable, this choice amounts to a map from integers to integers.

Using this nonconstructible object, all the other new objects are constructibly defined. But adding this object can never produce a map between different cardinalities, because even from the point of view of the old model it is known to be just a nonconstructible map from Z to Z.

So in Cohen's model, a one-to-one map from $$\aleph_{17}$$ into the real numbers has been added without any collapse of cardinals.

===Forcing The Continuum Hypothesis===
To force the continuum hypothesis to be true, match every real number to a generic countable ordinal.