Talk:Mathematical induction/Archive

disappearing
I know many people have trouble understanding where the "ordinary" m "disappears" in the example (the manipulation is done on m + 1). Should that be clarified?

How about making a case where induction fails and dissecting that? ---

This is my first visit to this website. It is amazing. I've never seen anything this great on the internet.

My request: Can you guys put up some more example problems and solutions? It would really be great for CS and Math majors.

Can I make the following suggestion? The reasons for the changes are as follows:
 * 1) I want to include 0 with the natural numbers. (The "first n integers", by the way, is not really correct since the integers also contain negative numbers)
 * 2) the notation A= {Ai: i=1,2,3,...,n} is not trivial. I know that you are building a proposition here by taking the conjunction of an infinite number of propositions, but I am not sure if that is immediately clear to anyone. So below is my suggestion. -- Jan Hidders


 * Replacing a well understood word like proposition with a jargon word like "predicate" reduces understanding.


 * Both notations {Ai:...} and P(n) aren't necessary. Basic concepts like induction need to be described in common, non-technical language, or the lay reader will never learn anything.

I'm not so sure that the term proposition is not jargon. It doesn't look like it but it has a quite specific meaning in logic. But I agree that everything should be as jargon free as possible. So I've adapted my version below a little. Let me know what you think. And, as usual, if you think some more explanation should be added, please do. -- JanHidders

Mathematical induction
Mathematical induction is the standard way of proving that a certain statement about a number n holds for all natural numbers. Such a proof consists of two steps:
 * 1) Showing that the statement holds for the number 0.
 * 2) Showing that if the statement holds for a number n then the same statement also holds for n + 1.

Here's an alternative version of the example, which I think is easier for the lay reader... Comment? GWO

Yes, better. Although I would like to have spaces around '+' and '='. And I'm wondering why you don't change the definition in a similar way. Then you get "showing that the statement holds for n = 0" and "showing that if the statement holds for n = m then the same statement also holds for n = m + 1". -- JanHidders

Re: Spaces. You're right, and I think you're right about the definition too. Do you want to make the changes on Mathematical induction itself? GWO

I will do it. Thank you for helping. -- JanHidders

Example:

Suppose we wish to prove the statement that the sum of the first n natural numbers, that is 1 + 2 + 3 + ... + n, is equal to n(n + 1)/2.

Above and below you've switched to including 0 as a natural number, but here you don't include it? Perhaps you mean "...that is 0 + 1 + 2..."? But then the list is of the first n+1 natural numbers, so that would have to be patched. Wwwayne 17:27, 17 October 2007 (UTC)

The proof that the statement is true for all natural numbers proceeds as follows:

 * (1) Check it is true when n = 0. Clearly, the sum of the first 0 natural numbers is 0, and 0(0 + 1)/2 = 0. So the statement is true when n = 0.
 * (2) We have to show that if the statement holds when n = m, then it holds when n = m + 1. This can be done as follows.
 * Assume the statement holds for n = m, i.e.,
   1 + 2 + ... + m = m(m + 1)/2
 * Adding m + 1 to both sides gives
   1 + 2 + ... + m + (m + 1) = m(m + 1)/2 + (m + 1)
 * By algebraic manipulation we have
   m(m + 1)/2 + (m + 1) = m(m + 1)/2 + 2(m + 1)/2 = (m + 2)(m + 1)/2
 * Thus we have
   1 + 2 + ... + (m + 1) = (m + 1)((m + 1) + 1)/2
 * So the statement is true for n = m + 1.
 * (3) By induction we conclude that the statement holds for all natural numbers n.
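The two steps above can be spot-checked numerically (an illustrative sketch only; checking finitely many cases is of course not the proof itself, and the helper name `claim` is ours):

```python
# Numerical sanity check of the base case and inductive step.

def claim(n):
    """The statement: 1 + 2 + ... + n equals n(n + 1)/2."""
    return sum(range(1, n + 1)) == n * (n + 1) // 2

# Step (1): the statement holds for n = 0 (the empty sum is 0).
assert claim(0)

# Step (2), spot-checked: whenever the statement holds for m,
# it also holds for m + 1.
for m in range(100):
    assert claim(m) and claim(m + 1)
```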

I see no logical conclusion here. You assumed a statement and proceeded to add equal terms to both sides, and then somehow concluded that the whole shebang works? This seems to me like assuming x = x, adding one to both sides, and calling x + 1 = x + 1 a proof of something. All it proves is that the quantities you added to both sides are equal.

Your version

Example:

Let P(n) be the statement that the sum of the first n natural numbers, that is 1 + 2 + 3 + ... + n, is equal to n(n + 1)/2. The proof that P holds for all natural numbers proceeds as follows.

 * (1) P(0) says that the sum of the first zero natural numbers equals 0(0 + 1)/2. This is true since 0(0 + 1)/2 = 0.
 * (2) We have to show that if P(n) holds then P(n + 1) also holds. This can be done as follows.
 * Assume that P(n) holds, i.e.,
   1 + 2 + ... + n = n(n + 1)/2
 * Adding n + 1 to both sides gives
   1 + 2 + ... + n + (n + 1) = n(n + 1)/2 + (n + 1)
 * By algebraic manipulation we have
   n(n + 1)/2 + (n + 1) = n(n + 1)/2 + 2(n + 1)/2 = (n + 2)(n + 1)/2
 * This shows that P(n + 1) holds:
   1 + 2 + ... + (n + 1) = (n + 2)(n + 1)/2
 * (3) By induction we conclude that P(n) holds for all natural numbers n.

I'm wondering if this definition of "mathematical induction" applies to the proofs by mathematical induction that I remember doing in logic. --LMS

Well it certainly should do ... what do you remember about them? GWO

The term is often generalized to not just the natural numbers but anything that is countable. So you can prove something for all integers greater than 6 or for every string that is described by a certain syntax. Sometimes the schema is generalized to (1) prove for n=0 (2) show that if it holds for all m <= n then it holds for n+1. All those generalizations can be reformulated so that they fit the current definition, but that is something that probably needs a little explaining. Is that what you mean? -- JanHidders
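The generalized schema described here ("prove for n = 0; show that if it holds for all m <= n then it holds for n + 1") can be written out explicitly; this is the standard strong-induction formulation, added for illustration rather than quoted from the discussion:

```latex
% Strong (complete) induction: from the base case and the cumulative
% inductive step, conclude that P holds for every natural number.
\[
\Bigl( P(0) \;\wedge\; \forall n\,\bigl[\,(\forall m \le n\;\, P(m)) \Rightarrow P(n+1)\,\bigr] \Bigr)
\;\Longrightarrow\; \forall n\; P(n)
\]
```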

That is precisely what I mean, Jan, thanks. Essentially, mathematical induction is a way of making a proof recursive, which is something that is used in logic when the proofs are not about numbers per se. --LMS

I have removed the "proof" of mathematical induction, which I believe to be only slightly better than the "domino effect" argument. The principle of mathematical induction is always stated as an axiom (e.g. Peano's system), a proof seems to be unnecessary. --Wshun

Someone has put down the following three sections:


 * Proof of method Mathematical induction is taught in schools, and most students learn it by rote without fully understanding how it works. It is possible to base a proof of mathematical induction on other mathematical principles.

But one of those mathematical principles is EQUIVALENT to the principle of mathematical induction!


 * Related methods An induction variant is used in computer science to prove that expressions which can be evaluated are equivalent, and this is known as structural induction.

''Hmm, the related article is not wikified. Anyway, it could be put back later.''


 * Alternative Formulations An alternative formulation of the principle of mathematical induction is the Well-Ordering Principle, which states that any nonempty set of natural numbers must have a smallest element. This principle is equivalent to mathematical induction, as follows:


 * Suppose that the well-ordering principle is true, and that the conditions of the proof by induction are true, namely, that for some property P, we know that P(0) is true, and that if P(n) is true for any natural number n, then P(n+1) is also true.  We wish to prove that P(n) is true for all natural numbers n.  Let S be the set of natural numbers for which P is not true.   Let us suppose that S has a smallest element, say m.   m cannot be 0, because P(0) is true by hypothesis.   So m-1 is a natural number, and in particular m-1 is not an element of S, because m is the smallest element of S.  So P(m-1) is true, and by hypothesis P(m) is also true, and we have a contradiction.  Thus we conclude that S has no smallest element, and so by the well-ordering principle, S must be empty.  P(n) therefore holds for all natural numbers n.


 * Conversely, we can prove the well-ordering principle inductively. Let S be a set of natural numbers.  We want to prove that either S has a smallest element or else that S is empty.   Let P(n) be the statement that no element of S is smaller than n.  P(0) is certainly true, since there is no natural number smaller than 0.  Suppose that P(n) is true for some n.  If P(n+1) were false, then S would have an element smaller than n+1, but it could not be smaller than n, because P(n) was true, and so S would have a minimal element, namely n, and we would be done.  So P(n) implies P(n+1) for all n, or else S has a minimal element.  But if P(n) implies P(n+1) for all n, then by induction we know that P(n) is true for all n, and therefore for all n, no element of S is smaller than n.  But this can only be vacuously true, if S has no elements at all, since every natural number is smaller than some other natural number.  Thus we are done.
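For reference, the two principles proved equivalent above can be stated side by side (standard formulations, not quoted from the discussion; here S ranges over subsets of the natural numbers):

```latex
% Induction and well-ordering, as schemas over P and S respectively.
\begin{align*}
\text{Induction:} \quad
  & P(0) \;\wedge\; \forall n\,[\,P(n) \Rightarrow P(n+1)\,]
    \;\Longrightarrow\; \forall n\; P(n) \\
\text{Well-ordering:} \quad
  & S \neq \varnothing
    \;\Longrightarrow\; \exists m \in S\;\, \forall k \in S\;\, m \le k
\end{align*}
```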


 * An analogous proof holds for any well-ordered set; see transfinite induction.

Is it too technical to be placed on Wikipedia?

--wshun 04:21, 10 Aug 2003 (UTC)


 * Regarding only the Alternative Formulations section: I didn't think it was too technical when I put it in. I also think it would be illuminating for a certain audience---people who are familiar with the idea of induction, but who haven't studied set theory yet.  It seems strange to me that you decided to  remove this section when you yourself weren't sure if it should be removed.


 * --Dominus 06:46, 10 Aug 2003 (UTC)


 * Oh, it is my strange style :P I like to remove material that I am not sure should be in the article. BTW, now that we have proof of mathematical induction and three forms of mathematical induction, we can move things here and there to reduce redundancy. -- wshun 23:23, 10 Aug 2003 (UTC)

Nothing is too technical for Wikipedia provided all the necessary prerequisites are also on Wikipedia! Phys

This section looks odd to me: Proofs by transfinite induction typically need to distinguish three cases:
 * 1) m is a minimal element, i.e. there is no element smaller than m
 * 2) m has a direct predecessor, i.e. the set of elements which are smaller than m has a largest element
 * 3) m has no direct predecessor, i.e. m is a so-called limit-ordinal

The first point says there is no element smaller than m, yet the second implies there is an element smaller than m. Also, the third one contradicts the second: "m has a direct predecessor", "m has no direct predecessor"


 * Of course they contradict each other; that's why they are distinct cases that are distinguished from each other!! Michael Hardy 00:42, 30 Mar 2005 (UTC)

problem of induction
Is mathematical induction susceptible to the problem of induction? Why or why not?

First, to answer this question we must look at this from a very broad point of view. Think of science and mathematics as branches of history; science is the study of our interpretation of past observations to try to predict future events; mathematics is the study of patterns, or, loosely, the history of numbers. Now, because we know for certain what we do and don't know (i.e., postulates, theorems, properties, etc.), we can 100% correctly anticipate what will happen mathematically, based on those theorems. If you use only what you know for certain, you shall never run into the problem of induction.


 * The above is extraordinarily philosophically POV, to say the least. That's fine on a talk page, but it should also be mentioned that mathematical induction is a form of deductive rather than inductive reasoning. Michael Hardy 02:25, 30 December 2005 (UTC)

I don't see the problem here: the answer is no. Mathematical induction works when we want to demonstrate a statement involving quantification over sets that can be given a recursive characterisation. The problem of induction arises when we want to demonstrate such statements where there is no such mechanism for covering its totality. If you really want to make a problem, looking at Wittgenstein's puzzles over rule following would be the right sort of place to look, but I'd say it's not the same problem at all as Hume's problem. --- Charles Stewart 02:47, 30 December 2005 (UTC)
 * Strange. I remember someone once telling me that there's a big problem with the way we reach conclusions with mathematical induction. I was sure that he was referring to the problem of induction. Guess it was something else. CommandoGuard 17:26, 31 December 2005 (UTC)

Proposed reorganization
This article's description of strong induction is longer than the strong induction article's! Also, there are a few more variants of induction that this article could mention, especially if it absorbs the insights from Three forms of mathematical induction. So I propose the following organization:
 * intro: keep (it's a method of math proof; generalizes to well-founded structures; deductive, not inductive; first used by Maurolico)
 * Description of basic method: keep (basis step, inductive step, domino analogy)
 * Example: keep (1 + ... + n = n(n + 1)/2)
 * Variants commonly used: in practice, induction proofs may be organized differently
 * Starting at b
 * Double induction: induction on two counters, n and m (and, possibly, even more)
 * Induction on binary functions: this is where we absorb maybe two of the examples from Three forms of mathematical induction

Also, I vigorously vote that we start the natural numbers at 0; this is common practice in logic, which is the domain of this article. Joshuardavis 13:46, 9 March 2006 (UTC)
 * Validity of mathematical induction on natural numbers: axiom of natural numbers, equivalent to well-ordering axiom
 * Strong induction: absorb strong induction; mention transfinite induction; explain generalization to well-founded structures
 * This sounds good. I wonder if it makes sense to have induction on several counters in the middle but not general well-founded structures until the end; I guess it works as long as the middle bit isn't too long. As for starting at 0, I have no strong opinion, but I note that it would be consistent with Recursion. Melchoir 01:21, 10 March 2006 (UTC)

For the purposes of working it into this article, I am including my reformulation of Michael Hardy's disputed Three forms of mathematical induction article here. Also, to give credit where it is due, I would like to acknowledge that Ryan Reich mentioned this interpretation before I did. Joshuardavis 19:16, 12 March 2006 (UTC)
 * Okay with me. I'll add the magic tags. Melchoir 21:20, 12 March 2006 (UTC)

I'd like to first thank Joshuardavis for so persistently giving me credit for this idea, although he's the one who takes all the responsibility for promoting it. Anyway, I was thinking about the most natural way to present this "induction on a binary operation", and here's what I think: In order to remove the stain of the appearance of Original Research, we should not dwell that much on the point, and not go to great lengths to present it as a deep mathematical principle. I think the proper place is as a sub-subject of a discussion of recursive definitions, which is a deep principle, but also a well-researched one. Ryan Reich 04:18, 13 March 2006 (UTC)
 * At the lowest level, this technique covers proof by induction of formulas which are "true by definition"; here you should have in mind the $$1 + \dots + n = {n + 1 \choose 2}$$ example. One attempt at a general form of this is that you are given a binary operation $$B(x, y)$$ and define an n-ary operation $$B_n(x_1, \dots, x_n) = B(B_{n - 1}(x_1, \dots, x_{n - 1}), x_n)$$.  Then the theorem to prove is that given a function $$f(n)$$ and a sequence $$a_1, a_2, \dots$$, we have $$B_n(a_1, \dots, a_n) = f(n)$$.  The entirety of the proof is to compute that $$B(f(n - 1), a_n) = f(n)$$ for all n.  This is true by definition in the sense that $$B_n(a_1, \dots, a_n)$$ represents an "unsimplified" operation on the a's, which if it were simplified "as it is computed", would at each stage give $$f(n)$$.  I think this could be even more absurdly generalized if you really tried, but it's not helpful.
 * What this argument is really getting at is that we define some (all, actually) functions on the natural numbers "by induction", saying something like, "Let $$f(1) = 1$$, and for each n, assuming we have defined $$f(n - 1)$$, let $$f(n) = f(n - 1) + n$$". The fact that this can be done is not entirely obvious, and for example, Munkres' Topology devotes a whole section, Chapter 1 Section 8, to discussing how it can be done.  In particular, what I said above is his Lemma 8.2 (all numbers are from the Second Edition).  Therefore this "induction on binary operations" properly belongs as a subset of a discussion on recursive definition, which itself belongs as a section of mathematical induction.  This is something I indicated in my second response to Mr. Hardy at Wikipedia talk:Articles_for_deletion/Three_forms_of_mathematical_induction.
 * Mr. Hardy's original point was that 2 is a special number. In light of the general framework I have set out here I think this is incorrect, and that if B were replaced by a trinary function T and the definition of B_n suitably modified, you could say the same thing.  I think that if we are going to give this subject its proper due we should excise the reliance on the number 2; a very good example would be to mention a proof by induction (rather than by the methods of difference equations, the point being that the Fibonacci sequence is defined by a recurrence relation using two, rather than just one, of the previous terms) of the well-known formula
 * $$F_n = \frac{\phi^n - (-\phi)^{-n}}{\sqrt{5}}$$, where $$F_n$$ is the nth Fibonacci number
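The closed formula can be spot-checked against the recurrence (an illustrative sketch, not the inductive proof itself; note that the proof needs *two* base cases, F_0 and F_1, because the inductive step uses the two previous terms):

```python
# Spot-check Binet's formula F_n = (phi^n - (-phi)^(-n)) / sqrt(5)
# against the Fibonacci recurrence F_n = F_{n-1} + F_{n-2}.
from math import sqrt

phi = (1 + sqrt(5)) / 2   # the golden ratio

def binet(n):
    return (phi ** n - (-phi) ** (-n)) / sqrt(5)

# Build the sequence directly from the recurrence, with F_0 = 0, F_1 = 1.
fibs = [0, 1]
for _ in range(30):
    fibs.append(fibs[-1] + fibs[-2])

# The formula agrees (up to floating-point rounding) at every index.
for n, f in enumerate(fibs):
    assert round(binet(n)) == f
```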


 * On your second bullet: recursion is important, but shouldn't recursive definitions be elaborated upon at Recursion? Melchoir 04:31, 13 March 2006 (UTC)


 * Sure looks like it. As they say, I'm not as familiar with the literature as I should be.  Given that, let's include the examples here and make reference to the theory there. Ryan Reich 04:41, 13 March 2006 (UTC)


 * I agree that extensions of this idea to trinary, 4-ary, etc. functions are possible and could be mentioned, but they do not deserve a worked example in this article, because virtually all functions encountered by the common person are unary or binary. My vague opinion is that Mathematical induction (being a proof technique required of many students) should remain relatively practical and example-driven (except for mentioning the equivalence with well-ordering), while Recursion (being an underlying concept with many manifestations) should contain the more conceptual and philosophical material. See the logistics section below. Joshuardavis 16:21, 13 March 2006 (UTC)

Special significance of the n = 2 case
In mathematics, many standard functions, including operations such as + and relations such as =, are binary, meaning that they take two arguments. Often these functions possess properties that implicitly extend them to more than two arguments. For example, once addition a + b is defined and is known to satisfy the associativity property (a + b) + c = a + (b + c), then the trinary addition a + b + c makes sense, either as (a + b) + c or as a + (b + c). Similarly, many axioms and theorems in mathematics are stated only for the binary versions of mathematical operations and relations, and implicitly extend to higher arity versions.

Suppose that we wish to prove a statement about an n-ary operation implicitly defined from a binary operation, using mathematical induction on n. Then it should come as no surprise that the n = 2 case carries special weight. Here are some examples.

Product rule
In this example, the binary operation in question is multiplication (of functions). The usual product rule taught in calculus says


 * $$(fg)' = f'g + g'f.\,$$

For a product of n functions, one has


 * $$(f_1 f_2 f_3 \cdots f_n)' = (f_1' f_2 f_3 \cdots f_n) + (f_1 f_2' f_3 \cdots f_n) + (f_1 f_2 f_3' \cdots f_n) + \cdots + (f_1 f_2 \cdots f_{n-1} f_n').$$

In each of the n terms, just one of the factors is a derivative; the others are not.

The n = 1 case is trivial ($$f' = f'$$), and the n ≥ 3 cases are easy to prove from the preceding n − 1 cases. The real difficulty lies in the n = 2 case, which is why this is the one stated in the standard product rule.
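The n-factor rule can be checked mechanically for polynomials (an illustrative sketch; the helper names are ours, and polynomial coefficient lists stand in for arbitrary differentiable functions):

```python
# Verify the n-factor product rule for polynomials, represented as
# coefficient lists [c0, c1, c2, ...] meaning c0 + c1*x + c2*x^2 + ...
from functools import reduce

def pmul(p, q):
    """Product of two polynomials."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    """Sum of two polynomials (of possibly different lengths)."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def pder(p):
    """Derivative of a polynomial."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def strip(p):
    """Drop trailing zero coefficients for comparison."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

fs = [[1, 2], [3, 0, 1], [0, 1, 1]]     # 1 + 2x, 3 + x^2, x + x^2

lhs = pder(reduce(pmul, fs))            # (f1 f2 f3)'

rhs = [0]                               # sum over terms, each with exactly
for k in range(len(fs)):                # one factor differentiated
    term = [pder(f) if i == k else f for i, f in enumerate(fs)]
    rhs = padd(rhs, reduce(pmul, term))

assert strip(lhs) == strip(rhs)
```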

Pólya's proof that there is no "horse of a different color"
In this example, the binary relation in question is equality, = (applied to color of horses). In the middle of the 20th century, a commonplace colloquial locution to express the idea that something is unexpectedly different from the usual was "That's a horse of a different color!". George Pólya posed the following exercise: Find the error in the following argument, which purports to prove by mathematical induction that all horses are of the same color:


 * If there's only one horse, there's only one color.
 * Suppose within any set of n horses, there is only one color. Now look at any set of n + 1 horses.  Number them: 1, 2, 3, ..., n, n + 1.  Consider the sets {1, 2, 3, ..., n} and {2, 3, 4, ..., n + 1}.  Each is a set of only n horses, therefore with each there is only one color.  But the two sets overlap, so there must be only one color among all n + 1 horses.

Again the n = 1 case is trivial (any horse is the same color as itself), and the inductive step is correct in all cases n ≥ 3. However, the logic of the inductive step is incorrect when n = 2, because the statement that "the two sets overlap" is false. Indeed, the n = 2 case is clearly the crux of the matter; if one could prove the n = 2 case, then all higher cases would follow from the transitive property of equality.
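The gap can be exhibited concretely (a small illustrative sketch): the two n-element sets used in the inductive step overlap only when n ≥ 2.

```python
# The two sets used in the inductive step are {1, ..., n} and
# {2, ..., n + 1}. Their overlap is empty exactly when n = 1, which is
# where the "all horses are the same color" argument breaks down.
for n in range(1, 6):
    first = set(range(1, n + 1))      # horses 1 .. n
    second = set(range(2, n + 2))     # horses 2 .. n + 1
    if n == 1:
        assert first & second == set()    # disjoint: the step 1 -> 2 fails
    else:
        assert first & second != set()    # genuine overlap for n >= 2
```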

Triangle inequality
This example is somewhat more complicated, because it involves a binary operation, +, and another binary function, the distance function d in a metric space. The usual triangle inequality in metric spaces says


 * $$d(x_0, x_2) \le d(x_0, x_1) + d(x_1, x_2).$$

For a sequence of n + 1 points, one has


 * $$d(x_0, x_n) \le d(x_0, x_1) + \cdots + d(x_{n-1}, x_n).$$

Again the n = 1 case is trivial:


 * $$d(x_0, x_1) \leq d(x_0, x_1).$$

The first nontrivial case occurs when n = 2; this case is the crux of the matter, which is why the triangle inequality deals with it. The cases n ≥ 3 then follow inductively from the corresponding n − 1 cases.
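A numerical spot-check of the chained inequality (illustrative only; Euclidean distance in the plane stands in for a general metric d):

```python
# The n-point inequality follows by repeatedly applying the two-point
# triangle inequality; here we check it on random points in the plane.
import random
from math import dist   # Euclidean distance, Python 3.8+

random.seed(0)
pts = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(8)]

direct = dist(pts[0], pts[-1])                           # d(x_0, x_n)
chained = sum(dist(a, b) for a, b in zip(pts, pts[1:]))  # sum of leg lengths
assert direct <= chained
```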

Logistics of proposed merger
One problem with merging this material into mathematical induction is that that article has so pitifully few worked examples. That should get taken care of first. Alternatively one could create a separate article about Polya's example, and emphasize that although Polya phrased it as an error to be located, it is a valid argument as applied to some problems. Michael Hardy 01:38, 13 March 2006 (UTC)


 * You raise a good point: How many worked examples should this article offer? How long and textbooky should the article be? In my opinion, it should do
 * a basic example such as 1 + ... + n = n(n+1)/2
 * either the product rule or triangle inequality example (but not both)
 * the Polya example (because it's famous and of the same form but wrong -- very instructive)
 * an example of strong induction (Fibonacci?)
 * The starting at b, multiple induction (on indices n and m), induction on n-ary funcs for n > 2, etc. should be mentioned but do not deserve their own examples. Perhaps I am being too minimalist, but I feel that too many examples will overwhelm the reader and distract him from the really important ones. Joshuardavis 16:32, 13 March 2006 (UTC)


 * By the way, Fibonacci number already has the strong induction proof of the closed formula. So I guess we pick a different example or cite that one? Joshuardavis 17:45, 13 March 2006 (UTC)

Fibonacci does not seem to me to be a good example for strong induction, because it's not using the full strength of the hypothesis that the proposition holds for all values less than n. Michael Hardy 03:29, 14 March 2006 (UTC)


 * I fully agree. Can anyone think of a standard, interesting example of induction using all previous n? Joshuardavis 16:10, 14 March 2006 (UTC)

If you're talking about worked examples in the early part of the article explaining how to write proofs by mathematical induction, I think you're right that we should not include both the triangle inequality and the product rule. But if you're talking about the necessarily later section that would include the ideas from three forms of mathematical induction, then Polya's argument is not a concrete example, but a statement of the general form. There should be more than one concrete example in that section. Michael Hardy 03:47, 14 March 2006 (UTC)


 * Per my proposed reorganization, I believe that the "early part of the article" should have exactly one example, 1 + ... + n = n (n + 1) / 2. The later section, currently called "Building on n = 2", should have the product rule (say) and Polya's example.


 * Actually, I feel that there should be only one example of any one concept (except for Polya). This is an encyclopedia, not a textbook. But if you and other editors strongly want more examples, then I will surrender that point.


 * I still don't understand your philosophical view that Polya's argument is not an example but somehow a description of the entire Second Form. It is an example, in that it deals with a particular binary function (=), while other examples deal with other particular binary functions (+, x, etc.). Of course you are right that it illustrates the general idea, but so does the product rule example. This is what it means to be an example: it has particulars not true of all examples, but nonetheless illustrates the concept. (Perhaps I'm simply misunderstanding you?) Joshuardavis 16:10, 14 March 2006 (UTC)

Now that the article is clearly under construction, with new sections containing only merge tags, I'm going to go ahead and start putting material into the sections. Since I have not asked ahead of time, I won't be offended if others start reverting it. But, please, let's actually start implementing some of these ideas. Joshuardavis 16:10, 14 March 2006 (UTC)

"Start at b" is obviously the wrong section for merger
Why in the world would anyone choose the section titled "start at b" to merge three forms of mathematical induction into? That is absurd!! Anyone who thinks the essence of the matter is starting at somewhere unconventional is totally clueless. Michael Hardy 03:38, 14 March 2006 (UTC)
 * And your better idea is...? Melchoir 03:45, 14 March 2006 (UTC)

Firstly, my better idea is don't put it into a section to which it is not relevant. The "start at b" section is about a trivial point. It's also about a point that has no bearing, as far as I can see, on the material in three forms of mathematical induction. Three forms of mathematical induction is a more advanced section that needs to presuppose that the reader knows very well how to write proofs using mathematical induction. "Start at b", on the other hand, is about a simple point and is addressed to people learning the subject for the first time. One of those who suggested a "merge into" pointed out that more advanced material typically comes later in the article.
 * The mergeable content of Three forms of mathematical induction consists of the case when the induction step relies on the n=2 result. It is not advanced, and it is closer to the original presentation of induction than complete/transfinite induction. I'll create a new section for it, if that's what you're suggesting. Melchoir 06:02, 14 March 2006 (UTC)

What is "advanced" is the reason why this form is treated separately: the "binary character" of the identity relation, i.e. if any two horses are of the same color, then all horses are of the same color. Trivial when stated that way, but as applied to classification of forms of mathematical induction into three classes, it should generally be addressed only to those who already know what mathematical induction is and how to use it. Michael Hardy 22:05, 14 March 2006 (UTC)

"motivation"
The newly added "motivation" section, which I've deleted and am pasting below, has problems. Firstly, what are "traditional proof approaches", if those don't include mathematical induction? Secondly, what is an "absolute property"? This section is mainly devoted to an example. Perhaps it should be listed among examples. Michael Hardy 01:50, 10 April 2006 (UTC)


 * PS: I've changed the typesetting style to conform to conventional Wikipedia usages in mathematics articles. Michael Hardy 01:50, 10 April 2006 (UTC)


 * More traditional proof methods include syllogistic arguments and proofs by contradiction. If you had a problem with that phrasing, you could have just revised the phrasing to your whims. An absolute property is exactly what it sounds like: a non-relational property; a property which an object intrinsically, as opposed to extrinsically, has.
 * The motivation is indeed an example. It is a very simple one, suitable to work as a motivation. That's what a motivation is; it's a simple example given to motivate the development of some sort of technique or reasoning.
 * If that's all, you might put it back now? The page is clearly in need of some form of motivational example; someone who doesn't know what induction is will most likely not be able to figure it out easily from the page as it stands. Drostie 21:34, 20 April 2006 (UTC)


 * My two cents: Yes, the page is sorely in need of work. Thanks for taking a stab at it, Drostie. Your proposed Motivation section is at least as good as the current Formal Definition/Example section, although I would like it to be more concise and less conversational. That said, does it do anything that the example in Formal Definition does not? That too is pretty gentle (though your version is more motivational).


 * Another two cents: In my mind mathematical induction is certainly traditional. I don't understand "absolute property"; does it mean a statement that holds at the top level of the deduction, not under any hypotheses? Your explanation of this point above does nothing for me. Joshua Davis 22:11, 20 April 2006 (UTC)

Mathematical induction is in a sense traditional. But if you don't consider it so, consider that


 * The definition of the word "traditional" in this context is incredibly vague if not made explicit; and
 * It does not make sense to say that the reason for the use of this method is simply that some other methods don't work (unless some explicit account of why that would make sense is given).

Michael Hardy 22:46, 20 April 2006 (UTC)

Motivation

 * This section needs cleanup, both to conform to notational conventions and in content.

Often, in mathematics, there are observations which seem to defy traditional proof approaches. Take, for example, the claim that "the sum of the first n odd numbers is n²." You can verify it by hand for the first few n: 1 = 1 = 1², and then 1 + 3 = 4 = 2², and then 1 + 3 + 5 = 9 = 3². But it's not exactly clear how to prove it for all positive whole numbers n.

A proof by induction seeks to show a certain absolute property of the claim itself. This property is that "If the sum of the first n odd numbers is n squared, then it is also the case that the sum of the first n + 1 odd numbers is (n + 1)²."

This property is devilishly easy to show. If, indeed, the sum of the first n odd numbers were n², and the (n + 1)th odd number is 2n + 1 (which it is), then the sum of the first n + 1 odd numbers is n² + 2n + 1. And it's simple algebra to see that this number, in turn, is (n + 1)².
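Both halves of this argument can be spot-checked for small n (an illustrative sketch; the helper name `sum_odds` is ours, and finitely many checks are not a proof):

```python
# Sanity check of the base case and the inductive-step algebra.

def sum_odds(n):
    """Sum of the first n odd numbers: 1 + 3 + ... + (2n - 1)."""
    return sum(2 * k + 1 for k in range(n))

# Base case: the first odd number is 1 = 1^2.
assert sum_odds(1) == 1

# Inductive step pattern: n^2 plus the (n + 1)th odd number, 2n + 1,
# equals (n + 1)^2.
for n in range(1, 100):
    assert sum_odds(n) == n * n
    assert n * n + (2 * n + 1) == (n + 1) ** 2
```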

The absolute property that we've shown -- which is "If the claim is true for n, then it's true for n + 1 as well," -- is something like a method to climb up a ladder. If you've got your feet at some n which is on the ladder, this property lets you go one step higher, to n + 1, on the ladder.

Now, we already verified the original claim for the first three cases. We could verify it for n = 4, by hand, as well (1 + 3 + 5 + 7 = 16), but why bother? Instead of doing that sum for n = 4, why not just use the property that we just proved? Our claim was true for n = 3, so it must be true for n = 3 + 1 = 4. And since it's true for n = 4, it must also be true for n = 5. And since it's true for n = 5, it must be true for n = 6. In fact, if you are given any finite counting number m, you can go through this sort of chain to "climb up the ladder" from 1 to m, and prove that this property is true for m as well.

Just by proving two claims: "It's true for n = 1", and "If it's true for n, then it's true for n + 1," we've managed to finagle a proof that works for every finite counting number. And we don't need to go through those laborious sums to prove it.
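The ladder-climbing argument above is easy to spot-check by machine. A minimal sketch in Python (the function names are made up for illustration; checking finitely many cases is of course a sanity check, not a proof):

```python
# Check the claim "the sum of the first n odd numbers is n^2" and the
# inductive step "if true for n, then true for n + 1" on a finite range.

def sum_of_first_odds(n):
    """Sum of the first n odd numbers: 1 + 3 + ... + (2n - 1)."""
    return sum(2 * k + 1 for k in range(n))

def claim(n):
    return sum_of_first_odds(n) == n ** 2

# Base case: n = 1.
assert claim(1)

# Inductive step, sampled: the (n + 1)th odd number is 2n + 1, so the
# sum for n + 1 is (sum for n) + 2n + 1, and the claim carries over.
for n in range(1, 100):
    assert sum_of_first_odds(n + 1) == sum_of_first_odds(n) + 2 * n + 1
    assert claim(n) and claim(n + 1)
```

Only the base case and the step are ever checked directly; the loop merely illustrates "climbing the ladder" case by case.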

Proving induction
There has been a small flurry of recent edits to the proof section which has caused me to become concerned for the integrity of the article. It would seem that the prospect of editing a genuine proof has evoked in some people the idea that playing mathematician with it is a good idea. I think the last three edits (as of 14:48, 6 September 2006 (UTC)) show that this is not the case.

Look at the section as it existed when I merged it. It was straightforward, though as the subsequent edits of JRSpriggs suggest, perhaps not optimally worded. There was a nice separation between the "weak" and "strong" inductive cases and I didn't try to use one to prove the other. I think that the presentation as I had it was better for a novice (many of whom must come to this page) than it has turned out in subsequent versions, especially the present one.

I think the last three edits have been a step backwards. First of all, some enterprising soul (known only by IP address) decided that weak induction should be proved for the integers from strong induction. Okay, it's elegant. It's not beautiful, though (see Mathematical jargon for the quote) because it clouds the point of what weak induction is about. People who like to think of induction as a domino effect (i.e. novices who visit this page often) will not be enlightened by learning that it is in fact a special case of transfinite induction. Not only will they likely not have heard of the latter, but they will be confused by the logical nitpicking that we use to eliminate the "trivial step" for strong induction. Furthermore, as a result of this edit, yet another axiom was tacked onto the list. I would like to point out that this is unnecessary, since it essentially states the explicit definition of the order with respect to which the natural numbers are well-ordered (as in the first axiom). It also confuses the novice with more logicalisms; ey will wonder what hidden reason we might have for stating something so obvious as that n is less than n + 1!

As I was writing this I also began to wonder, as was already discussed in the AfD for the old proof article, why exactly we need this proof. I defended it then, but it seems to serve an ambiguous purpose: when in mathematics do people actually use it? We could write down lots of equivalences that are not useful. My opinion is that this proof is relevant in exactly one situation: when the natural numbers are defined "top-down" as a subset of the real numbers (as the intersection of all sets closed under addition and containing 1). In that case, we are given an ordering satisfying n < n + 1 for all n which is a well-ordering (provable without induction), but induction is not clear. Then it must be proved, using the technique in this article. The other construction, by Peano axioms, of course includes induction and therefore this proof is irrelevant.

Concerning the dependence between "weak" and "strong" induction...the names are misnomers. They are equivalent principles for the natural numbers. "Weak" should not be derived from "strong" because, as I said above, it is confusing. It is also philosophically and historically different: once weak induction is noted, strong induction may be derived, and then generalized to transfinite well-ordered sets, where it is the only induction.

However, everything I've said here about mathematical and historical reasons is just opinion, and if there is anything this proof suffers from it is a mess of different opinions. My vision for it is that it should emphasize the "top-down" relevance, excise the dependence of weak and strong induction, and remove the formal logical flavor that characterizes it now (what with the list of axioms). However, this really ought to be sourced from somewhere, so that it becomes more than just opinion.

My $6.65. Ryan Reich 14:48, 6 September 2006 (UTC)


 * This proof requires at least three assumptions:

1. The set of natural numbers is well-ordered.
2. Every natural number is either zero, or n + 1 for some natural number n.
3. For any natural number n, n + 1 is greater than n.
 * My main concern is that the third assumption has been slighted by other writers. It is necessary to tie the ordering mentioned in the first assumption to the successor function mentioned in the second assumption. Without it, one can construct models which satisfy the first and second of these assumptions and also all of Peano's postulates other than induction, and yet do not satisfy mathematical induction. Specifically, one can use the polynomials in one variable with integer coefficients for which the leading non-zero coefficient (if any) is positive. Give zero, successor, addition, and multiplication the usual meanings on those polynomials. Assign Gödel numbers (actual natural numbers) to these polynomials; and let the ordering be the result of applying the usual ordering of the natural numbers to the Gödel numbers. Then this model is well-ordered with order-type ω and it satisfies all of Peano's postulates EXCEPT the axiom schema of induction. However, it does not satisfy the third assumption listed above. JRSpriggs 05:31, 7 September 2006 (UTC)


 * I suppose I'll be "playing mathematician" by actually having a conversation about this, but here's my feeling. What you have said is, of course, correct, but in a sense, it is unreasonable because of the way the first axiom is written.  Technically, of course, saying that "The natural numbers are well-ordered" could mean that they have any old well-ordering, but the intent was clearly to refer to the natural ordering induced by succession.  My point is simply that anyone not steeped in mathematical logic wouldn't notice the difference, and listing axiom 3 separately is awkward.  Your goal could equally well be accomplished by rephrasing axiom 1 to specify the ordering, which makes it flow better.
 * This was probably the least of my complaints, though. My overall problem with this proof is that it's clearly become a personal project for a number of people who all see it slightly differently, and as a result it's lost any elegance of prose or mathematical beauty it might once have had.  Every sentence must contain its precise assumptions and deductions, and all ideas must be dismembered into their atomic constituents, stated in separate sentences.  Furthermore, this encyclopedia is not a collection of proofs, but I don't think we really understand yet why we've included this one.  I argued in the AfD that it's an important point, but I don't think any of its importance is emphasized in the article.  Ryan Reich 13:18, 7 September 2006 (UTC)


 * When I saw that 64.42.233.61 had completely rewritten my proof, I was almost unable to resist the urge to just revert him out-of-hand. But I realized that I was angry just because I thought of my version as my creation, which does not mean that it is necessarily better. So I waited until I had calmed down and carefully read his proof. While he did not cover as many details as I did, he did give a more direct proof and one which is probably easier for most people to read. So I limited myself to just adding back the assumption which he used, but did not acknowledge. Actually when I had rewritten the proof earlier, I had put in more assumptions than were necessary. So I trimmed it down to what was necessary.
 * Of course, I realize that you and most people would be thinking of the usual ordering on the natural numbers when you refer to the well-ordering. But it does not say that. And it CANNOT say that without using the axiom schema of induction, which would be begging the question. One cannot even show that the usual well-ordering of the natural numbers is well-defined without using induction. The only situation I can see where this proof would be used would be if you start with set theory and then try to define the natural numbers as a subset of the ordinals.
 * I do not understand your statement that "... when the natural numbers are defined "top-down" as a subset of the real numbers (as the intersection of all sets closed under addition and containing 1). In that case, we are given an ordering satisfying n < n + 1 for all n which is a well-ordering (provable without induction), but induction is not clear.". How can you show within the theory of the real numbers that the natural numbers are well-ordered? JRSpriggs 03:51, 8 September 2006 (UTC)


 * I apologize for the hasty and impolite rewrite. I had just seen a way of eliminating two assumptions (n+1 is the smallest number bigger than n, 0 is the smallest number) and charged ahead in my excitement to put it into place. I would welcome any modifications to the proof in order to improve clarity - the ideal proof proves as well as teaches. Hopefully, we will still be able to get by without re-introducing the extra assumptions. I would also concur that the whole well-ordering of the naturals becomes problematic without defining them as a subset of the ordinals - which then introduces extreme complexity. 64.42.233.61 15:50, 13 September 2006 (UTC)


 * One can define the real numbers as a field totally ordered in a manner which respects addition and multiplication by a number greater than zero and such that every non-empty set which has an upper bound has a least upper bound (not necessarily belonging to the set). One could then treat the naturals as a subset of the reals and show that the inherited order induces a well-order on the naturals. It's not too hard to show that this definition of the reals provides uniqueness - but constructing an example without using induction might be difficult. Standard constructions are often based on the rationals (Cauchy sequences, Dedekind cuts), and the rationals are often defined as pairs of naturals with an equivalence relation. I'd be mighty impressed with a construction of the reals which does not rely on natural induction. 64.42.233.61 17:54, 26 September 2006 (UTC)

Frog example
The frog example is bad, since proving that jumping from one pad to the next is possible does not necessarily imply that the next lily pad will carry the frog. 128.130.40.149 13:13, 10 October 2006 (UTC)
 * I suggest that YOU reword it to make it correct. Or replace it with a better analogy. JRSpriggs 09:49, 11 October 2006 (UTC)
 * I just read the article for the first time and I would like to thank everyone who has contributed. It is a very nice article. That said, the one thing that seemed strange and out of place was this frog jumping example. I was left with questions of reachability and left wondering what the example was trying to clarify. --Jake 19:56, 9 December 2006 (UTC)

Cut-the-knot reference?
At the bottom of the page, the link to cut-the-knot says "(Warning, this is not a reliable source.)". Why is this there? The c-t-k wikipedia page doesn't mention any reliability problems. -- BillWeiss | Talk 16:37, 17 October 2006 (UTC)


 * On the page referred to it says "... n here is an 'arbitrary' integer." in a context where n must be positive. JRSpriggs 07:12, 18 October 2006 (UTC)

Elevator proof
User:JIP contributed the following "proof":


 * With mathematical induction, it is possible to prove that the elevator in an apartment building is always going in the right direction when it arrives at your floor. The proof is quite simple:
 * Let n equal the number of floors. In the base case n=2, the proof is trivial: there is only one direction for the elevator to go to.
 * Consider the case for n+1 floors. If the elevator is on any of the n first floors, the claim is true by induction. If it's on the top floor, there is only one direction for it to go, and thus the claim is trivially true. Thus, the case for n+1 floors is proven.

I'm not sure if this proof is wrong (and facetious) or proper (and ambiguous), so I have removed it until he gives a coherent explanation of what "the right direction" means. See related discussion on his talk page.

In particular, if he means that "the elevator is always going in the direction that you wish it to go," then the proof is facetiously wrong because the statement "If the elevator is on any of the n first floors, the claim is true by induction" is false. Consider two people who enter an elevator on the ground floor going up; consider myself on the 3rd floor going down. If one of them presses "3" and another "5", then the elevator most certainly isn't going in the direction I want it to go when it arrives at my floor.

--Drostie 05:27, 24 November 2006 (UTC)
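The counterexample can even be played out in a toy simulation. A Python sketch (the scheduling policy here, a simple sweep serving requests in ascending order, is invented purely for illustration; the function name is hypothetical):

```python
# Toy illustration: an elevator travelling up from the ground floor to
# serve requests for floors 3 and 5 arrives at floor 3 going UP, while
# the person waiting on floor 3 wants to go DOWN.

def direction_when_arriving(start, targets, floor):
    """Direction of travel when the elevator first reaches `floor`
    while serving `targets` from `start` (simple ascending sweep)."""
    position = start
    for target in sorted(targets):
        step = 1 if target > position else -1
        while position != target:
            position += step
            if position == floor:
                return "up" if step > 0 else "down"
    return None

arriving = direction_when_arriving(start=1, targets=[3, 5], floor=3)
wanted = "down"  # the person on floor 3 wants to go down
assert arriving == "up" and arriving != wanted
```

The specific policy does not matter much; any elevator serving multiple simultaneous requests can exhibit the same failure, which is why the "claim is true by induction" step is false under the "direction you wish" reading.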

Who is JRSpriggs?
Yesterday I added a very simple paragraph about Fermat's method of infinite descent to this article, simply because I saw no such reference in the article yet, and that method is historically significant.

Today I found that "JRSpriggs" had changed a simple sentence into an obtuse piece of hyper-mathematical prose. I went looking for his user page, to discuss it with him directly, but there was no "User:JRSpriggs" page within Wikipedia. So I reverted his edit.

Now I see that "JRSpriggs" has already gotten involved in a few disputes on this page. Now I'm curious -- who is this guy? And why won't he leave his user page up so the rest of us can see it? DavidCBryant 15:45, 26 November 2006 (UTC)


 * I put back JRSpriggs's changes for two reasons. One of his changes corrected the spelling of one of the links; the name was "Pierre de Fermat", not "pierre de fermat", as your addition had it. The other change was that he added an explanation of the relationship between infinite descent and induction. Since this is the article about induction, I felt that an explanation of the connection was required.


 * There is no requirement that any user have a user page.


 * Thanks for your contributions. -- Dominus 18:38, 26 November 2006 (UTC)


 * OK. Fine by me. I do wonder about accessibility, though. For the general reader, as opposed to trained mathematicians. DavidCBryant 22:21, 26 November 2006 (UTC)


 * I always worry about that too. But without some sort of explanation, won't a general reader be mystified about what infinite descent is even doing here in the first place?  So I thought it would be better to put in some explanation.  If you think you can make the explanation more accessible to the general reader, please go for it.   -- Dominus 22:45, 26 November 2006 (UTC)

To DavidCBryant: I have been editing articles (mostly in Math and Physics) since February 2006. I do not have a user page because I never created one -- I have no interest in talking about myself for the public. I do have a talk page, User talk:JRSpriggs, but I prefer to talk on the talk pages of the article in question. The reason I took out your sentence, "Therefore if the statement is not true when n = 1, it cannot be true for any natural number.", is two-fold: (1) it is unnecessary, because "... if a statement is true for a natural number n it must also be true for some smaller natural number m (m < n)." is sufficient to reach a contradiction without any additional assumption about one or zero; (2) we are using zero-based induction in this article (over some opposition which wants one-based) and mentioning one would just throw fuel on the fire. And the sentence "This is equivalent to using mathematical induction with the inductive hypothesis being that the statement is false for all natural numbers less than or equal to m." was added to fill the gap left by the sentence removed and for the reason mentioned by Dominus. JRSpriggs 07:38, 27 November 2006 (UTC)


 * Thank you for replying, JR. Do you mind if I call you JR?
 * I understand your reasoning. You're clearly well trained in mathematics. This may not be the right place to bring it up, but I liked the original paragraph better for two simple reasons – it was more intuitive, and not rigorous.
 * I understand the need for rigor in proofs. But an encyclopedia is not a textbook. My judgment is certainly fallible. But in my opinion, some statements in an encyclopedic article about a mathematical subject (especially an article about a relatively simple subject, like this one) should be phrased in layman's language calculated to elicit the Aha! response from a few readers who may not yet have contracted the math bug. So I was trying to steer away from big words like "equivalent" and "induction hypothesis" – not because I can't use them myself, but because I'm writing for a target audience.
 * I can also see that the question "is zero a natural number, or not?" will never be settled one way or the other. I didn't know that the other people working on this page had decided to answer that question in the affirmative. In the future I will read the talk pages more carefully.
 * Anyway, I have no desire to belabor the point about the target audience. Thanks again for writing back. DavidCBryant 23:07, 27 November 2006 (UTC)


 * I rewrote the last sentence to try to make it even clearer (merging our two versions). Now, it says "Using mathematical induction (implicitly) with the inductive hypothesis being that the statement is false for all natural numbers less than or equal to m, we can conclude that the statement cannot be true for any natural number n.". I hope that you like that better. JRSpriggs 06:37, 28 November 2006 (UTC)

I have added some links to a video podcast and they were removed
Hi everyone, I have added some links to a video podcast that I own. I think they are a nice addition to Wikipedia; please look at them and express your opinion here. Judge for yourself whether or not the links are really useful to Wikipedia.


 * Video PodCast by http://www.isallaboutmath.com/

If any of you think they are valuable to wikipedia then feel free to add them back in the external links.

Regards SilentVoice 03:26, 22 January 2007 (UTC)

"Floating" link
At the start of the Generalization section of the article is the phrase and wikilink to "Transfinite induction". It seems syntactically out of place. Perhaps a math-savvy editor would like to review this. --zenohockey 02:15, 18 February 2007 (UTC)

First use of induction?
From arXiv: "Diophantus' 20th Problem and Fermat's Last Theorem for n=4: Formalization of Fermat's Proofs in the Coq Proof Assistant" [cs.LO/0510011], David Delahaye (CEDRIC), Micaela Mayero (LIPN): We present the proof of Diophantus' 20th problem (book VI of Diophantus' Arithmetica), which consists in wondering if there exist right triangles whose sides may be measured as integers and whose surface may be a square. This problem was negatively solved by Fermat in the 17th century, who used the "wonderful" method (ipse dixit Fermat) of infinite descent. This method, which is, historically, the first use of induction, consists in producing smaller and smaller non-negative integer solutions assuming that one exists; this naturally leads to a reductio ad absurdum reasoning because we are bounded by zero. We describe the formalization of this proof which has been carried out in the Coq proof assistant. Moreover, as a direct and no less historical application, we also provide the proof (by Fermat) of Fermat's last theorem for n=4, as well as the corresponding formalization made in Coq. Maybe we should start a history section? --- Charles Stewart 02:41, 27 October 2005 (UTC)

Um, on that note, isn't Euclid's proof of the infinitude of primes basically inductive? At least from what I remember of a (unfortunately rather literally translated) version of his proof, it was...
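The statement Fermat proved by descent (that no right triangle with integer sides has a square area) can at least be spot-checked for small triangles. A brute-force sketch in Python, with invented helper names; the finite search of course proves nothing, which is exactly why infinite descent is needed:

```python
from math import isqrt

def is_square(n):
    """True if n is a perfect square (n >= 0)."""
    r = isqrt(n)
    return r * r == n

def square_area_triangles(limit):
    """Integer right triangles (a, b, c) with legs below `limit`
    whose area a*b/2 is a perfect square. Fermat: there are none."""
    found = []
    for a in range(1, limit):
        for b in range(a, limit):
            c2 = a * a + b * b
            if is_square(c2):
                ab = a * b  # twice the area
                if ab % 2 == 0 and is_square(ab // 2):
                    found.append((a, b, isqrt(c2)))
    return found

# No counterexample among all triangles with legs up to 200.
assert square_area_triangles(200) == []
```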

Discrepancy Between Articles
In the section on the horse paradox it says it fails for n=2, but the article All horses are the same color says it fails for n=1. This is very confusing. 128.2.5.212 03:31, 8 May 2007 (UTC)

1 as a prime?
From the article: "...If m is prime then it is certainly a product of primes...". I suppose you take 1 to be a prime; since it is not by the common convention, please state that explicitly or (better) change the example. (unsigned)
 * I think the idea is that you see the prime as the 'product' of one term, the prime itself. So 1 is not taken as a prime (this would ruin the 'unique' part of the Fundamental theorem of arithmetic) ssepp(talk) 20:48, 11 May 2007 (UTC)

1 is a product of 0 primes. Michael Hardy 21:03, 28 August 2007 (UTC)

Equivalence to WOP
In the intro I read:
 * Indeed, the validity of mathematical induction is logically equivalent to the well-ordering principle.

What is it referring to? We have a section "proof or reformulation of mathematical induction" where it is shown that MI can be proven assuming WOP and some other axioms, but this is something different from "logical equivalence". --Pokipsy76 17:32, 28 August 2007 (UTC)


 * I've simply removed the sentence. --Lambiam 13:31, 9 January 2008 (UTC)

It seems to me that the idea is that an analogous proof would go through for any well-ordered set. That is, that any well-ordered set admits a proof technique analogous to mathematical induction.

For example, on the ordinals, we have transfinite induction; on trees or lists over well-ordered sets we have the appropriate varieties of structural induction, and so forth.

-- Dominus (talk) 15:07, 9 January 2008 (UTC)


 * There is in fact a discussion of the issue that almost says this (and certainly implies it) at Structural induction. In fact, we don't need a (total) well-order; a (partial) well-founded relation suffices. The following two statements are logically equivalent in standard predicate logic: (1) induction is valid for some domain with a given relation where the induction process does not use a "base case" but only an induction step, which consists of concluding to the validity of a property for some element from the inductive hypothesis of the validity of that property for all predecessors of that element in the given relation, and: (2) the given relation is well-founded. I'd be happy to add this to the encyclopedia, if it were not for the requirement of citing published reliable sources, which I don't have for this.
 * For the naturals specifically, however, the article Well-ordering principle seems an obfuscation to me, what with the naturals being a subset of the integers and the consideration of the embedding of the naturals into the reals as a complete space, and the mention of Modus Tollens. --Lambiam 00:49, 10 January 2008 (UTC)


 * The reason it's stated as being logically equivalent is that not only can you get MI from assuming the WOP and some of Peano's axioms, but you can get the WOP from assuming MI and some of Peano's axioms. Each one logically leads to the other, hence they're considered logically equivalent. If I feel ambitious enough to look through old math texts for a citation, I will at a later time.

Is there a proof in predicate logic of mathematical induction?
--Philogo (talk) 00:31, 29 February 2008 (UTC) --Philogo (talk) 21:50, 25 February 2008 (UTC) —Preceding unsigned comment added by Philogo (talk • contribs) 21:46, 25 February 2008 (UTC)
 * Usually induction is given as an axiom or axiom scheme; see Peano arithmetic. Using category theory one can define the natural numbers as an initial algebra; this definition can also be expressed in the usual axiomatizations of set theory, and then the validity of induction can be proved using higher-order logic. --Lambiam 11:00, 26 February 2008 (UTC)
 * Yes, I know that usually induction is given as an axiom or axiom scheme, but I have not seen a proof in predicate logic. Have you seen one? Where can I see a proof in higher-order logic? --Philogo (talk) 13:21, 26 February 2008 (UTC)
 * It is not strange that you haven't seen a proof in predicate logic, because the principle cannot even be formulated in predicate logic. Usually mathematicians do not give their proofs in a formal proof system. Proofs of the axiom of induction for various higher-order logics are given in (Hatcher, William S., 1982. The Logical Foundations of Mathematics. Pergamon). --Lambiam 19:54, 26 February 2008 (UTC)
 * Thanks, I'll see if I can lay my hands on it. I am not sure whether it is true to say that the principle cannot be formulated in predicate logic. If, as you say, it can be proven in higher-order logic, and any higher-order logic is a predicate logic, then it must be expressible in that higher-order logic and so a fortiori in predicate logic. Mendelson, Introduction to Mathematical Logic, Van Nostrand Reinhold, 1964, page 103 has

(S9) For any wf 𝒜(x) of S, 𝒜(0) ⇒ ((∀x)(𝒜(x) ⇒ 𝒜(x′)) ⇒ (∀x)𝒜(x))

... (S9) is an axiom schema providing an infinite number of axioms.

These axioms would be expressible in just first-order predicate logic. So my question is whether there is any proof that, for any wf 𝒜(x) of S, 𝒜(0) ⇒ ((∀x)(𝒜(x) ⇒ 𝒜(x′)) ⇒ (∀x)𝒜(x)) is a theorem. --Philogo (talk) 20:55, 26 February 2008 (UTC)


 * If the induction formulas have not been added as axioms, then for specific wfs there may be a proof, like when (∀x)ℬ(x) is a theorem, but lacking axioms for induction, such induction formulas cannot be proved in general, since there are nonstandard models of the integers for which induction does not hold. A simple nonstandard model is to adjoin an extra element ω to the standard naturals for which ω' = ω. Then if ℬ(x) is the wf x' ≠ x, the induction formula for this wf fails to hold in the model and should therefore not be provable (unless you're working with an unsound system). --Lambiam 22:18, 26 February 2008 (UTC)
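The simple nonstandard model just described can be spot-checked mechanically: adjoin an element ω with ω′ = ω and take ℬ(x) to be x′ ≠ x. A Python sketch (ω is represented by an arbitrary string; only finitely many elements can be sampled):

```python
# In the model N ∪ {ω} with succ(ω) = ω, the induction formula for
# B(x): "x' ≠ x" fails: base case and inductive step both hold, yet
# B is false at ω, so (∀x)B(x) is false in the model.

OMEGA = "ω"  # illustrative stand-in for the extra element

def succ(x):
    return OMEGA if x == OMEGA else x + 1

def B(x):
    return succ(x) != x

sample = list(range(100)) + [OMEGA]

assert B(0)                                           # base case holds
assert all((not B(x)) or B(succ(x)) for x in sample)  # step holds
assert not B(OMEGA)                                   # conclusion fails
```

The step holds vacuously at ω (its antecedent B(ω) is false), which is what lets the premises of induction be satisfied while the conclusion is not.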


 * The first-order theory S referred to has a single two-place predicate letter A, and we may write t = s for A(t,s); one individual constant a (written as 0); and three function letters f, g and h, and we may write t′ for f(t), t + s for g(t,s), and t·s for h(t,s). S has eight proper axioms (I will not bother to list them) and the axiom schema (S9) referred to above. ((S1) through (S9) together correspond, with one proviso, to Peano's Postulates.)

S, with the proper axioms, is of interest since it is adequate for the proofs of all the basic results of elementary number theory, according to Mendelson. What I am interested in is whether, for any wf 𝒜(x) of S,

"𝒜(0) ⇒ ((∀x)(𝒜(x) ⇒ 𝒜(x′)) ⇒ (∀x)𝒜(x))"

is a theorem; presumably not, otherwise we would not need it as an axiom schema! But for any wf 𝒜(x),

"𝒜(a) ⇒ ((∀x)(𝒜(x) ⇒ 𝒜(x′)) ⇒ (∀x)𝒜(x))"

eg

"F(a) ⇒ ((∀x)(F(x) ⇒ F(x′)) ⇒ (∀x)F(x))"

intuitively looks like a logical truth. Were it one, or were it included as a (logical) axiom, we would not need it as a proper axiom. Hence my original question, "Is there a proof in predicate logic of mathematical induction?" or, more precisely perhaps: is there a proof in predicate logic (not necessarily first-order) of any wf of the form:

"𝒜(a) ⇒ ((∀x)(𝒜(x) ⇒ 𝒜(x′)) ⇒ (∀x)𝒜(x))"

Perhaps the answer is in Hatcher, William S., 1982. The Logical Foundations of Mathematics. --Philogo (talk) 00:24, 29 February 2008 (UTC)


 * No, it isn't possible to prove arbitrary induction instances using logic alone. It's easy to create structures in which various induction principles fail - start with the natural numbers and adjoin some extra elements greater than all the natural numbers. In the division between "logical axioms" and "mathematical axioms", the induction ones fall in the latter category. — Carl (CBM · talk) 02:23, 29 February 2008 (UTC)
 * it follows, does it not, that there is no proof of, eg

"F(a) ⇒ ((∀x)(F(x) ⇒ F(x′)) ⇒ (∀x)F(x))"

no matter what interpretation we have of the ′ function. --Philogo (talk) 13:34, 29 February 2008 (UTC)
 * It will depend on what formula F is. If F(x) is "x=x" then that particular induction axiom will of course be provable. Or maybe F is a logically valid sentence with no free variables at all. But there are many formulas F for which the corresponding induction axiom is not provable. — Carl (CBM · talk) 14:55, 29 February 2008 (UTC)

Thanks for that. On reflection I believe I should have said: it follows that the sentence

(S9a) F(a) ⇒ ((∀x)(F(x) ⇒ F(x′)) ⇒ (∀x)F(x))

is not logically true, because there is at least one interpretation where it is false when the domain of the argument to F is not denumerable. I had thought that the function "′" ensured that the domain of x was denumerable. Assuming not, then, were there some wff G(x) which would be true just in case the domain of x was denumerable, its conjunction with (S9a), i.e. (S9a) & G(x), i.e.

F(a) ⇒ ((∀x)(F(x) ⇒ F(x′)) ⇒ (∀x)F(x)) & G(x)

might be a logical truth; and if it were a logical truth but there were no proof of it in some theory T, then any such theory T would not be complete. My apologies if this is all old stuff, but I have not found much relevant discussion on this post-Frege. --Philogo (talk) 18:29, 29 February 2008 (UTC)


 * I'm not sure what you mean by "logically true". Many theories are incomplete, and you cannot hold that against truths that are not covered by some such theory, but are included in another one.
 * In the counterexample I gave above to (S9), namely with the domain of non-standard "naturals" N ∪ {ω}, the domain is denumerable, so non-denumerability is not the issue.
 * The formulas you give for (S9) and variants are hard to interpret because the parentheses are not balanced. I don't have the 1964 edition of Mendelson, but the axiom schema (S9) I see in Mendelson (4th edition, 1997, ISBN 978-0412808302, page 155) is:
 * ℬ(0) ⇒ ((∀x)(ℬ(x) ⇒ ℬ(x′)) ⇒ (∀x)ℬ(x)).
 * Forming the conjunction (S9) & G(x) is meaningless, since the variable x is not bound. There is no predicate of first-order theory expressing that some domain is denumerable. In giving proofs in logic, it is irrelevant what properties a predicate such as G is meant to have or not have in an intended interpretation ("true just in case the domain of x was denumerable"), since in the proof you have no access to the interpretation, so you cannot use this extra-logical knowledge.
 * The successor function "′" is just an arbitrary symbol in logic; like all predicate and function symbols, it has no other properties than follow from given axioms.
 * Basically everything CBM wrote above repeats points I already made in my reply from 22:18, 26 February 2008. I am sorry I was not clearer, but let me once more repeat the situation:
 * If you add induction as an axiom schema, then (of course) you can now prove it for any wf of your logic, since each axiom is a theorem.
 * If you do not add induction as an axiom schema, you cannot prove it for arbitrary wfs. (That is precisely the reason it is added as an axiom schema: that you cannot prove it otherwise.)
 * It is possible to prove the induction rule for some specific wfs, such as the tautological wf x = x.
 * --Lambiam 15:06, 1 March 2008 (UTC)


 * Also, the issue of denumerability isn't important. Any first-order structure has an elementarily equivalent denumerable substructure, by the Lowenheim-Skolem theorem, so we could assume all structures are denumerable if we are only interested in whether they satisfy specific sentences. — Carl (CBM · talk) 15:14, 1 March 2008 (UTC)


 * Not sure as to the state of indentation here... but Lambiam (and others interested) should check out non-standard arithmetic and the compactness theorem, which is very useful for proving that certain properties are not expressible in first-order logic. For example, take the structure $$\mathfrak Z$$ with universe $$\mathbb Z$$ and 2-ary relation A which says "x and y are adjacent" - i.e. $$\langle x,y \rangle \in A^{\mathfrak Z} \leftrightarrow |x-y|=1$$. Then you can prove that there is a structure $$\mathfrak Z'$$ which is elementarily equivalent to $$\mathfrak Z$$ (i.e. satisfies exactly the same sentences), but is disconnected - in the sense that there are two elements of $$\mathfrak Z'$$ which are not connected by a finite sequence of adjacent (according to A) elements.  Thus there is no sentence in first-order logic which is true of a structure if and only if it is connected, since if there were, $$\mathfrak Z$$ would satisfy it and $$\mathfrak Z'$$ would not; but we constructed them to be elementarily equivalent.  69.181.216.178 (talk) 22:27, 8 May 2009 (UTC)
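For what it's worth, the compactness construction sketched above runs roughly as follows (the standard textbook argument, stated briefly):

```latex
% Sketch: a disconnected elementary equivalent of (Z, adjacency), via compactness.
Add two new constant symbols $c, d$ to the language, and for each $n$ let
$\varphi_n$ say ``there is no $A$-path of length $\le n$ from $c$ to $d$''
(expressible, since each fixed length is a finite first-order condition).
Every finite subset of
$\mathrm{Th}(\mathfrak{Z}) \cup \{\varphi_n : n \in \mathbb{N}\}$
is satisfiable in $\mathfrak{Z}$ itself: interpret $c$ as $0$ and $d$ as an
integer farther away than any path length mentioned in that finite subset.
By compactness the whole set has a model $\mathfrak{Z}'$, which satisfies
$\mathrm{Th}(\mathfrak{Z})$ (so it is elementarily equivalent to
$\mathfrak{Z}$) but in which $c$ and $d$ are not connected by any finite
chain of adjacent elements.
```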

Axiom of Induction
Is not the formula for the Axiom of Induction wrong? It should be a &, but there it is a ^. Reko (talk) 06:05, 10 April 2008 (UTC)


 * The wedge is used in formal logic to mean "and" (or, more accurately, in the intended interpretation the wedge is interpreted as "and"). --196.209.178.209 (talk) 20:05, 10 April 2008 (UTC)


 * For the notations used, see Logical connective. --Lambiam 22:13, 10 April 2008 (UTC)

Equivalent proof by contradiction
Is it worth mentioning in the article that proof by induction can easily be converted to proof by contradiction by use of the logical contrapositive? (or would it just confuse non-mathematicians further?) The advantage of giving the argument as a proof by contradiction is that it avoids the "problem" of an infinite number of steps.  D b f i r s   19:55, 4 July 2008 (UTC)


 * EVERY proof by mathematical induction can be written as a proof by contradiction by saying that if the proposed theorem is false, then there is a smallest counterexample. Then one shows that the number immediately preceding that counterexample would also be a counterexample, contradicting the "smallest" nature of the smallest counterexample.  Probably that should be mentioned somewhere in the article.  But not just as a fact concerning only that one proof. Michael Hardy (talk) 21:07, 4 July 2008 (UTC)
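The least-counterexample argument just described can be written out in a few lines (a standard rendering; P stands for the proposed theorem):

```latex
% Induction via the least counterexample (well-ordering of N).
Let $S = \{\, n \in \mathbb{N} : \neg P(n) \,\}$ be the set of counterexamples,
and suppose $S \neq \emptyset$.  By the well-ordering of $\mathbb{N}$, $S$ has
a least element $m$.  Since $P(0)$ holds (the base case), $m \neq 0$, so
$m = k + 1$ for some $k \in \mathbb{N}$.  By minimality of $m$, $k \notin S$,
i.e.\ $P(k)$ holds.  But the inductive step gives $P(k) \Rightarrow P(k+1)$,
so $P(m)$ holds, contradicting $m \in S$.  Hence $S = \emptyset$, i.e.\
$P(n)$ holds for all $n$.
```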


 * It seems to me that the concepts (induction and contrapositive) are orthogonal. It also seems to me that there is no reason to bring in the logical contrapositive. It would just confuse the article with TMI (too much information). DeaconJohnFairfax (talk) 22:10, 22 July 2008 (UTC)


 * The contrapositive (take the smallest false case and show there must be a smaller one) is Fermat's "method of infinite descent". Maybe it could be spun off into a separate article (if it hasn't been already) if we don't want to discuss it here. CharlesTheBold (talk) 18:42, 7 December 2009 (UTC)

Redundant sections on "Transfinite Induction" - recommend delete one.
The subsection called "Generalization" is redundant with the section called "Transfinite Induction." I recommend that one of them be deleted. It seems to me that it would be better to delete the one called "Generalization" because it contains less information and is more difficult to understand than the one called "Transfinite Induction." DeaconJohnFairfax (talk) 22:13, 22 July 2008 (UTC)
 * Done. Good catch. --Trovatore (talk) 22:17, 22 July 2008 (UTC)

Homework/Geometry
WHAT IS THE MEANING OF CONSECUTIVE EVEN —Preceding unsigned comment added by 71.100.169.80 (talk) 00:10, 24 August 2008 (UTC)

A Thank-You to the Editors of the Page
The article took about seventy pages of my textbook and condensed the result into a (readable) two to three page document. You've been a wonderful help. As this evolves please continue to remember the non-experts. Nickjost (talk) 21:40, 27 September 2008 (UTC)

Example
The example section should use k rather than n for the proof. It is against convention to reuse n in the proof. Opinions? Tparameter (talk) 02:25, 20 May 2009 (UTC)

Good point: "m" or any other constant should be used in place of n. —Preceding unsigned comment added by Sidhu 2201 (talk • contribs) 14:29, 12 October 2009 (UTC)

I believe it should be k rather than n also. See below (BTW, this is just a draft of how it should look, with basic steps).

Prove that $$0+1+2+...+n=\frac{n\left( n+1 \right)}{2}$$

Solution:

$$\begin{align} & \text{let }n=1 \\ & 0+1=\frac{\left( 1 \right)\left( 1+1 \right)}{2} \\ & 1=\frac{\left( 1 \right)\left( 2 \right)}{2}=1 \\ & 1=1 \\ & \text{let }n=k \\ & \text{assume:} \\ & 1+2+...+k=\frac{k\left( k+1 \right)}{2} \\ & \text{show:} \\ & 1+2+...+k+\left( k+1 \right)=\frac{\left( k+1 \right)\left( \left( k+1 \right)+1 \right)}{2} \\ & \underline{1+2+...+k}+\left( k+1 \right)=\frac{k\left( k+1 \right)}{2}+\left( k+1 \right) \\ & =\frac{k\left( k+1 \right)+2\left( k+1 \right)}{2} \\ & =\frac{{{k}^{2}}+k+2k+2}{2} \\ & =\frac{{{k}^{2}}+3k+2}{2} \\ & =\frac{\left( k+1 \right)\left( k+2 \right)}{2} \\ & =\frac{\left( k+1 \right)\left( \left( k+1 \right)+1 \right)}{2}=RHS \end{align}$$

We have shown the statement is valid when n = 1, and we have proved that if it is valid for n then it is valid for n + 1; therefore the statement is valid for n = 1, 2, 3, …

James137 (talk) 00:30, 4 November 2010 (UTC)
 * Isn't the proof circular? It assumes that 1 + 2 + ... + k = k(k+1)/2, which is the statement that it attempts to prove.
 * Fungalmungal (talk) 03:07, 9 November 2010 (UTC)

No, it's not circular. There are two things you need to do:

1. Prove that the formula is right for n=1

2. Prove that if the formula is right for n, then it is also right for n+1

Thus, you will get, "If it's right for n=1, then it's right for n=2. If it's right for n=2, then it is right for n=3. If it's right for n=3, then it's right for n=4. And so forth". Wisnuops (talk) 05:20, 26 March 2011 (UTC)
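None of this is a proof, of course, but the two steps Wisnuops lists can be checked numerically; a quick sketch (the function name closed_form is mine, just for illustration):

```python
# Not a proof -- just a numeric sanity check of the two steps above
# for the closed form 1 + 2 + ... + n = n(n + 1) / 2.

def closed_form(n):
    return n * (n + 1) // 2

# Step 1: the base case n = 1.
assert closed_form(1) == 1

# Step 2: the inductive step, checked numerically for many k:
# if the formula gives the sum up to k, then adding (k + 1)
# must give the formula's value at k + 1.
for k in range(1000):
    assert closed_form(k) + (k + 1) == closed_form(k + 1)

# Direct comparison with the actual sums, for good measure.
for n in range(1000):
    assert closed_form(n) == sum(range(n + 1))
```

The inductive step here is exactly the algebra in the draft above: k(k+1)/2 + (k+1) = (k+1)(k+2)/2.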

1st vs 2nd order logic
I added a mention that the axiomatic definition in the article used second-order logic, since the issue of what constituted a property that one could use induction on bugged me for a long while between when I learned about (unformalized) induction in high school algebra, and when I studied it more formally much later. If it seems too distracting then please feel free to remove it. I have the feeling that it helps the exposition, but I'm not really sure of this. 66.127.55.192 (talk) 09:36, 17 February 2010 (UTC)

Subtle Disinformation Found and Removed, More May Be Present
Please see: http://en.wikipedia.org/w/index.php?title=Mathematical_induction&action=historysubmit&diff=332756958&oldid=331892638

Here an editor using IP 99.141.191.52 changed four numbers in al Karaji's proof, simply incrementing them all by one. The edit summary for the post was simply, "trolling", which was a completely honest description and an example of hiding in plain sight.

As I am not an expert on this subject, nor have I thoroughly read the bulk of the article, it's possible that other instances of insidious abuse are still present. Please be on the lookout.

Riyuky (talk) 16:54, 24 February 2010 (UTC)

Boolos, the sorites, and "Down the Slippery Slope"
An anonymous editor recently added a mention of the sorites paradox, which I think was a good idea. But to address this properly requires more discussion, I think, because the sorites paradox appears on its face to be a refutation of induction. George Boolos wrote an essay "The Justification of Mathematical Induction", reprinted in his anthology Logic, Logic, and Logic, which I think addresses this point adequately. It might usefully be mentioned here. I don't have a copy of the essay handy, but I will eventually try adding something about the sorites, based on that essay, if nobody else does it first. —Dominus (talk) 17:44, 4 March 2010 (UTC)
 * I had the wrong essay. It is indeed in Logic, Logic, and Logic, but the essay I wanted is Chapter 22, "Zooming Down the Slippery Slope".  The essay deals directly with the sorites paradox, considering explicitly the argument: A. Zero is small.  B. If n is small, then n+1 is small.  C.  Therefore, one billion is small.   C follows from A, B, and the induction principle.  Boolos is unwilling to discard the induction principle, and wants us to stipulate that A is true and C is false (if not, change the constants) and so the induction premiss B must fail.  But, as Boolos says, "the trouble is that it looks true".  I will take a closer look at the essay and see if there is anything there that might be usefully added to this article, or to the sorites article.  —Dominus (talk) 21:04, 9 March 2010 (UTC)


 * It sure looks wrong to me. Suppose "1" isn't "small" (whatever that means!). Then even though n may be "small" (e.g. 0.000001), n+1 isn't "small". Now if n is an integer and 1 is an integer, then none of n, n+1, etc. are "small". BillWvbailey (talk) 23:31, 9 March 2010 (UTC)
 * Well, n can't be 0.000001 &mdash; in context, n is clearly a natural-number variable, and is not allowed to take real-number values. I think you don't have to agree with Boolos's argument, but you should give him credit for not making one that's trivially wrong.  In particular you need to pick a notion of "small" for which the argument is not obviously wrong on its face. --Trovatore (talk) 23:52, 9 March 2010 (UTC)
 * Ahh, context, context, and context. Let's accept the context. Then the question of how "small" is defined, and in particular the relationship between "1" and "small", leaps out like the proverbial hammer hitting the smashed thumb. So let's follow this through to the bitter end. Specify a little relational matrix such as { small R small yields small, small R large yields large, large R small yields large, large R large yields large }, and let's define the relationship R as the classical intuitive "successor" (whatever that means). Then we have to define 1 as "small" or as "large". Define 1 as "small". Is "0" large or small? Define "0" as small. Per our relational matrix, 0+1 --> small+small --> small. Etc. Ergo all numbers are "small"; each successor inherits the characteristic "small". This is like a symbol "6" inheriting from symbol "5", via the relationship "+ 1", the characteristic "integer". About all I'm getting from this trivial exercise is that the symbol string "small" that we've associated with 0 and with 1, together with the relation-matrix and our definitions, means that "small" is inherited as a characteristic (like "integer"). Nothing more. Bill Wvbailey (talk) 01:20, 10 March 2010 (UTC)
 * I see no value in arguing with my two-sentence summary of Boolos's article. The best one could hope for would be to refute my summary without ever fully engaging the real argument.   I also see little value in arguing directly with the article on this talk page.  The article is published, peer-reviewed scholarly research  by a well-regarded mathematical logician.  Wikipedia core policy is that this material is appropriate for inclusion regardless of whether we think it is correct.  —Dominus (talk) 05:42, 10 March 2010 (UTC)
 * Hey, you brought this up on a talk page. Ergo the people talk. I assert that your premise "If it is published then it must be put in a wiki article" is false. The question isn't whether it is published; the question is "Is it useful to this article?" I'd say, based on the fact that when I read your talk-page summary of Boolos' little whatever-it-is even my non-mathematician's hair (what's left of it) stood on end, this should give you pause. At best this "infinite sorites" argument seems a distraction, and as I kind of demonstrated above, its apparent "punch-line" derives from confusing formal operations on symbols with their meanings as defined outside the system. To my mind the "horse of a different color" section also is distracting in a similar way.


 * I did not say "If it is published then it must be put in a wiki article"; I only said it was appropriate for inclusion. And if you think that discussing the content of an article you haven't read is a good use of your time, please go ahead.  —Dominus (talk) 14:49, 11 March 2010 (UTC)


 * I am reacting to what you wrote, not what Boolos wrote. Boolos is not going to write the entry into this article, you are. It's important that you have it correct if you insist on adding it. But two more problems come to mind with your rebuttals. (1) You insist above that because an éminence grise (aka Boolos) wrote something, it must be good/wonderful/true. This is a pronouncement ex cathedra. We should be debating the argument on its own merits and not accept pronouncements ex cathedra. (2) A sorites is a circular argument (at least my Webster's New Collegiate Dictionary says it is) -- e.g. A-->B & B-->C & C-->D & D-->A. Closing the loop is necessary for an argument to be a sorites (according to my dictionary). I checked 18+/-1 logic books (no kidding, including Boolos, Burgess, and Jeffrey 2002 Computability and Logic, Fourth Edition) but found nothing in their indices re "sorites". At this time I can see no way that a sorites has anything whatever to do with induction, but maybe you can expand the Boolos presentation and show us what's going on in more detail. Bill Wvbailey (talk) 00:27, 12 March 2010 (UTC)


 * I did not insist that because an éminence grise wrote something, it must be good, wonderful, or true.  I only said that scholarly research published in peer-reviewed literature was appropriate for inclusion in Wikipedia.  (I am glad that this time you did not falsely attribute a manufactured quotation to me; thank you.)  As for your ignorance of the sorites paradox and your inexplicable failure to find it in at least 17 logic books, in the Wikipedia article to which I linked in my initial message, or in the references cited in that article: you might try that article, and particularly the sections of it that discuss its relationship to mathematical induction.  —Dominus (talk) 17:32, 12 March 2010 (UTC)


 * RE improving the article: This article could use some "philosophic" development. I've read that crows can count to a certain number. Does this say something "deep" about neural wiring and its relationship to "number sequence"? And just what is the human experiential relationship between "inductive reasoning" and "mathematical induction"? I remember a fleeting reference to the fact that Brouwer tried to base "number sequence" on an intuitive notion of "time sequence". And there's the Melzak-Lambek machines based on digging a hole ("empty", "zero"), then throwing into or removing pebbles (discrete objects) from the holes per an instruction-list (so what is the relationship between an instruction list and induction?), and Post-Turing machines with their primitive tally marks (ditto). And how does the intuitive notion of one-to-one correspondence play into "induction" e.g. the way we used to keep track of distance in the army (put a pebble in the pocket for each 100 steps), shepherds keeping track of the count of their flocks (tally marks carved in sticks as the sheep pass through a gate, or pebbles moved between two bags)? What do the various "schools" -- Logicism, Intuitionism, Formalism, Platonism -- have to say about "mathematical induction"? Bill Wvbailey (talk) 16:12, 10 March 2010 (UTC)

I still don't get it... What if a domino was missing?
Perhaps it's just me, but after reading the entire article I can only think... Well, what if I removed the 800th domino such that the 801st domino won't fall, or it was missing in the first place? Then the proof isn't really a proof, but just a big assumption or hope that everything in the world is right - and of course, it isn't. I just don't get it; how could any of this be used to prove anything? --Balupton (talk) 13:50, 19 May 2010 (UTC)


 * If the 800th domino is missing then the proof fails to show that anything holds beyond case 799. But suppose for example that you want to prove that
 * $$ 1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2}.$$
 * The proof doesn't involve any missing dominoes, so it works. That's an example of how it can be used to prove something.  The formula above works when n = 1, and if it works for any particular n, then it works in the next case after that, with n + 1 instead of n.  Hence all the dominoes fall. Michael Hardy (talk) 16:22, 19 May 2010 (UTC)
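Michael Hardy's domino picture can even be mechanized as a toy: once the base case and the step are in hand, reaching any particular n takes only finitely many applications of the step. A sketch (the names p and falls are mine, purely illustrative):

```python
# A toy version of the domino chain for the formula above: p(n) is the
# statement "1 + 2 + ... + n = n(n+1)/2", checked by brute force, and
# falls(n) walks the chain 0 -> 1 -> ... -> n the way the inductive
# argument does, one domino at a time.

def p(n):
    return sum(range(1, n + 1)) == n * (n + 1) // 2

def falls(n):
    if n == 0:
        return p(0)          # the first domino falls
    # domino n falls provided domino n-1 fell and p holds at n as well
    return falls(n - 1) and p(n)

assert falls(800)  # every domino up to the 800th is in place
```

No domino is "missing" here because the step is verified at every position, which is exactly what the inductive step of the proof guarantees once and for all.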

---


 * Here's my take on it: the axiom of induction is just one of a collection of axioms -- commonly called the Peano axioms -- that assume the existence of an entity we name 0 (in some versions 1, as noted below 0 is similar to "empty tape-square" or "empty hole") and the "successor", in particular the "successor of 0" (that we name "1") and the successor of any successor; we commonly name these thingies that we write/symbolize as 0, 0', 0' ', 0' ' ', etc, or as 0, (0)', ((0)')', (((0)')')', or some other symbolism from set theory, etc, "the (counting) numbers", and we assign them funny symbols and funnier names like "1" and "2" and "3" that are nothing but abbreviations.


 * Exactly what a 0 really is, and successor really is, is in the Formalism point of view defined only by their behavior relative to the axiom-set that we (humans) define (thus there can be alternate axiom-sets). A Platonist (Platonism) would say that these things, whatever they are, really, really exist somewhere in a non-material universe that somehow our minds do have access to, hence we "come to know them" or "discover" them in the other sense of "induction", i.e. scientific induction of repeated observation followed by theorizing -- This was Kurt Goedel's and Emil Post's belief.


 * One view is that these are the behavior of "objects" in an utterly-mechanical device, as directed by an utterly-mechanical "table of instructions". There are a couple wonderful models of this -- the Post-Turing machine, and the counter machine. The design and behavior of either one embodies the axioms. But in particular the zero and the successor have a few usually unspoken qualities that you allude to in your example of the 800th missing domino, or e.g. the dominoes are misplaced, or one is short, or two fall at once.


 * With regards to the "0": In the Post-Turing-machine model, each tape square is roughly the same size (although they don't have to be), and they are "contiguous in space" -- "located one after another" in a line so that either the observational "head" moves from side to side (room-to-room in Emil Post's model, square to square in Turing's model, left or right) or moves the tape left or right (a version of the Turing model). So this presupposes the existence of space and time and mass (contiguity, before and after, mechanical-thought-model). Any "blank square" is a location in space -- it is not nothingness; it is a "thing" (matter, i.e. a blank sheet of paper is not nothingness). So "blank square currently being observed" might be symbolized/described by the cypher "0" and condition "blank", but this "0" aka "blank" is not nothingness, it is "blank square". In the same way, in the counter model, "empty hole labelled A" might be symbolized by "[A] = 0" ("contents of hole labelled 'A' is empty"), but this again is not a condition of nothingness (e.g. an empty glass is a thing).


 * With regard to "the successor": We usually "idealize" the notion into an ultra-precise, measured "unit", what we think of as a "discrete" object (mass-length, or mass-volume, time-duration), such that all "1"s are "the same size", the "same shape", and have the same "mathematical power", and there are as many of them as we need (we can always create more if we need them). For the models, what's important is that the machine always moves its tape left and right one (discrete, unit) square at a time (Turing model, or the worker moves left or right one room at a time). Thus the Turing model presupposes/embodies the notion of "discreteness" and the notion of the "successor/predecessor", and it marks or erases only the "observed" square, at most one mark per square. (Similarly for the Post model of a worker moving from room to room marking (only one mark allowed) or erasing a sheet of paper.) And in the counter-machine model, each stone thrown into a hole is roughly the same size (needn't be, though), but it is not an amorphous blob that splits when it hits the bottom of the hole, nor does the impact split a stone already in the hole, nor does a single stone fill up the entire hole (we would have to dig the hole deeper if this happens -- the hole is "infinitely extensible", as is "the tape" or "the rooms"). So there are a lot of tacit assumptions about discreteness and emptiness and mechanical locations in space and behavior in time.
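The counter-machine picture above can be sketched in a few lines of code; this is my own toy encoding (not any standard instruction set), just to make the "pebbles in holes" idea concrete: holes hold pebble counts, and the only operations are throwing in a pebble, taking one out, and testing a hole for emptiness.

```python
# A minimal counter-machine sketch (a toy encoding, purely illustrative).
# "Zero" is an empty hole; the "successor" is one more pebble.

def run(program, holes):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":
            holes[args[0]] += 1            # throw in one pebble: the successor
            pc += 1
        elif op == "DEC":
            holes[args[0]] -= 1            # take one pebble out
            pc += 1
        elif op == "JZ":                   # empty hole = "zero"
            pc = args[1] if holes[args[0]] == 0 else pc + 1
        else:
            raise ValueError(op)
    return holes

# Addition as repeated succession: move every pebble from hole B into hole A.
add = [
    ("JZ", "B", 4),   # if B is empty, halt
    ("DEC", "B"),
    ("INC", "A"),
    ("JZ", "Z", 0),   # Z is always empty, so this is an unconditional jump
]

assert run(add, {"A": 3, "B": 4, "Z": 0})["A"] == 7
```

The point of the sketch is the tacit assumptions it exposes: pebbles are discrete, a hole never overflows (Python integers are unbounded), and "one more" always makes sense.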


 * This may seem to have devolved into philosophy, engineering, and/or physics. But at least one well-respected mathematician has his doubts about the true-in-this-universe existence of "the infinitesimal" (infinitely divisible). Here's Gregory Chaitin commenting on discrete versus continuous:
 * "But I'm a mathematician, and each time I would wonder if Nature wasn't really trying to tell something, namely that real [he means numbers such as pi with an apparent infinite number of decimal places] and continuity are a sham, and that infinitesimally small distances do not exist!. . . . So, you see, there are lots of reasons to suspect that we might live in a digital universe . . . I'd like to continue in Zeno's footsteps as well as Rolf's and argue that a number with infinite precision, a so called real number, is actually rather unreal."(boldface in original, pages 92-93: Gregory Chaitin 2005 Meta Math! The Quest for Omega, Pantheon Books, NY, ISBN 0-375-42313-3.)


 * Hope this helps. Bill Wvbailey (talk) 17:12, 19 May 2010 (UTC)

Where do I go wrong?
Suppose I prove statement (S)omething to be true for all integers n by showing
 * S is true for n = 0,
 * S is true for n + 1 assuming it is true for n,
and then go on to show that the assumption that there is an integer k for which S is not true leads to a contradiction. [In particular, there would be a least such integer, which cannot be 0 or, say, 778.]

I claim that I have proved statement S to be true for all integers. I claim too that I have not used the Axiom of Induction. I'm quite sure about this.

What I am not so sure about is why the Axiom of Induction is just that, i.e. an axiom, not a theorem. A naive way to "prove" the Axiom of Induction would be to say that every proof using induction can be converted to the type above, hence you can use induction at will (saves a few lines of typing), hence the Axiom of Induction is true.

I suspect that one error is the claim that EVERY statement provable by induction is provable by the above method, not only those seen by mankind so far.

Perhaps the answer to my question is on this talk page already (or in the article); I haven't read everything yet. YohanN7 (talk) 12:42, 11 July 2010 (UTC)

Apparently, the Axiom of Induction (when stated formally) cannot be proven from, e.g., the axioms of ZFC. (I suppose this has to do with first-order logic, second-order logic, etc. I have no such background, unfortunately.) My question (the naive version) remains. YohanN7 (talk) 12:53, 11 July 2010 (UTC)

Hmm... In the Peano axioms article it says "it is not possible in first-order logic to say that any set of natural numbers containing 0 and closed under successor is the entire set of natural numbers."

Ok, so I go wrong there too! What's wrong with the contradiction-type proof then? I.e. assuming that the set of integers for which S doesn't hold is nonempty, using well-ordering, reaching a contradiction, and concluding that S holds for all of the natural numbers. YohanN7 (talk) 13:21, 11 July 2010 (UTC)


 * The axiom of induction (more precisely, each instance of it) is provable in ZFC. It isn't provable in PA minus induction. --Trovatore (talk) 21:55, 11 July 2010 (UTC)


 * That clarifies matters. It's kind of natural too that the PA article (which is linked in this article in the section stating the Axiom of Induction) assumes PA and nothing else.


 * However, in the present article I can't see any default context. "Mathematical Induction" can be PA by default (because it's about natural numbers) or it can be ZFC (because it's within "mathematics", usually defaulting to ZFC). How about adding something like this:


 * The Axiom of Induction is a provable statement in first-order logic in some systems larger than PA, for instance, in Zermelo-Fraenkel Set Theory. YohanN7 (talk) 09:00, 12 July 2010 (UTC)

I feel pretty stupid. The whole thing is there. I managed to completely miss the section "Proof or reformulation of mathematical induction". Sorry about that. YohanN7 (talk) 09:26, 12 July 2010 (UTC)

Proofs, and the terms "well defined", "well-ordered" and "well-founded"
In the "Proof or reformulation of mathematical induction" section, there are two proofs of simple induction. The first one is a very trivial one showing that simple induction follows from transfinite induction. Do we need that? We might need that, but as it stands now it purports itself to be an alternative to the actual proof squeezing induction out of the axiom of choice. Perhaps it would be better off in another section? Also, the real proof (apart from being in need of some math-formatting) uses the sentence "Hence S is well defined" which I guess is a typo for either "well-ordered" or "well-founded" – the way it's worded here we do need that < is a (strict) well-ordering, but if worded differently we would only need that it's a well-founded relation. And in any case, we should probably not say that the set itself is a "well-founded set", as we do in the "Transfinite induction" section, since (according to Well-founded relation) the term "well-founded set" has another meaning. 85.226.207.108 (talk) 10:28, 12 July 2010 (UTC)

Domino vs. Ladder
The Description section currently says,

"It may be helpful to think of the domino effect; if one is presented with a long row of dominoes standing on end, one can be sure that:
 * 1) The first domino will fall
 * 2) Whenever a domino falls, its next neighbor will also fall,

so it is concluded that all of the dominoes will fall, and that this fact is inevitable."

I dislike this for several reasons. First, the row of dominoes is "long" but finite, while induction is used for infinitely many cases; this idea should be included even in informal descriptions. Second, in physically working with dominoes, they hang up a lot and are generally quite imperfect, which might have been a source of confusion in the above section on this talk page. Third, you don't usually stop dominoes at a particular position.

I find dominoes less intuitive to describe induction than a ladder analogy. Take an infinitely tall ladder. Suppose you can stand on the first rung, and you also possess the ability to go from one rung to the next. Inductively, you can stand on any rung. Ladders don't suffer from my complaints about dominoes, either. Thoughts? 67.158.43.41 (talk) 06:48, 4 January 2011 (UTC)


 * There is some merit to your complaint, but your suggestion may have a similar problem. As I read this, the image that popped into my mind was a tricky set of dominoes, each equipped with an automatic reset so a little while after it falls, it will pop up again. (Not off-topic: where I'm going with this is RE an attempt to use induction in a proof of the Collatz Conjecture). We arrange the dominoes so that the pattern eventually comes to a circle, or in a special figure eight, etc. i.e. any closed shape. So once you start them going they will do their linear domino thing, but depending on your pattern, they may (or may not) enter into a loop that goes 'round and 'round, ad infinitum. RE the Collatz Conjecture, if you pick the right coefficients, the sequence of generated numbers displays similar mysterious behavior. You start with any "seed" number and then crank the formulas. The resulting sequence of numbers descends toward 1, maybe for quite a few iterations, but then the numbers start up again, and then down again (they may have done this a number of times) but sometimes (not always!) they enter a loop and never get to 1 (worse yet, for a given pair of coefficients, there may be more than one loop a sequence can fall into). Thus an application of induction in a generalized (dis-) proof of the Collatz Conjecture would seem likely to fail us. Even a ladder, infinitely long, seems to have a similar problem: while it may be infinitely long, it may eventually go 'round and 'round, somewhere "out there". And if it's a really big circle, without the benefit of gravity, you won't know you're in a loop until you came back to a "junction" . . . and if, like a wagon-horse of yore, you're wearing blinders and you can only see the next ladder rung ahead, you won't know you're in a circle: you may be going 'round and 'round ad infinitum, but you're stuck in a circle. Bill Wvbailey (talk) 15:52, 4 January 2011 (UTC)


 * I'm familiar with the Collatz conjecture, but I didn't follow your reasoning. It seems a typical inductive proof of that conjecture would use strong induction to assume the conjecture is true for starting values 1, ..., n. With a ladder, that would be equivalent to possessing the ability to reach rungs 1 through n. I don't understand how closed loops fit into this analogy. The key ingredient to the proof would be showing you can reach rung n+1, i.e. that the conjecture is true for n+1. (I do understand there are "loops" in the trajectories of some starting values to the Collatz function, e.g. starting at 4, but that seems immaterial.) It would seem to be very strange for a ladder to form a closed loop, while having dominoes do so is actually pretty reasonable and seems to be another argument against dominoes and for ladders, since a closed loop analogy does not capture the essence of induction. 67.158.43.41 (talk) 13:35, 9 January 2011 (UTC)

Better Parsing for Axiom of Induction?
Sorry, I think I put this in the wrong place before. Never edited on a talk page before.

Hi, I don't generally post comments or edit Wikipedia, but after scrutinizing this article's formal symbolization of the arithmetic principle of induction, I would like to offer some criticisms and then suggest some changes. As such, I think that the axiom of induction as stated here in formal predicate logic is either incorrect or highly misleading. The problems I see are as follows: (1) Generally one does not put parentheses around quantification. In fact, I should be more blunt: no one puts parentheses around quantifiers themselves. I.e., the parentheses around $$ (\forall P) $$ are not necessary and are confusing. This leads to a lot of parsing difficulties, especially in the first antecedent. (2) Although it is acceptable, in modern logical notation it is not very common nowadays to put parentheses around the bound variables attached to a predicate. I.e., P(k) looks quaint, to put it mildly. Where parentheses are necessary, as in the case of P(k-1), they should be adopted, but otherwise it is common practice to leave them off. (3) The use of square brackets as they are currently written only adds to the confusion. To be blunt again, it looks as though whoever wrote the formula was not aware of the fact that a quantifier like $$ \forall $$ needs to be bracketed if it is within the scope of another operator (like implication in this case). (4) Although I don't think it is necessary, it is also common practice to write a 2nd-order variable, written here as "P", as the capitalized Greek letter Phi. I will not, however, add that change here, but I do highly suggest considering it.

Given the above, I suggest the following correction: $$ \forall P \Big [\big [P(0) \land \forall k \in \mathbb{N} \big (Pk \Rightarrow P(k+1)\big ) \big ] \Rightarrow \big [\forall n \in \mathbb{N}  (Pn)\big ]\Big ] $$

If we wanted to be very persnickety about brackets we would also add them to each of the conjuncts in the antecedent of the primary conditional. I left that out however, since further brackets would probably do more harm than good, and no ambiguity presents itself as is. As such, I believe that this version above all parses much easier. It allows one to immediately determine the primary operator (the 2nd-order quantifier) because of the large square brackets. Furthermore, one can immediately see the internal implication whose antecedent is the conditions that one needs to prove in an inductive proof, and whose consequent is the desired conclusion that the property in question holds for all natural numbers.

Let me know what you think, --æntity 07:41, 11 February 2011 (UTC)


 * The above doesn't look correct. Nor does the exposition in the article -- we're missing some formalization (it isn't very hard, really). [This isn't supposed to be 2nd order induction is it? See more below re the treatment in Boolos-Burgess-Jeffrey 2002 re 2nd order induction]. Frankly, I'd recommend/prefer we stick with some formalization for which we can cite and go back to, rather than trying to invent some nomenclature. For example, in his chapter on formal number theory, Kleene 1967 writes first-order induction in a highly formalized manner, this way (notice that there's no lead "for all A"):
 * "For Axiom schema 13, x is any variable, A(x) is any formula, and A(0), A(x’) are the results of substitution 0, x’ for the free occurrences of x in A(x). (These substitutions are automatically free)" (p. 206).
 * 13. A(0) & ∀x(A(x) ⊃ A(x’)) ⊃ A(x)
 * He has previously allowed only concatenations of the following symbols ~, ⊃, &, ⋁, ¬, ∀x, =, +, ∙, ', 0, a, b, c, . . . , undefined, with 0 defined as a "term" together with the variables a, b, c, . . . [Here follows a recursive definition of (r)', (r)+(s), (r)∙(s)]. The three dots . . . are an abbreviation for "and so forth", the commas and any periods are also punctuation marks (but not the dot, which is used by Kleene for "multiplication"). The strange undefined is a subscript to be applied to variables.


 * In Boolos-Burgess-Jeffrey 2002:214 we see almost the same treatment with the use of an arrow-operator in place of the horseshoe, but notice some subtleties (again, we'd need to study his symbol-choice and syntax):
 * (A(0) & ∀x(A(x) → A(x’))) → ∀xA(x)


 * RE B-B-J 2002:283 treatment of 2nd order logic, we see this formulation (I didn't look at their syntax, I just cc'd their formula)
 * ∀X((X0 & ∀x(Xx) → Xx’)) → ∀xXx)
 * The point there is, to do the job right, you have to establish the symbols and their syntax at the outset (at least give the reader some notion of what the syntax is.) And then give an interpretation. As noted in Kleene 1967:207, the intended interpretation of 0, 0', 0' ', 0' ' ', . . . is the counting numbers N, and ' is intended to be interpreted as +1, etc, etc. Bill Wvbailey (talk) 19:10, 11 February 2011 (UTC)


 * Thanks for the quick response! I definitely agree that we should stick with a formalization that we can cite, and BBJ is definitely the place for authority on the matter.  Also, just curious, are you using the 4th or 5th edition?  Right now I only have the 4th edition on hand, but can check the 5th tomorrow or the day after to see if there are any differences/changes.


 * Fourth edition. Caution is advised re this edition: on page 43 "The Productivity Function", there are some bad errors (j should be k). I checked back and these crept in because BBJ changed this treatment from the 1st-3rd editions (BB w/o J) but kept their original drawings. The BBJ treatment is inferior and I can't for the life of me see why they changed it. The errata is so bad I began to keep a list (I was up to 8 not including this one). Plus the sources are lousy. I hope the 5th edition cleans the messes up.


 * Anyway, the formulation as I gave it is correct --- its syntax does mirror the BBJ formulation on 283, but which you did not write correctly. On BBJ 4th edition 2002:283, the formula for induction is not ∀X((X0 & ∀x(Xx) → Xx’)) → ∀xXx), as you wrote, but rather appears as  ∀X((X0 & ∀x(Xx → Xx’)) → ∀xXx).  I believe you mistakenly added a parenthesis to the right of the 2nd x (parsing from left to right).


 * You're correct, I added an extra ).


 * Nevertheless, the logical syntax of a mathematical (arithmetic) induction is represented by a conditional with an antecedent that contains TWO conditions: first the so-called base clause (the left conjunct), which is that the property in question holds for the numeral 0. The second condition is a conditional (the right conjunct), which states that if the property in question holds for a given value, then it also holds for that value + 1.  If one succeeds in demonstrating both of these facts (i.e., both the left and right conjunct), then one can detach the consequent (via modus ponens for example), and conclude that the property in question holds for ALL (natural) numbers.  Thus the logic that mirrors this piece of reasoning will look like this: [A & (B -> C)] -> [D]


 * Both the formulation I initially gave and the BBJ formulation's syntax does mirror that form. Notice also that my complaints about the formula currently being displayed are not present in the BBJ formulation --- the quantifiers do NOT have parentheses around them, the variables don't have parentheses around them, etc. --- as is customary with modern logical notation.


 * Yeah I wondered about the missing parentheses. I'd have preferred the parentheses myself. I can't find in my sources an agreed-upon treatment of 2nd order logic. Here's some examples of 2nd order language (to express mathematical induction examples) from Rogers 1987:385 (reprint of 1967). Here we're seeing the mix of brackets and parentheses:
 * (∀Â)[0 ∈ Â & (∀a)[a ∈ Â ⇒ a+1 ∈ Â] ⇒ (∀a)[a ∈ Â]]
 * (∀p)[p(0) = 1 & (∀a)[p(a) = 1 ⇒ p(a+1) = 1] ⇒ (∀a)[p(a) = 1]]


 * As for exact syntactic differences between the BBJ formulation and my suggested one, the one that I gave just specifies the domain being used for the first-order universal quantification (i.e., k is an element of the natural numbers, etc.). I wanted to change the formalization as little as possible from the original and just clarify its syntax (mostly getting rid of excessive parentheses) in order to make parsing easier.  Compare the BBJ formulation with the one currently being displayed and you should notice the difference.


 * But what's going on here needs to be clearly spelled out for the reader. Essentially, you're taking a formal axiom (just a string of symbols) and giving it an "arithmetic interpretation" when you use N (the set of counting numbers and that very special symbol 0) (see the quote below from Hamilton 1978). If it were up to me, I'd stick with the successor (every book that discusses this -- Hamilton 1978, Robbin 1997, BBJ, Crossley et al. 1972, Kleene 1978, Rogers 1978). See my suggestions at the end re a very-formal expression of the axiom and then recasting it into the "arithmetic interpretation".


 * As for arithmetic induction (or otherwise), as I understand it one either needs 2nd-order logic to express it, or an axiom schema. Regardless, again the displayed formula was already in 2nd-order logic, and is glossed as such in the article.  Contextually then, it seems a moot point whether we should specify the formula in 2nd-order logic, or as an axiom schema, etc.  Or rather, that is a different discussion altogether, and could simply be settled by providing two separate logical specifications --- i.e., one in 2nd-order logic and one with axiom schemas.


 * My understanding is the same. I make a proposal at the end. We need 2nd order logic to express Peano's full induction. If we restrict ourselves to 1st order logic/language then we have to use an induction schema. Hamilton's treatment of this is wonderfully clear (at least I can understand what's going on!):
 * "...because in [the arithmetic interpretation] N [cf p. 61] we are restricted to the use of the first order language LN, the axiom [Hamilton writes his (N7) as ((A(0) & ∀x₁(A(x₁) → A(x₁’))) → ∀x₁A(x₁))] cannot be as strong or as inclusive as Peano's fifth postulate. The reason is that Peano's fifth postulate contains a second order quantifier 'for any set A of natural numbers', which cannot be expressed in our first order language. The best we can do is use the notion of axiom scheme, so that we effectively have a quantifier in 'for every wf. A(x₁) in which x₁ occurs free'. Note that such a wf. A(x₁) determines a set in any interpretation . . . If we think in the context of a model of N, therefore, each instance of the axiom scheme (N7) corresponds to one particular set. However, there is still an essential difference. The instances of axiom scheme (N7) form a countable set of wfs. of LN. Peano's fifth postulate is a statement about all sets of natural numbers, and the collection of all these is uncountable. Thus (N7) is a much restricted form of the Induction Principle . . .." (p. 114 of A. G. Hamilton 1978 Logic For Mathematicians, Cambridge University Press, Cambridge UK, ISBN: 0 521 21838.)


 * Returning to the point however, and agreeing with your initial suggestion that we should have something we can cite, I think adoption of the BBJ formulation is probably the smartest move at this juncture. --æntity 02:52, 12 February 2011 (UTC) — Preceding unsigned comment added by Aentity (talk • contribs)


 * I think I've lost perspective, having looked down into a can of worms (or into Pandora's box). After a bit of a survey . . . I'd say go with whatever source you think is right and good (as in formally correct). I'm now left with the uncomfortable feeling that the article is pretty deficient. It should be casting Peano's original axiom (2nd order) as both the original second order axiom (using a formal set of symbols including the successor symbol) and then casting it into its limited form in the first order language via a "schema" (also using a formal set of symbols including the successor symbol). It then should adopt an "arithmetic interpretation" and recast both into the form (using numbers) similar to what you're proposing. In other words there'd be four expressions: two axiom-expressions and two first-order axiom schema-expressions. Bill Wvbailey (talk) 16:57, 12 February 2011 (UTC)

Deductive and not inductive
Dear Wikipedia contribuitors,

receive a respectful greeting. I would like to know whether any of you can add information about why mathematical induction is a form of deductive reasoning and not inductive one. Some clear explanation and supporting sources would be just perfect. Thank you so much in advance! George Rodney Maruri Game (talk) 21:26, 17 March 2011 (UTC)

I made this 5 years ago in Bahasa Indonesia http://www.oocities.org/wisnuops/MyDocs/InduksiMatematika_BenarkahAda.pdf Hehehe.... But I also found this http://www.earlham.edu/~peters/courses/logsys/math-ind.htm in English Wisnuops (talk) 05:14, 26 March 2011 (UTC)


 * So kind from you. Thank you very much!


 * George Rodney Maruri Game (talk) 21:18, 28 April 2011 (UTC)

Equivalent Formulations
It appears to me that the principle of induction is equivalent to the statement that $$\mathbb{N}$$ is the minimal element of the collection of all sets satisfying the other Peano's Axioms, ordered by inclusion. (That is, the first axioms detail a list of elements which are in $$\mathbb{N}$$, and the induction principle simply states that there are no others). I'm fairly certain that I have pretty simple proofs of the equivalence of these statements, but I'm skeptical because I don't see it formulated in that way in any reliable sources. Can anyone attest to whether or not these statements are equivalent? 141.213.154.122 (talk) 16:15, 1 August 2011 (UTC)

Paragraph of Complete Induction
Can anyone check if there is a mistake in the following paragraph quoted from the Complete Induction part of the article.

"This generalization, complete induction, can be derived from the ordinary mathematical induction described above. Suppose P(n) is the statement that we intend to prove by complete induction. Let Q(n) mean P(m) holds for all m such that 0 ≤ m ≤ n. Apply mathematical induction to Q(n). Since Q(0) is just P(0), we have the base case. Now suppose Q(n) is given and we wish to show Q(n+1). Notice that Q(n) is the same as P(0) and P(1) and ... and P(n). The hypothesis of complete induction tells us that this implies P(n+1). If we add P(n+1) to Q(n), we get P(0) and P(1) and ... and P(n) and P(n+1), which is just Q(n+1). So using mathematical induction, we get that Q(n) holds for all natural numbers n. But Q(n) implies P(n), so we have the conclusion of strong induction, namely that P(n) holds for all natural numbers n."

From what I read, I believe that it is the converse that has been shown: If statement P(n) can be proved for all n ≥ n₀ by complete induction, it can be proved by ordinary induction. — Preceding unsigned comment added by Ddanndt (talk • contribs) 16:15, 22 September 2011 (UTC)
 * Wikipedia doesn't do original research, and right now neither your position nor the current article has a source. Ian.thomson (talk) 19:27, 22 September 2011 (UTC)

Thanks Ian for your contribution duh !!! If you can read english, I'm just asking Wikipedians to come up with any quotes/info to confirm/refute any mistake made. I'm not asking for research. Thank you and stay off if you don't have any answer... — Preceding unsigned comment added by Ddanndt (talk • contribs) 20:17, 22 September 2011 (UTC)
 * We do bring out the banhammer for incivility, treat other editors (including me) with respect. I have given the answer as far as Wikipedia's policy on sticking to verifiable information is concerned.  That part of the article does not cite a reliable source, your statement does not cite a source either.  From the perspective of WP:V, both statements are original research, which is not accepted here.  Ian.thomson (talk) 21:06, 22 September 2011 (UTC)

---
 * The question is valid. I could not find this word-usage in my Kleene 1952, so I did a google search and it coughed up "course of values" induction as well as the example given here (yes: the example just "resets" the basis. There's nothing particularly profound in it, really, but see below . . . .) Here's the quote from Kleene 1952:22 relative to his definition of "course-of-values" induction (if someone doesn't know who Stephen Kleene is, for shame: ''Introduction to Metamathematics'', North-Holland Publishing Company, Amsterdam, NY, ISBN 0 7204 2103 9):
 * "Sometimes, in order to carry through the induction step, it is necessary to assume as hypothesis of the induction, not simply P(n) but that P(m) for all m ≤ n. The reader may satisfy himself that the induction principle is valid in this modification, called a course-of-values induction" (Kleene 1952:22).
 * Altho I don't have a readily-available reference for what Bertrand Russell would have called "matrix induction" or "induction over a matrix", this notion can be applied with some benefit when the function, such as arithmetic addition Σ(m, n) or arithmetic multiplication Π(m, n) has two variables. The idea is to freeze m to a parameter p (say the number "3"), then prove that Σ(3, n) works for all n. Clearly this has "reset" the basis to Σ(3, 0) = 3, and then the induction on the single variable n goes on from there. Bill Wvbailey (talk) 23:17, 22 September 2011 (UTC)


 * A bit more on this: Here is Kleene's formal definition and proof verbatim [here ⊃ is logical "implies" cf p. 69, x' is the successor of x cf p. 70]:
 * "In the first schema *162a, the expressions A(0) and ∀x[∀y(y ≤ x ⊃ A(y)) ⊃ A(x')] formalize the basis and induction step, respectively; in the more compact form *162b, the two are brought together in the single expression ∀x[∀y(y < x ⊃ A(y)) ⊃ A(x)]:
 * " *162a. ⊢ A(0) & ∀x[∀y(y ≤ x ⊃ A(y)) ⊃ A(x')] ⊃ A(x).
 * " *162b. ⊢ ∀x[∀y(y < x ⊃ A(y)) ⊃ A(x)] ⊃ A(x)
 * (Courses of values induction)
 * " PROOFS. *162a. Assume A(0) & ∀x[∀y(y ≤ x ⊃ A(y)) ⊃ A(x')], deduce ∀y(y ≤ x ⊃ A(y)) by simple induction on x, and infer A(x) by ∀-elim.
 * "Sometimes inductions require a double basis; i.e. we establish as the basis A(0) and A(1), and then for the induction step infer A(x' ') from the two preceding cases A(x) and A(x'). This can be treated formally as a course-of-values induction, using cases according as x'=1 or x'>1 under the induction step, or we can make it into a simple induction by using A(x) & A(x') as the induction formula. This device of using a conjunction as the induction proposition applies similarly to induction from a k-fold basis for any fixed k ≥ 2, and also to the inductive proof of several propositions simultaneously." (all quotes Kleene 1952:193)
 * There's quite a bit more to be found in Kleene cf the index p. 541: course-of-values: function 231, 291; induction 22, 193; recursion 231, 237, 236. Uses of it are sprinkled throughout the text. Bill Wvbailey (talk) 14:58, 23 September 2011 (UTC)

Complete Induction, Part II: History
This is in response to a question by User:Jowa fan. I've found historical sources re the usage of the words "complete induction" and "course-of-values".

Complete induction appears in Richard Dedekind's The Nature and Meaning of Numbers, the text of which can be found either at www.books.google.com, or from an (identical) printed version:
 * Richard Dedekind, ca 1858 & 1887/1893, Essays on the Theory of Numbers: Continuity and Irrational Numbers, II. The Nature and Meaning of Numbers, translated by Wooster Woodruff Beman; Dover Publications, Inc., Mineola NY, 1963, republication of the Open Court Publishing Company version in 1901, ISBN 0-486-21010-3.

I actually looked at the German version written in German script and found the word vollständigen used where the English word "complete" is used (the translation apparently is "complete"). Plus the German Wikipedia has an article on Vollständige Induktion. The text of Dedekind of interest here is ''II. The Nature and Meaning of Numbers''. Here's the intuitive synopsis leading to his theorems of "complete induction": In part VI Dedekind defines a "simply infinite" system N in what is now the orthodox method similar to the Peano axioms. This is based on the notions of a "transformation" ϕ of a system S, which is a "law" [function] so that any s contained in S is transformed into ϕ(s); these are abbreviated as s' , for example. In para. 36 he defines the transformation of the system S into itself, i.e. ϕ(S) ℈ S. He defines a "chain" as K'  ℈ K, i.e. ϕ(K) ℈ K and his most important Theorem 39: "The transform K'  of a chain K is a chain, i.e. (K' )' ℈ K' ." Based on all this he expresses his two "Theorems of complete induction":


 * "Theorem 59. Theorem of complete induction. In order to show that the chain A₀ is part of any system Σ -- be this latter part of S or not -- it is sufficient to show,
 * "ρ. that A ℈ Σ, [i.e. A is "a part of" Σ]
 * "σ. that the transform of every common element of A₀ and Σ is likewise element of Σ."
 * "the preceding theorem, as will be shown later, forms the scientific basis for the form of demonstration known by the name of complete induction (the inference from n to n+1); it can also be stated in the following manner [here he uses the idea of a "property" and its inheritance altho he doesn't use that word, cf Russell 1919]." (Dedekind pages 60-62)


 * "Theorem 80. Theorem of complete induction (inference from n to n' ). In order to show that a theorem holds for all numbers n of a chain m₀, it is sufficient to show,
 * "ρ. that it holds for n=m, and
 * "σ. that from the validity of the theorem for a number n of the chain m₀, its validity for the following number n'  always follows.
 * "This results immediately from the more general theorem (59) or (60). The most frequently occurring case is where m = 1 and therefore m₀ is the complete number-series N." [pages 80-81; n'  is a "transform" of element n: "the transform n'  of a number n is also called the number following n" (p. 69)]

[Off-topic aside: Immediately the pathological example of the Collatz conjecture for the looping ϕ(N) = IF(ISODD(N),(5*N+1)/2,N/2) given basis N=5, leapt to mind (as opposed to the diverging ϕ(N) with basis N=7):


 * 5, 13, 33, 83, 208, 104, 52, 26, 13, 33, 83, 208, 104, 52, repeating ad infinitum]
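The loop and the divergent case mentioned in the aside are easy to check numerically. A small sketch (my own illustration, not from any source cited above) of the map ϕ(N) = (5N+1)/2 for odd N, N/2 for even N:

```python
# Illustration of the looping 5N+1 variant from the aside above:
# phi(N) = (5*N + 1) / 2 if N is odd, N / 2 if N is even.
def phi(n: int) -> int:
    return (5 * n + 1) // 2 if n % 2 else n // 2

def trajectory(start: int, steps: int) -> list[int]:
    """First `steps` applications of phi, including the starting value."""
    out = [start]
    for _ in range(steps):
        out.append(phi(out[-1]))
    return out

# Basis 5 falls into the closed loop 13, 33, 83, 208, 104, 52, 26, ...
print(trajectory(5, 14))
# Basis 7 instead appears to grow without bound.
print(trajectory(7, 14))
```

The first call reproduces the repeating sequence listed above; the second shows the apparently divergent orbit from 7.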

Course-of-values induction: Moving right along: I found what apparently is the source of the "course-of-values" notion that I remember from Bertrand Russell. This notion comes to us via Russell 1903 from Gottlob Frege 1902:
 * Gottlob Frege, ca 1902 The Basic Laws of Arithmetic: Exposition of the System, Translated and Edited by Montgomery Furth, University of California Press, Berkeley CA, 1964, 1982 edition ISBN 0-520-04761-3

In Furth's introductory notes "Chapter 6: Function and course-of-values. Concept and Extension" he summarises the concept of "course-of-values" as follows:
 * "The notion of a "course-of-values" is fairly easily related to more recent notations as approximately a function-in-extension . . . Thus the name " ε' (ε + 7) " would denote a class containing such ordered couples as 0;7, 1;8, 2;9." (page xxxvii)

This definition is what I associate with Russell in Principia Mathematica as opposed to his 1903. In his 1903:245-246 he first summarizes Dedekind as I did above, mentions Dedekind's theorem 59, and calls this "the generalized form of mathematical induction". But his notion of "course-of-values" that he uses in PM in regards to his definition of a "matrix" and his axiom of reducibility, is that of Frege.

At least historically it's as if Kleene has confounded the notion of "course-of-values" aka "function-in-extension" with "generalized" or "complete" induction. Am not sure where to go from here. The fact that the German Wikipedia uses the Dedekindian "complete" induction indicates that this might be the preferred moniker. Bill Wvbailey (talk) 22:34, 4 December 2011 (UTC)
 * Thanks for following this up! I think what's currently in the article is based on a mistranslation.  Certainly the German term Vollständige Induktion describes what's just called mathematical induction in English, and the German Wikipedia page Vollständige Induktion uses the name Induktion mit mehreren Vorgängern ("induction with multiple predecessors") to describe what's called "complete induction" on the page here.  As for course-of-values, I'm now properly confused.  I propose that we retain the term strong induction–I'm confident that there are many English language textbooks using that term, if a source is required–and remove the other names.  (What I'd really like to see is a source for the proof that strong induction is equivalent to ordinary induction.  Most books just state it without proof, and it's not hard to verify that it's true, but it would still be nice to have a reference for it.) Jowa fan (talk) 00:04, 5 December 2011 (UTC)

"Strong" moniker: Do you have any sources that actually use the moniker "strong induction"? I did find in Sipser this very-unsatisfying little claim that "having the stronger induction hypothesis that P(j) is true for every j ≤ i is useful. The induction proof still works because, when we want to prove that P(i + 1) is true we have already proved that P(j) is true for every j ≤ i" (page 23). In the previous paragraph he has said that the basis can start at any arbitrary number "b": "In that case the induction proof shows that P(k) is true for every k that is at least b (page 23). So in this sense the added demonstration that P holds "all the way down" certainly seems 'stronger' than when we start at b and don't know what happens when k < b. My inquiring mind wants to know: is there a general principle by which we proceed downward from n to the basis? Why not just start at the basis and work up?
 * Michael Sipser 2006 Introduction to the Theory of Computation, Second Edition, Thomson Course Technology, Boston MA, ISBN-13: 978-0-534-95097-2.

Course of values moniker: After more reading of Dedekind I'm thinking that maybe "course-of-values" is a better moniker than "complete", after all. For example, we might want to start at n and assert that it and all m < n satisfies property P. We might be able to verify that 0 satisfies property P, but do we know that n-3, or n-4, or some odd-ball m in n-m satisfies property P until we try it for every m between n and 0, too? The cranking through all the numbers m in P(n) is definitely a "course-of-values". Or is there a general way to prove it all the way down, by some way that may or may not yield to garden-variety induction? Won't there be situations when we have to "test all the cases all the way down" to be sure?

The "complete induction" moniker: As to what is going on with the "complete" moniker, after reading a bit more Dedekind I think you're correct that his two theorems (59) and (80) are actually "garden-variety" mathematical induction. Specifically, in order to get from (59) to (80) Dedekind first has to define "infinite" and "finite" systems (59), then prove that infinite systems exist (66), define the notion of "1" (modern method uses "0") in a "simply infinite system" N that has to satisfy 4 conditions (similar to the conditions that "0" satisfies in modern set theory (71)), define "1" as "the base number" and define the notion of n'  as n'  "following" n, prove that N is the only number-chain containing the base-number 1 (75-79) and lastly prove Theorem 80 in a single line.

Dedekind's proofs of (59) and (80) don't require the notion of ≤ that is required for B-B-J's "complete" induction (see below) and Kleene's "course-of-values" induction (on page 22 Kleene leaves this to the reader to verify). In fact Dedekind does not introduce the notion of "less than" until (89), as a definition after proving theorems (81-88). After looking over the rest of Dedekind I couldn't find what we are looking for. So I have to agree with you that there seems to be a misappropriation from the German Vollständige Induktion (i.e. garden-variety mathematical induction) of the moniker "complete"; in fact "complete" would seem to mean that the induction starts at "1" (or "0") and "is complete up through all the numbers between "0" and m ≤ n.

A proof-sketch of "complete induction": (I hope I haven't misinterpreted the following from B-B-J). I suspect we're stuck with having to at least cite the moniker "complete induction"; my cc of B-B-J refers to it as "complete induction" on page 213: Here's the B-B-J quote in full because it purports to provide a proof-sketch of how to derive "complete induction" from the garden-variety; note that it's assuming garden-variety induction as an axiom together with 2 of the 9 axioms of weak arithmetic system Q which it has to use:
 * George S. Boolos, John P. Burgess, Richard C. Jeffrey, 2002, Computability and Logic Fourth Edition, Cambridge University Press, Cambridge UK, ISBN 0-521-00758-5 (pb.),
 * "Closely related to the principle of mathematical induction as stated above is the principle of complete induction, according to which we can prove that every number has some property P by proving that zero has P, and proving that, assuming every number ≤x has P, then the successor of x also has P. Indeed, complete induction for a property P follows on applying mathematical induction to the related property 'every number ≤x has P ', using the facts (Q4) [ x + y' = (x + y)' ] and (Q5) [ x*0 = 0 ] that the only numbers ≤x'  are the numbers ≤x and x'  itself." (p. 213)

In the paragraphs before their "axioms of minimal arithmetic" Q, i.e. axioms Q(1)-Q(9) on pages 207-208, they note that this Q is "not strong enough to prove major theorems of number theory . . . [but] strong enough to prove all correct ∃-rudimentary sentences. . . . Our main theorems . . . apply to any theory T that contains Q, and since Q is weak, the theorems are correspondingly strong" (p. 207-208); they then graft their "mathematical induction schema" as an axiom onto the list to create their P or "Peano arithmetic" which is adequate for number theory.

Is this B-B-J proof-sketch what you're after? I don't find it satisfying at all (too handwavy/sketchy, too obscured by its requirements for the arithmetic axioms Q).

Conclusions: We're going to have to footnote with sourcing all of these monikers, confusing as they may be. We have sources for two of them, so far. And either there's something I'm missing, or there's been a misappropriation of a moniker, as you propose. But we're stuck with the usage as it is, altho we can relegate the renegade to the footnote cellar. Bill Wvbailey (talk) 17:22, 5 December 2011 (UTC)


 * Yes, I'd like to see something like the B-B-J proof but expressed a little more clearly. I'm sure I've seen "strong induction" in several books, typically first-year undergraduate texts with "discrete mathematics" in their name, but would have to make a trip to the library to find a specific source.  It's not likely to happen this week.  It seems to me that the name "course-of-values" is archaic, unless you have a source published in the last 30 years that uses it. Jowa fan (talk) 00:32, 6 December 2011 (UTC)

--

Strong induction has modern usage: I typed strong induction into www.books.google.com and got a few hits. The book (below) explains the matter to my satisfaction (never mind the factoid that it prefers "complete" as opposed to "strong"):
 * " *Because this form of induction employs such a strong assumption, the Principle of Mathematical Induction is sometimes referred to as "weak induction" (footnote * on page 114 to the subsection titled "Principle of Complete Induction (PCI)" in subchapter 2.5 titled "Equivalent Forms of Induction")

The book is cited below. I'd suggest that if we decide that "strong" is the way to go a footnote needs to be added to the point that it is not "stronger" in the sense of "better"; its adoption actually weakens the case: the weaker the theory, the stronger a proved outcome, Occam's razor applying here. S-E-A give an example similar to the following (theirs being 1 + 3 + 5 + . . . + (2*n-1) = n²) but I've simplified it to the one I first saw in about 11th grade but never really understood, 'til now:
 * Douglas Smith, Maurice Eggen, Richard St. Andre, 2010, A Transition to Advanced Mathematics Seventh Edition, Brooks/Cole, Cengage Learning, Boston MA, ISBN-13: 978-0-495-56202.
 * 1 + 2 + 3 + 4 + . . . + k = ?

Altho S-E-A are not clear about (in general) how an hypothesized formula comes about (magical thinking not allowed), in this instance I derived it by graphical inspection of a plot of little squares (the plot looks like a triangle; I applied the formula 1/2*b*h and adjusted for the k/2 extra squares, giving k*k/2 + k/2, which yields the simpler form):
 * 1 + 2 + 3 + 4 + . . . + k = k*(k+1)/2

So let's hypothesize that for any n, the summation will yield n*(n+1)/2. We start our induction with what we know to be true, i.e. at k: k*(k+1)/2, k being a very finite number (like 5, yielding the sum=15). Then we add k+1 to it (sum = 21, and we see that indeed 6*7/2 = 21). So we do this with algebra:
 * k*(k+1)/2 + (k+1) = (k + 1)*(k/2 + 1) = (k + 1)*(k + 2)/2 = (k+1)*((k+1) + 1)/2

To get to our hypothesized equation we reset the equation (k+1)*((k+1)+1)/2 to our hypothesized equation by letting n = (k+1); indeed this yields our hypothesized formula:
 * n*(n + 1)/2. Q.E.D.
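The identity and the inductive step above are easy to check numerically; a quick sketch (my own illustration):

```python
# Check 1 + 2 + ... + k = k*(k+1)/2 and the inductive step
# k*(k+1)/2 + (k+1) = (k+1)*((k+1)+1)/2 for a range of k.
def triangular(k: int) -> int:
    return k * (k + 1) // 2

for k in range(100):
    assert sum(range(1, k + 1)) == triangular(k)         # the identity
    assert triangular(k) + (k + 1) == triangular(k + 1)  # the inductive step
print("verified for k = 0..99")
```

For k = 5 this reproduces the arithmetic in the comment above: triangular(5) is 15, and adding 6 gives 21 = 6*7/2.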

Addendum: I found a source for the use of inductive reasoning (i.e. from specific cases to a probable general case) such as the method described immediately above, in Howard Eves 1990 Foundations and Fundamental Concepts of Mathematics: Third Edition, Dover Publications, Inc, Mineola, NY, ISBN: 0-486-69609-X (pbk.):
 * "In such reasoning [that the sun will rise tomorrow], the conclusion is not one that must follow but is only one that probably follows. Reasoning of this sort is clearly inadmissible in a logical development. Induction in the usual sense often is employed in order to guess a general proposition P(n) in the first place -- that is P(n) may be conjectured from observing several instances, like P(1), P(2), P(3), of P(n) -- but the rigid establishment of P(n) by the so-called principle of mathematical induction . . . is, of course, a purely deductive procedure" (page 189).

Eves doesn't torture us with names for the three alternate types of mathematical induction he describes: the first one is "the principle of the smallest number", the second one is "a second principle of mathematical induction", and the third is "a third principle of mathematical induction" (!); both of the latter forms are similar to the "strong" form under discussion. He offers proofs of both by employing his Theorem 17, the "principle of the smallest number". Eves's treatment begins with his "postulate of finite induction", the 10th of 10: "(Postulate 10) If M is a set of natural numbers such that (1) M contains the natural number 1, (2) M contains the natural number k+1 whenever it contains the natural number k, then M contains all the natural numbers" (p. 184). From this he proves his "Theorem 1: the principle of mathematical induction". (Alternately he uses a Peano axiom set (p. 191) with the same "postulate of finite induction"). BillWvbailey (talk) 22:34, 11 December 2011 (UTC)

Conclusion: I'm okay with "strong" as long as it's discussed that this doesn't mean strong is better. I like the example. I could do a drawing of it to make the point; I think high-school kids do encounter this particular example. Will continue to work on the "course of values" issue. Bill Wvbailey (talk) 21:35, 6 December 2011 (UTC)

--

Course-of-values induction has modern usage: www.books.google.com gives 5790 hits on [in quotation marks] "course-of-values induction", and 13900 hits for "complete induction", mathematics. The latter qualification is necessary because "complete induction" has to do with hypnosis, too. Here's what Causey 2006 has to say about "course-of-values induction":
 * "This principle is often called the "principle of strong induction" or the "principle of complete induction". However, it is no "stronger" than the original IP [Induction Principle], as will soon be proved. Also, the term "complete" does not seem to have sufficient mnemonic power here. I prefer the name, "course of values induction." When using CVI, one makes the induction hypothesis that φ(i) is true for all i < k, and then one tries to prove φ(k). This is equivalent to assuming
 * φ(0) & φ(1) & . . . & φ(k-1).
 * "From this "course-of-values" one then tries to prove φ(k). If this is accomplished then one infers that for all n ε Nat, φ(n)." (p. 303)

Causey then provides a reductio proof using well-ordering. I don't like this proof nor his discussion; for the reason, see the Collatz example, below.
 * Robert L. Causey, 2006, Logic, Sets and Recursion, Second Edition, Jones and Bartlett Publishers Inc, Sudbury MA, ISBN-13: 978-0-7637-3784-9.

Another interesting application of "course-of-values induction" is John V. Tucker and Jeffrey L. Zucker, 1988, "Program Correctness over Abstract Data Types, with Error Semantics", Chapter 4.5 "Course-of-values Induction" (p. 155). Also McNaughton 1993, Huth and Ryan 2006, etc etc.

Misapplication of complete/strong/course-of-values induction: This form of induction seems very susceptible to misuse. Causey hints at it (with his sequence of logical &, above) but doesn't drive the point home. Somehow, every case from 0 (or 1) to k-1 must be demonstrated as true, perhaps by an exhaustive course-of-values, extensional, one-by-one demonstration. If not, mistakes will be made. Here's an example from the Collatz conjecture using -3 as coefficient, i.e. =IF(ISODD(N), (-3*N+1)/2, N/2). Suppose someone were to (naively) assert that for N≤15 the sequence always arrives at 1. Maybe they tried N=1, 2, 3, 4, 5, 6, 7 and decided that was a good enough demonstration. Then they demonstrate that the sequence does arrive at 1 with seed 15 (as indeed it does). And they derive from this that it arrives at 1 with seeds N=16 and N=17 (i.e. the cases k+1 and k+2; as indeed it does). From this they then conclude that the Collatz sequence always converges for any N. Well, they would be quite wrong. For the singular, oddball case N=13, the only such N between 1 and 15 inclusive (actually the only bad actor between 1 and 22 inclusive), it does not converge; it gets into a loop (also a loop when N=23). Russell 1903 mentions some misuses of mathematical induction (p. 192, p. 196).
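Reading the spreadsheet-style formula as f(N) = (-3*N+1)/2 for odd N and N/2 for even N (my own reading; the trajectory may pass through negative numbers), the claimed behaviour can be checked mechanically. The function name and the cycle-detection scheme are mine:

```python
# Sketch of the "-3 coefficient" Collatz variant described above:
# f(n) = (-3*n + 1) / 2 for odd n, n / 2 for even n.

def reaches_one(n, limit=10000):
    seen = set()
    while n != 1:
        if n in seen or len(seen) > limit:
            return False          # entered a cycle: never arrives at 1
        seen.add(n)
        n = (-3 * n + 1) // 2 if n % 2 else n // 2
    return True

good = [n for n in range(1, 24) if reaches_one(n)]
bad = [n for n in range(1, 24) if not reaches_one(n)]
print(bad)  # [13, 23]: the "bad actors" mentioned in the text
```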

Conclusion: Causey demonstrates the confusion I've had over the use of "strong", even responding to the word as I did: "Why should it be strong as in 'more powerful'?" Unfortunately we'll have to respect these various monikers; there doesn't appear to be a "best" one. BillWvbailey (talk) 17:54, 7 December 2011 (UTC)

Paragraph of Complete Induction (from RfC request board)
Can anyone check if there is a mistake in the following paragraph quoted from the Complete Induction part of the article.

"This generalization, complete induction, can be derived from the ordinary mathematical induction described above. Suppose P(n) is the statement that we intend to prove by complete induction. Let Q(n) mean P(m) holds for all m such that 0 ≤ m ≤ n. Apply mathematical induction to Q(n). Since Q(0) is just P(0), we have the base case. Now suppose Q(n) is given and we wish to show Q(n+1). Notice that Q(n) is the same as P(0) and P(1) and ... and P(n). The hypothesis of complete induction tells us that this implies P(n+1). If we add P(n+1) to Q(n), we get P(0) and P(1) and ... and P(n) and P(n+1), which is just Q(n+1). So using mathematical induction, we get that Q(n) holds for all natural numbers n. But Q(n) implies P(n), so we have the conclusion of strong induction, namely that P(n) holds for all natural numbers n."

From what I read, I believe that it is the converse that has been shown: If statement P(n) can be proved for all n ≥ n0 by complete induction, it can be proved by ordinary induction. — Preceding unsigned comment added by Ddanndt (talk • contribs)
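For what it's worth, the construction in the quoted paragraph can be played out mechanically for a concrete P (here P(n) is the statement "0 + 1 + ... + n = n(n+1)/2", chosen purely as an illustration; the names P and Q follow the quote):

```python
# Q(n) is the quoted auxiliary statement: "P(m) holds for all 0 <= m <= n".
# Ordinary induction on Q then yields P for every n.

def P(n):
    return sum(range(n + 1)) == n * (n + 1) // 2

def Q(n):
    return all(P(m) for m in range(n + 1))

# Base case: Q(0) is just P(0).
assert Q(0) == P(0)
# Step: Q(n) together with P(n+1) is exactly Q(n+1), as the quote says.
for n in range(50):
    assert (Q(n) and P(n + 1)) == Q(n + 1)
print("construction checks out on small cases")
```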
 * In the future, please state which article you're referring to. I can only guess that you're talking about the article Mathematical induction because that's the only other place you've edited. Wikipedia is supposed to go off of sources, not original research. Right now, neither your position nor the current article has a source saying anything either way. Ian.thomson (talk) 19:27, 22 September 2011 (UTC)

Then remove it if its veracity can't be confirmed??? Ddanndt (talk) 20:23, 22 September 2011 (UTC)
 * Ideally, it would be better to find a reliable source which clarifies which is the case. A college textbook or Google books may have something. Ian.thomson (talk) 21:09, 22 September 2011 (UTC)

Thread moved by User:Coastside on 08:47, 15 June 2012 (UTC) from WP:Requests_for_comment/Request_board. If the issue is still open, please consider:
 * Posting comment on WP:WikiProject_Mathematics or WP:WikiProject_Mathematics/Reference_resources
 * Request for comment in math/science topic:

Coastside (talk) 08:47, 15 June 2012 (UTC)

Natural numbers only?
To the best of my knowledge, I thought mathematical induction applied to all real numbers. Is there a specific type of mathematical proof that involves the entire set of real numbers (maybe even imaginary)? Eddievhfan1984 (talk) 01:22, 17 April 2013 (UTC)

Explain relation between Peano's induction axiom and exclusion of nonstandard models in article
I'd like to suggest to explain the need of the induction axiom to exclude nonstandard models, as already discussed above, in the article. There could be a second domino picture (again by User:Pokipsy76, who provided the -great- first one?), showing a model with junk elements; see the image for an ugly sketch of the domino footprints (adding a single junk element that is its own successor, as suggested by User:Lambiam 22:18, 26 February 2008, is more difficult to illustrate).

Moreover, the induction axiom holds in a model if, and only if, the model doesn't contain any "junk", i.e. elements unreachable from 0 by successive incrementation. To rule out nonstandard models containing "junk" is the main purpose of the induction axiom. If the first domino falls, then:
 * in the standard model, all dominoes will fall, as explained in the article;
 * in the nonstandard model, the extra ("junk") dominoes won't fall.
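The two bullet cases can be simulated with a toy successor function (all names and the encoding of the junk element as the string "j", its own successor, are mine):

```python
# Toy simulation of the two models sketched above: propagate the "fall"
# from 0 along successors and see which dominoes are ever reached.

def falls(successor, start, target, limit=1000):
    x = start
    for _ in range(limit):
        if x == target:
            return True
        x = successor(x)
    return False

succ_standard = lambda n: n + 1                      # standard model
succ_junk = lambda x: "j" if x == "j" else x + 1     # same, plus junk element "j"

assert falls(succ_standard, 0, 5)    # in the standard model, every domino falls
assert falls(succ_junk, 0, 17)       # ordinary dominoes still fall
assert not falls(succ_junk, 0, "j")  # the junk domino never falls
```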

Jochen Burghardt (talk) 20:56, 21 May 2013 (UTC)


 * I've managed to produce the suggested image myself (see above). If there are no objections, I'd like to insert it with appropriate text (the current caption text may not be optimal) into the article. - Jochen Burghardt (talk) 17:01, 30 October 2013 (UTC)
 * Well, there's some good stuff here if the text is worded carefully.
 * But I see a couple of problems with your proposed additions in the way that you have worded it. First of all, I'm not particularly favorable towards identifying a model-theoretic consideration as the "main purpose" of an axiom.  For someone who takes it for granted that the natural numbers are a real, unique structure, directly accessible to intuition rather than defined by axioms, the "purpose" of the axiom is simply that it helps you prove things about the intended structure (and is intuitively obviously true in that structure, so the things that you prove will also be true).  While this view is not the only one, it's an important enough one that anything we say about the "purpose" of an axiom should not be inconsistent with it.
 * Then there's another point, which is that, whatever the "purpose", the first-order axiom does not in fact succeed in excluding all nonstandard models. (The second-order axiom does succeed in excluding them, when interpreted in full second-order logic, but there is no complete and sound proof system for second-order logic.) --Trovatore (talk) 19:44, 30 October 2013 (UTC)
 * Agreeing with Trovatore, in order not to burden this article with model-theoretic stuff, I now added the picture to Peano_axioms, where I think it fits nicely. However, the current caption text is still rather clumsy, taking more space than the thumbnail image itself in my browser. Maybe a native English speaker could improve it. (The issue of the 2nd-order axiom vs. 1st-order axioms is already discussed under Peano_axioms, so it need not be mentioned in the caption; another advantage of the new location.) - Jochen Burghardt (talk) 13:50, 8 November 2013 (UTC)

Citation for Polya's equal colors
The page http://www.proofwiki.org/wiki/All_Horses_are_the_Same_Colour#History says: "This paradox was originally raised by George Pólya, and appears in his 1954 work Induction and Analogy in Mathematics." This book is available via https://archive.org/details/Induction_And_Analogy_In_Mathematics_1_, and No. 17 (p. 120 = p. 140 in the pdf) of "Examples and Comments on Chapter VII" (starting at p. 116 = p. 136 in the pdf) is about a proof of "Any n girls have eyes of the same color". I didn't find the horse version there. - Jochen Burghardt (talk) 14:47, 6 January 2014 (UTC)

cousin - definition
I recall a bit of mathematical folklore regarding an inductive definition of a cousin, in particular nth cousin m-times removed. It begins with n=-1: a -1th cousin m times removed is an m-times child (e.g. a -1th cousin 0 times removed is oneself). Does anyone know of a verifiable statement of this definition (including, of course, a correct statement of the inductive step, which escapes me at the moment)? TomS TDotO (talk) 12:10, 18 September 2014 (UTC)

"domino effect"
Please don't add unreferenced analogies. This is original research. ".. as an analogy doesn't need its own reference" - wrong. An analogy needs a reference: wikipedia policies do not allow concluding the correctness of the analogy without a reference. -M.Altenmann >t 16:37, 19 February 2015 (UTC)


 * I didn't add this analogy in the first place, it was already present on this page before I started as a wikipedia editor. I found (and still find) it great, and made another, similar, picture later.
 * Trying to verify your above claims, I read through WP:SYNTH which you had given in the edit summaries. The examples there are from the political and the legal area, and are hardly applicable to mathematics. In contrast, I found a link to WP:What SYNTH is not, where the sections SYNTH is not explanation, SYNTH is not ubiquitous (it would be, for math articles, if you were right), SYNTH is not obvious II, and SYNTH is not unpublishably unoriginal seem to support my opinion. So please give a source for your interpretation of wikipedia policies, or undo the removal of the picture and in-text explanation.
 * I think deletion of article parts should be a last resort, and found WP:PRESERVE to support this. So even if the domino analogy did need its own reference, you had better have tagged it and awaited the reactions, rather than deleting it immediately. - Jochen Burghardt (talk) 21:16, 19 February 2015 (UTC)


 * Btw: I just found the same analogy in German wikipedia at de:Vollständige Induktion, also without source. The picture is still used on 26 non-user pages across all wikis, see commons:File:Dominoeffect.png; almost all of them that I could guess from the title are about induction. I checked a few European languages, and found many mentioning something that could be a translation of domino effect (guessed from similarity to picture caption), but didn't find a source anywhere. - Jochen Burghardt (talk) 22:06, 19 February 2015 (UTC)

The analogy is not a simple explanation. An analogy may be false. An analogy may be restricted to certain circumstances. There are many other problems with analogies. An analogy must be proven to be correct. In wikipedia it is done only by references. The fact this analogy is used in other language wikipedias is irrelevant. Every wikipedia has its own policies.

The analogy is poor: in order to ensure that all N dominoes fall, you have to prove the "toppling" for every individual pair of neighbors; whereas the trick of induction is dramatically opposite. (I know how to "cure" the description, but this is beside the point.)

Furthermore, the description of the analogy is sloppy: the text "proves" that all dominoes will fall, i.e., it is written as an application of the induction rather than an explanation. This is a perfect example of the problems with original research: if the text had references, we could have verified what the source actually said. In our case the only way is bickering among wikipedians, i.e., more original research. Therefore, please follow wikipedia policies: when requested to provide a reference, there is no way you can wiggle yourself out of this without providing a reference. -M.Altenmann >t 19:44, 22 February 2015 (UTC)

P.S. Wikipedia policy says that the burden of proof is on the person who wishes to preserve the statement. Unreferenced contested text may be deleted at any time. Out of courtesy, I place the tag, to give you time to find references. I also tagged a similar domino example in "Peano axioms". If not referenced in reasonable time, they will be deleted. -M.Altenmann >t 20:12, 22 February 2015 (UTC)


 * It is your turn to give a reference for your claims. Which wikipedia policy implies that the analogy needs a reference? I had guessed from your edit summaries that you meant WP:SYNTH, and established above that you were wrong here; apparently you don't object (that's why I undid your deletions). So please give me a link to the policy page you have in mind now. (Apparently, it is a policy that is solely found in the English wikipedia, but not e.g. in the German - ?) Before that, it doesn't make sense to discuss the analogy itself. - Jochen Burghardt (talk) 09:59, 24 February 2015 (UTC)
 * No, we did not establish that I am wrong. I am surprised you are not familiar with the most fundamental policy, which trumps it all, so I thought you were doing what is called WP:Wikilawyering; for these thoughts I apologize. All statements in en: wp need a reference. WP:VERIFIABILITY: "All material in Wikipedia mainspace, including everything in articles, lists and captions, must be verifiable. All quotations, and any material whose verifiability has been challenged or is likely to be challenged, must include an inline citation that directly supports the material." See also WP:CITE, WP:RS.  -M.Altenmann >t 15:51, 24 February 2015 (UTC)

Burghardt's argument ignores the simple fact that unreferenced posts can be removed. There have been numerous Wikipedia edit wars in which people say, in effect, I don't need a reference, you need a reference. No, the burden of proof is on the person making the post, not on the person removing the post. Burghardt's post is unreferenced. If, as he claims, a finite but unbounded number of induction steps suffices, he should be able to cite such a proof in the published literature. Failing that, he should not insist on posting unreferenced material. Rick Norwood (talk) 12:57, 23 February 2017 (UTC)

Horse color example
I was taught a slightly different version of that example, viz. (Boldface indicating deviations from current article text): In this version, the step argument is valid, but there is no base case.
 * Basis: we don't care about that.
 * Induction step: Assume as induction hypothesis that within any set of n horses, there is only one color. Now look at any set of n + 1 horses. Number them: 1, 2, 3, ..., n, n + 1. Consider the sets {1, 2, 3, ..., n} and {2, 3, 4, ..., n + 1}. Each is a set of only n horses, therefore within each there is only one color. But if n is sufficiently large, the two sets overlap, so there must be only one color among all n + 1 horses.
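The flaw in the step can be made concrete: the overlap argument requires n ≥ 2, and a quick check (function and set names are mine) shows the two sets are disjoint exactly when n = 1:

```python
# The step above compares the horse sets {1,...,n} and {2,...,n+1}.
# Their overlap is what forces "one color"; it is empty precisely for n = 1.

def overlap(n):
    a = set(range(1, n + 1))      # horses 1, ..., n
    b = set(range(2, n + 2))      # horses 2, ..., n+1
    return a & b

assert overlap(1) == set()        # {1} and {2} share no horse: step fails here
assert overlap(2) == {2}          # from n = 2 on, the overlap is nonempty
print([len(overlap(n)) for n in range(1, 6)])  # [0, 1, 2, 3, 4]
```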

Anyway, the article should give examples for issues, and it would be boring to reuse the horse color example again. What about e.g. P(n) being n=n+1, with the inductive step proven by adding 1 on both sides of the induction hypothesis equation? - Jochen Burghardt (talk) 21:53, 19 February 2015 (UTC)
 * Colleague, you are engaging in unreferenced research again. If you were "taught", please provide a reference. Wikipedia does not rely on the memory of wikipedians who were taught something. Not to mention that what you were taught may not be the original version. If you know other published examples of erroneous application of induction, you are welcome to cite them. -M.Altenmann >t 19:49, 22 February 2015 (UTC)

A single-line summary at the beginning of each article...
A single-line summary at the beginning of each article would be of great benefit to the reader. Respect. — Preceding unsigned comment added by 46.233.112.79 (talk) 16:34, 26 December 2015 (UTC)