Wikipedia:Reference desk/Archives/Mathematics/2006 August 4

Formula
I've derived an inequality, and I'm having some difficulty getting it to do what I want. Its purpose is to indicate an upper bound on the variable S, given a prime i. However, both sides of it have an S, and in one of them it's deeply entangled, so it's hard to get it by itself. I've managed to pull it out by letting the bound loosen, but it goes way too far and stops saying anything useful. I'm not entirely sure how to convey its structure in the symbols I know, so I'll skip the sigma notation and just demonstrate it. For consecutive values of i:
 * 3: $$ \left \lceil \frac{s}{3} \right \rceil \ge S$$
 * 5: $$ \left \lceil \frac{s}{3} \right \rceil + \Bigg(\left \lceil \frac{s}{5} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{5} \right \rceil}{3} \right \rfloor\Bigg) \ge S$$
 * 7: $$ \left \lceil \frac{s}{3} \right \rceil + \Bigg(\left \lceil \frac{s}{5} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{5} \right \rceil}{3} \right \rfloor\Bigg) + \Bigg(\left \lceil \frac{s}{7} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{3} \right \rfloor - \left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{5} \right \rfloor + \left \lceil \frac{\left \lceil \frac{s}{7} \right \rceil}{15} \right \rceil\Bigg) \ge S$$
 * 11: $$ \left \lceil \frac{s}{3} \right \rceil + \Bigg(\left \lceil \frac{s}{5} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{5} \right \rceil}{3} \right \rfloor\Bigg) + \Bigg(\left \lceil \frac{s}{7} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{3} \right \rfloor - \left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{5} \right \rfloor + \left \lceil \frac{\left \lceil \frac{s}{7} \right \rceil}{15} \right \rceil\Bigg) + \Bigg(\left \lceil \frac{s}{11} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{3} \right \rfloor - \left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{5} \right \rfloor - \left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{7} \right \rfloor + \left \lceil \frac{\left \lceil \frac{s}{11} \right \rceil}{15} \right \rceil + \left \lceil \frac{\left \lceil \frac{s}{11} \right \rceil}{21} \right \rceil + \left \lceil \frac{\left \lceil \frac{s}{11} \right \rceil}{35} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{105} \right \rfloor\Bigg) \ge S$$

The key things to notice are that each just adds to the previous, that the denominators are, within each grouping, the products of all combinations of the primes less than i (except 2), in order of how many primes are used, and that both the sign and whether the term is rounded up or down alternate, being one way when there are an even number of prime factors in the denominator and the other when there are an odd number. (A denominator of 1, or no denominator, is counted then as having zero prime factors.) Now, in keeping this an upper bound on S, any fudging that's done would have to make the sum on the left larger than it is, or at least no smaller. The only thing I've thought of so far is removing all the floors and ceilings and replacing each with +1, since that's the maximum increasing effect each has on the fraction it surrounds, but it turns out that relaxes the bound to pretty much infinity. Any suggestions are appreciated. And, if anyone sees Rick Block, ask him to take a look, this is what I've been working on. Black Carrot 04:40, 4 August 2006 (UTC)
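The pattern just described can be transcribed directly into code (a sketch only: `lhs` is a hypothetical helper, and it assumes the lowercase s in the numerators denotes the same variable as S — inclusion-exclusion over the odd primes below each i, with even-sized subsets added and rounded up, odd-sized subsets subtracted and rounded down):

```python
from itertools import combinations
from math import ceil, floor, prod

ODD_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

def lhs(s, n):
    """Left side of the inequality built from the first n odd primes.

    For the i-th odd prime p, start from ceil(s/p) and apply
    inclusion-exclusion over products of the odd primes below p:
    even-sized subsets are added and rounded up, odd-sized subsets
    are subtracted and rounded down."""
    total = 0
    for i, p in enumerate(ODD_PRIMES[:n]):
        base = ceil(s / p)
        for r in range(i + 1):                      # subset sizes 0..i
            for combo in combinations(ODD_PRIMES[:i], r):
                d = prod(combo)                     # 1 for the empty subset
                if r % 2 == 0:
                    total += ceil(base / d)
                else:
                    total -= floor(base / d)
    return total
```

For example, lhs(s, 3) evaluates the displayed p = 7 expression; lhs(8, 3) returns 8, so the inequality still holds there.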


 * Are you looking for a formula that will be true for every p, or a method to generate a result whenever p is given? If the former, I am clueless; however, if the latter, I think you can use the following facts to find the exact solution:
 * The LHS is a non-continuous, piecewise linear function
 * The LHS is "periodic" in the sense that adding a certain integer to S results in a constant increase of the LHS (in the case of p = 7, increasing S by 105 increases the LHS by 57)
 * Breaks only occur at certain integers (in the case of p = 7, at multiples of 3, 5 and 7)
 * So you can evaluate the LHS, or LHS - S, at a few points (from the left and from the right), and use the periodicity to determine the exact behavior of the function - including the exact region(s) in which the inequality holds.
 * And about Rick Block, can't you just leave a message in his talk page? -- Meni Rosenfeld (talk) 05:31, 4 August 2006 (UTC)
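The periodicity observation for p = 7 can be confirmed numerically (a quick sketch; `lhs7` is just the displayed p = 7 expression transcribed into Python):

```python
from math import ceil, floor

def lhs7(s):
    """The p = 7 left-hand side, transcribed term by term."""
    a = ceil(s / 5)
    b = ceil(s / 7)
    return (ceil(s / 3)
            + a - floor(a / 3)
            + b - floor(b / 3) - floor(b / 5) + ceil(b / 15))

# Adding 105 = 3*5*7 to S increases the LHS by exactly 57, for every S:
assert all(lhs7(s + 105) - lhs7(s) == 57 for s in range(1, 1000))
```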


 * I've taken the liberty of working out the first few cases:
 * p = 3: Necessary and sufficient S ≤ 1
 * p = 5: Necessary S ≤ 3, sufficient S ≤ 2
 * p = 7: Necessary S ≤ 8, sufficient S ≤ 5
 * p = 11: Necessary S ≤ 21, sufficient S ≤ 17
 * This shows that in practice, there will be only a few integers you will need to check to find the solution. And it's possible that by further analyzing this one might get some insight into more general expressions. -- Meni Rosenfeld (talk) 05:51, 4 August 2006 (UTC)


 * By the way, I guess the best way to express this as a formula is (for n ≥ 2):
 * $$p_n : \sum_{m=0}^{n-2} \; \sum_{A \subseteq Z_{m}} \left \lceil (-1)^{|A|} \left \lceil \frac{S}{p_{m+2}} \right \rceil \prod_{k \in A} \frac{1}{p_{k+2}} \right \rceil \ge S$$
 * -- Meni Rosenfeld (talk) 09:44, 4 August 2006 (UTC)

Yeah, I tried almost exactly that, but I don't see a way to make the floor and ceiling alternate using a standard notation. As it is, that one only rounds things up, which in the subtraction steps would lead to a lowering of the bound that can't be trusted. Similarly, rounding everything down would cause problems in the addition steps, which is why I just wrote it out the long way. What I'm looking for right now, to answer the earlier question, is an upper bound on S given each prime value of the other variable (except 2). What do you mean by "necessary" and "sufficient"? Black Carrot 19:08, 4 August 2006 (UTC)


 * I've made some progress. The left side, as a function of S with a constant p, can be very closely approximated by a line y = aS, where a = 1-(1-1/3)(1-1/5)(1-1/7)...(1-1/p), and can be given a linear upper bound a constant distance above that. Black Carrot 23:51, 4 August 2006 (UTC)
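That approximation can be illustrated for p = 7, where a = 1 − (2/3)(4/5)(6/7) = 57/105 (a sketch; the function below is the p = 7 expression from the top of the thread):

```python
from math import ceil, floor

def lhs7(s):
    """The p = 7 left-hand side from the top of the thread."""
    a = ceil(s / 5)
    b = ceil(s / 7)
    return (ceil(s / 3)
            + a - floor(a / 3)
            + b - floor(b / 3) - floor(b / 5) + ceil(b / 15))

slope = 1 - (1 - 1/3) * (1 - 1/5) * (1 - 1/7)   # = 57/105 for p = 7
# At multiples of 3*5*7 = 105 the LHS lands exactly on the line y = slope*S:
for k in (1, 10, 100):
    assert lhs7(105 * k) == round(slope * 105 * k)
```

Between those multiples the LHS wobbles a bounded distance above and below that line, which is what makes a linear upper bound at a constant offset possible.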


 * Further progress, of a sort. Given the linear bound on the left side y = aS + b, I can say for certain that b will never reach 2^(n-1) - 1, where n is the index of prime p. So, solving what is then a linear inequality, I get $$S \le (2^{n-1}-1)\prod_{p=3}^{p_n} \frac{p}{p-1} $$. This is, I believe, at or only marginally above an exponential bound. Can anyone tell me whether an exponential is a better bound than a primorial? Black Carrot 05:06, 5 August 2006 (UTC)
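For concreteness, the bound can be evaluated numerically (a sketch; it assumes primes are indexed from p_1 = 2, so p = 7 corresponds to n = 4):

```python
from math import prod

PRIMES = [2, 3, 5, 7, 11, 13, 17]

def bound(n):
    """(2^(n-1) - 1) times the product of p/(p-1) over the odd primes up to p_n."""
    return (2 ** (n - 1) - 1) * prod(p / (p - 1) for p in PRIMES[1:n])

# The bound grows like 2^n times a slowly growing product:
for n in range(2, 6):
    print(n, bound(n))
```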


 * About the notation: Didn't you see how I've put the alternating sign inside the ceiling function? I've used the formula
 * $$\lceil -x \rceil = - \lfloor x \rfloor$$
 * while taking advantage of the fact that conveniently, the floors appear together with the negative sign. By necessary and sufficient I meant exactly that - in the case of p = 11, if S ≤ 17 then the inequality holds, and if the inequality holds then S ≤ 21. The latter is what you essentially asked for - the former is just a bonus. I understand that what you are looking for is a formula, and not an algorithm. In this area you have progressed more than me, and all I can say is that your bound matches my empirical calculations (in which the exact bound more than doubles for each increment of n), and that 2^n is better than p_n# - they include the same number of factors, but the factors of the latter are larger. -- Meni Rosenfeld (talk) 11:34, 5 August 2006 (UTC)


 * Awesome. Thank you very much. And you're right about the ceilings, I don't know why I didn't think of it, I've been doing exactly that on my calculator for years. So, given the calculations you've done, do you think the bound I derived is on the same order of magnitude (ie, just above 2^n) as what it actually winds up at? By that I mean, do you think it's worth seeing if it could be, say, just polynomial? Black Carrot 19:53, 5 August 2006 (UTC)


 * Of course, by checking just a few cases we can't rule out the option of a polynomial, but it seems extremely unlikely. Something a little above 2^n gives a much better match, so this is what I think the efforts should be focused on. For your reference, the values of S where the inequality is first broken (to their right) are, for consecutive n : {1, 2, 8, 17, 38, 82, 172, 356, 752, 1603, 3389, 7046}. These values are not exactly what your question was about, but they're related and slightly lower (so your bounds are slightly higher). -- Meni Rosenfeld (talk) 21:32, 5 August 2006 (UTC)
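The start of the quoted sequence can be reproduced by brute force (a sketch; `lhs` reads the formulas at the top of the thread as inclusion-exclusion over subsets of the smaller odd primes, and `last_good` scans for the first break):

```python
from itertools import combinations
from math import ceil, floor, prod

ODD_PRIMES = [3, 5, 7, 11]

def lhs(s, n):
    """Left side of the inequality built from the first n odd primes."""
    total = 0
    for i, p in enumerate(ODD_PRIMES[:n]):
        base = ceil(s / p)
        for r in range(i + 1):
            for combo in combinations(ODD_PRIMES[:i], r):
                d = prod(combo)
                total += ceil(base / d) if r % 2 == 0 else -floor(base / d)
    return total

def last_good(n):
    """Largest S reached before lhs(S, n) >= S first fails."""
    s = 1
    while lhs(s + 1, n) >= s + 1:
        s += 1
    return s

print([last_good(n) for n in range(1, 5)])   # → [1, 2, 8, 17]
```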


 * Greatly appreciated. One thing further: I think I can prove a slightly lower bound, which might make for a shallower curve. It's just a little different:
 * 3: $$ \left \lceil \frac{s}{3} \right \rceil \ge S$$
 * 5: $$ \left \lceil \frac{s}{3} \right \rceil + \Bigg(\left \lceil \frac{s}{5} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{5} \right \rceil}{3} \right \rfloor\Bigg) \ge S$$
 * 7: $$ \left \lceil \frac{s}{3} \right \rceil + \Bigg(\left \lceil \frac{s}{5} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{5} \right \rceil}{3} \right \rfloor\Bigg) + \Bigg(\left \lceil \frac{s}{7} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{3} \right \rfloor - \left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{5} \right \rfloor + \left \lceil \frac{\left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{5} \right \rfloor}{3} \right \rceil\Bigg) \ge S$$
 * 11: $$ \left \lceil \frac{s}{3} \right \rceil + \Bigg(\left \lceil \frac{s}{5} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{5} \right \rceil}{3} \right \rfloor\Bigg) + \Bigg(\left \lceil \frac{s}{7} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{3} \right \rfloor - \left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{5} \right \rfloor + \left \lceil \frac{\left \lfloor \frac{\left \lceil \frac{s}{7} \right \rceil}{5} \right \rfloor}{3} \right \rceil\Bigg) + \Bigg(\left \lceil \frac{s}{11} \right \rceil - \left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{3} \right \rfloor - \left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{5} \right \rfloor - \left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{7} \right \rfloor + \left \lceil \frac{\left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{5} \right \rfloor}{3} \right \rceil + \left \lceil \frac{\left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{7} \right \rfloor}{3} \right \rceil + \left \lceil \frac{\left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{7} \right \rfloor}{5} \right \rceil - \left \lfloor \frac{\left \lceil \frac{\left \lfloor \frac{\left \lceil \frac{s}{11} \right \rceil}{7} \right \rfloor}{5} \right \rceil}{3} \right \rfloor\Bigg) \ge S$$


 * As you can see, writing it as a sum gets even harder, but I can guarantee (since many oppositely-pointed rounding signs are added) that it's at least a little lower than the other, and I think they'll diverge significantly. Black Carrot 04:48, 6 August 2006 (UTC)

Consistency
I'm reading a book on Gödel's Incompleteness Theorem (more specifically, Gödel's Theorem: an incomplete guide to its use and abuse), which has a claim I don't quite follow:
 * "We can now state a couple of basic connections between negation, provability, consistency, and undecidability. The first connection consists in the fact that an inconsistent theory has no undecidable statements. This is because in an inconsistent theory, every statement in the language of the theory is provable, by a rule of reasoning known in logic by the traditional name of ex falso quodlibet, "anything follows from a falsehood." (It should really be "anything follows from a contradiction.")" - pages 18-19

I don't get this. I understand that, having found a contradiction, it's possible to backtrack and prove the opposite of everything that could be proven in the first place (although I'm not certain that would work in all cases, I'm willing to go with it), but what's this about undecidable things becoming decidable whenever there's an inconsistency in something else? Black Carrot 04:54, 4 August 2006 (UTC)


 * Take a look at Truth table. If P is a false statement, and Q is any statement you can imagine, then the statement P → Q is true. This is the meaning of "anything follows from a falsehood/contradiction". Since implication is transitive, if you have shown that a certain system can prove some statement which is false, then anything follows from this falsehood, meaning that you can prove any statement - so all statements are true. In this case, you can of course decide for each statement if it is true or not - it is true. But an inconsistent theory is exactly one in which you can prove a contradiction. So, in an inconsistent theory, you know every statement is true. -- Meni Rosenfeld (talk) 05:08, 4 August 2006 (UTC)
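That truth-table fact is easy to spell out (a trivial sketch of material implication):

```python
def implies(p, q):
    """Material implication P -> Q, i.e. (not P) or Q."""
    return (not p) or q

# Whenever P is false, P -> Q is true no matter what Q is:
assert implies(False, False) and implies(False, True)

# The full truth table:
for p in (False, True):
    for q in (False, True):
        print(f"{p!s:5} -> {q!s:5} : {implies(p, q)}")
```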


 * Rather than "you know every statement is true": in an inconsistent theory you can prove that every statement is true, just as you can prove every statement false. (Igny 14:05, 4 August 2006 (UTC))


 * Yeah, that was my intention. -- Meni Rosenfeld (talk) 14:40, 4 August 2006 (UTC)


 * A source of confusion may be that "undecidable" has several meanings. The meaning here is apparently the same as "independent". --Lambiam Talk 17:44, 4 August 2006 (UTC)


 * Be careful with the word "true". In an inconsistent theory, everything is provable, not necessarily true. In fact, once you know the theory is inconsistent, you know that at least one of its axioms is not true (or perhaps, that at least one of its rules is not sound). --Trovatore 17:54, 4 August 2006 (UTC)


 * The reasoning used by Gödel is like a Klein bottle version of Ouroboros. Mathematicians of the time supposed that if a statement was true in an axiom system, then it could be proved true in that axiom system. Though this seems natural and "obvious", Gödel showed we had a nasty choice: Any sufficiently powerful system of axioms is either incomplete or inconsistent. If we can prove everything that is true (the system is complete), then we can prove anything (the system is inconsistent). Actually, what he showed is worse still. Using only enough axioms to describe simple integer arithmetic, he constructed a single true statement that forced the choice.
 * This was not a mundane "proof by contradiction"; mathematicians had used such proofs for millennia. No, this was an attack on the idea of proof itself, on the systems of axioms and rules of deduction within which a proof was performed. The truth of the statement within the system was not in question; the question was whether the system itself could prove its truth. Gödel cleverly constructed the statement so that proving the statement true would force the system to be declared inconsistent.
 * Computer science uses much the same tactic in the halting problem. It may be helpful to study the reasoning in such a context, separate from logic jargon and Gödel numbers. --KSmrqT 17:56, 4 August 2006 (UTC)


 * There's no such thing as "true in an axiom system"; axiom systems are syntactic rather than semantic. Yes, before Gödel, it was possible to hope that it might be possible to blur the distinction, but we should not do so now, even when describing matters in retrospect. --Trovatore 19:54, 4 August 2006 (UTC)


 * Please forgive me if I use somewhat informal language in trying to convey the essence of an idea. Logic and proof theory, done formally and carefully, bring a tremendous amount of new jargon and interpret even familiar words in confusing new ways. Then we get the ideas themselves, which are notoriously disorienting to many, as the original question bears witness.
 * For comparison, our article on entailment says the following:
 * "Ideally, semantic implication and logical implication would be equivalent. However, this may not always be feasible.  (See Gödel's incompleteness theorem, which states that some languages (such as arithmetic) contain true but unprovable sentences.)"
 * Frankly, I thought it would be unhelpful to start flinging about things like entailment, tautology, modus ponens, modus tollens, or cut elimination. (In fact, I started down that path then reconsidered!) But perhaps you can succeed where I abstained.
 * A more complete, and perhaps more careful, examination can be found here. --KSmrqT 20:58, 4 August 2006 (UTC)


 * I don't object to informal language. Actually the ideas are not that hard to get across informally, if you're willing to adopt a Platonist perspective, which is the one that makes the ideas clearest: Statements of arithmetic are either really true or really false, and this has nothing to do with axioms at all, but rather with the natural numbers, which are real (though abstract) objects independent of our reasoning about them. However, any finite collection of axioms about the naturals (or even c.e. collection, but we can defer that point) is inadequate to prove all those truths, without proving falsehoods as well.
 * Then you can point out that you don't really have to be a Platonist to understand the point (though it helps); there are awkward workarounds for other philosophical points of view. But even for those it's not helpful to confuse truth and provability; it's better, if you have to, to discard the idea of truth altogether. --Trovatore 21:33, 4 August 2006 (UTC)

I'll come back to some of the other responses, involving the theorem in general, once I understand the answer to my first question. Here's the thing. It seems to me that "undecidable" could be translated to "unreachable", as in, the axioms of the system, plus all possible logical extensions of them allowed in the system, collectively fail to produce either the statement or its opposite. Therefore, it doesn't seem to be a matter of falsehood producing falsehood, which I get, but rather, a matter of falsehood magically extending the reach of the system to something it's not capable of producing in the first place. Black Carrot 23:27, 4 August 2006 (UTC)


 * I think you're missing something fundamental, but I can't quite read your mind enough to figure out what it is. Maybe an example will clarify. Suppose you're working in a language that has constant symbols RED, GREEN, and BLUE, and a symbol for greater-than. You have just one axiom, which is
 * RED > GREEN
 * and no other axioms at all.
 * Now it should be pretty clear that the statement
 * GREEN > BLUE
 * is independent of this axiomatic system (I won't get into a proof, but you should be able to see that you haven't mentioned BLUE at all in the axioms, so you can't tell if GREEN is greater than BLUE or not).
 * Now, though, suppose you add the following axiom:
 * ¬(RED > GREEN)
 * (to be read, "RED is not greater than GREEN"). Now your inconsistent axiomatic system is capable of proving GREEN > BLUE, and it's also capable of proving ¬(GREEN > BLUE). Hope this helps, --Trovatore 00:18, 5 August 2006 (UTC)


 * I don't yet understand the answer, but I appreciate the framework. Now, your "independent" seems to be expressing the same thing as my "unreachable", the fact that the first system says nothing about the given statement at all. How, though, is this system now capable of proving GREEN > BLUE, just because both RED > GREEN and ¬(RED > GREEN) are now provable? As far as I can see, neither the axioms on their own nor the logic that extends them says anything about BLUE, in part because you didn't actually specify any rules of logic to draw conclusions with, not even some sort of transitivity property. All we have is the axioms, neither of which mentions BLUE, and neither of which uses the > symbol in a context other than the relationship between RED and GREEN. Black Carrot 04:55, 5 August 2006 (UTC)


 * Here's the proof that GREEN > BLUE. Suppose otherwise. Then RED is greater than GREEN (axiom), and RED is also not greater than GREEN (another axiom). But that's a contradiction. Therefore the assumption, namely that GREEN was not greater than BLUE, must have been false. Therefore GREEN > BLUE.
 * This might sound a little silly, since you didn't actually use the assumption to get the contradiction, but it's a perfectly valid proof in classical logic. There is a school that is so offended by the apparent silliness that they prefer what is called relevant logic, but AFAIK there's no one canonical answer as to what relevant logic actually is. (I do think that the natural-language meaning of the word "if" cannot be formalized in classical logic, and that you have to use some flavor of relevant logic to do it, so I'm not totally against the concept; I just don't think it's very effective for understanding mathematics.) --Trovatore 06:22, 5 August 2006 (UTC)


 * There is invisible background, hidden in the presumption of classical logic. In classical logic, for example, we have tautologies that we can assert regardless of what we use to fill in the blanks. For example, for any proposition A we can say that A → A, or that (¬A)∨A, or that false → A. Also, given A → B and given A we can deduce B.
 * To construct a particular "language", a term used here in a highly technical sense, we then add statements relevant to our interest, for example "RED > GREEN". In real life we may know what the sentence "RED > GREEN" means, but for purposes of logical deduction we must ignore all that. The symbols "RED", "GREEN", and ">" are abstract symbols, and their combination is just an abstract sentence. (However, the symbols "¬", "∨", and "→" have pre-assigned logical meaning.)
 * Now suppose we include in our language both A and its negation, ¬A, where A is any proposition at all. In classical logic that is enough to allow us to prove any proposition B. For example, even if A is "water is wet" and B is "pigs can fly", given both A and ¬A we can deduce B. This may seem excessive! But that's (classical) logic. (There are alternative logics, but these have difficulties of their own.)
 * We can formally argue as follows: From ¬A we can deduce (¬A)∨B, which is the same as A → B. But since we also have A, we can now deduce B. --KSmrqT 07:29, 5 August 2006 (UTC)
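Both steps of that formal argument can be checked by brute force over truth assignments (a small sketch):

```python
def implies(p, q):
    """Material implication, (not p) or q."""
    return (not p) or q

VALS = (False, True)

# Step 1: ¬A ∨ B is the same truth function as A → B:
assert all(((not a) or b) == implies(a, b) for a in VALS for b in VALS)

# Ex falso quodlibet as a tautology: (A and not A) → B holds for every A, B,
# so a theory proving both A and ¬A proves any B whatsoever.
assert all(implies(a and not a, b) for a in VALS for b in VALS)
```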


 * That was what I was missing - the proof by contradiction. Nobody had actually got around to saying how it was proven in the first place. So, it's not exactly that you can prove that GREEN > BLUE has a sensible basis, but rather, that using an indirect proof that all of mathematics relies on (ie, that if statement A isn't false, we're all screwed anyway), we can prove that any statement A had better be false, and therefore is (knock wood). From this, it does indeed follow that, since we constantly base disproofs on the assumption that there's no such thing as a contradiction, any system that has contradictions could well disprove (and therefore prove the opposite of) anything. Did I get that right? Black Carrot 19:44, 5 August 2006 (UTC)


 * You know, I'm not quite sure how that's truly acceptable in formal logic. From a relevant article: "In formal logic, reductio ad absurdum is used when a formal contradiction can be derived from a premise, allowing one to conclude that the premise is false. If a contradiction is derived from a set of premises, this shows that at least one of the premises is false, but other means must be used to determine which one." So, what means were used to determine which of the premises (RED > GREEN, GREEN > RED, GREEN > BLUE) is causing the problem? Black Carrot 21:01, 5 August 2006 (UTC)


 * RED > GREEN and ¬(RED > GREEN) (note that this is not the same as GREEN > RED) are axioms, so they can't be causing the problem. -- Meni Rosenfeld (talk) 21:18, 5 August 2006 (UTC)


 * That's just what I was about to say. Note, though, that this is only within the context of the given axiomatic system. In reality, of course, it's precisely the bad axioms that are causing the problem, but that doesn't change the fact that, if you accept those axioms, then you must also accept the conclusion. --Trovatore 21:22, 5 August 2006 (UTC)


 * Proof by contradiction (reductio ad absurdum) is not the same as proof from contradiction. The latter is what I demonstrated; the contradiction cannot be avoided, and so we can deduce any proposition. The former is a game of "what if", where we attempt to add a statement to our language, discover that it forces a contradiction, and deduce the negation of the statement (which avoids the contradiction).
 * A note to our logic editors: As I've glanced through our logic-related articles in responding to this thread, I've been mildly disappointed at the quality. I know logic is not as popular as complex analysis, but a little attention while school's out might be nice. Just a thought.
 * Although our editors and articles may be of some help in coming to grips with Gödel, there is no substitute for an author like Raymond Smullyan. If the book you're reading is leaving you confused, try something by Smullyan. For more entertainment mixed with your mathematics, you might look for a copy of the Pulitzer Prize-winning Gödel, Escher, Bach (ISBN 978-0-465-02656-2) by Doug Hofstadter. --KSmrqT 23:33, 5 August 2006 (UTC)


 * Thanks, I will. If you don't mind, I don't quite see the difference between the two. Could you give an example of each, and point it out to me? For instance, which does Trovatore's example count as? Black Carrot 00:20, 6 August 2006 (UTC)


 * Yeah, K, I'm sorry but I have to say I don't see any distinction here that's important in classical logic. The proof I gave is a reductio ad absurdum. Granted, it's an unusual one in that the assumption-towards-a-contradiction is never used in any nontrivial way, but in classical logic it doesn't need to be; you can "use" an assumption simply by stating it as part of the proof. --Trovatore 02:20, 6 August 2006 (UTC)


 * I assume we're talking about the situation with two axioms "RED > GREEN" and "¬(RED > GREEN)". If so, I would not call my proof a reductio ad absurdum. Your proof and your terminology only add to the confusion.
 * What you should do (IMHO) is reinforce the simpler proof that I gave, using only transparent tautologies, where I show that "the presence of an inconsistency allows us to deduce anything". That's what Black Carrot still doesn't get.
 * To quote, "I'm not quite sure how that's truly acceptable in formal logic." Then comes a mention of reductio ad absurdum, which is not what I used. This is the confusion of the original post reasserting itself. Unless you think there is a problem with my proof, you should help Black Carrot accept it as valid, not go running off after a reductio red herring. --KSmrqT 06:06, 6 August 2006 (UTC)
 * Your proof is fine; I don't see how it's "simpler". I think it's a bit harder to understand, actually. It hides the non-relevant use of the inconsistent axioms in one of two steps: Either the identity of ¬A∨B with A→B or (if you take that to be the definition of →) modus ponens itself. --Trovatore 06:26, 6 August 2006 (UTC)


 * Yeah, I think I get your proof, and I read what you linked to, but really, it sounds like you're saying the same thing as Trovatore, except 1) you did it with symbols, and 2) you quoted more specific rules. It seems, though, as though 1) the rules were designed specifically for the purpose of guiding us reliably, and 2) that's exactly the kind of logical flaw that should have been ironed out by now. I don't mean to be difficult, but I don't see that either formulation of it is a necessary and integral part of the rest of the logical system, so why would we use something specific that's so fraught with peril? Or why wouldn't we give it a conditional certainty, like we do with conjectures? Plenty of things (or so I've been told) start with "assume the Riemann Hypothesis is true...", why couldn't they say "assume my premises are actually consistent..."? Black Carrot 06:24, 6 August 2006 (UTC)


 * You've mixed together so many things it's difficult to respond clearly.
 * Trovatore gave a specific proof of a specific example showing how a contradiction can lead to the deduction of an unrelated statement. (The generalization should be obvious.) I gave a different proof, directly demonstrating the generalization, the fact that a contradiction always leads to the deduction of all statements. We both used classical logic. We're both explaining the same idea.
 * So, you feel uncomfortable with the conclusion, and now want to attack the system that produced it. You're not the first! I've already pointed you to alternative logics; what more do you want? Shall I try to justify classical logic? It's certainly stood the test of time and proved its utility. Shall I try to walk you through the intricacies of alternative logics and explain how they avoid some classical difficulties but introduce new ones?
 * There's not much reason to introduce a meta-assumption about consistency; without consistency the whole deductive apparatus is useless, as we have shown. Anyway, you're getting way over your head. A reference I don't think I've mentioned before is the basic text by Enderton, A mathematical introduction to logic, 2/e (ISBN 978-0-12-238452-3); also see our article on mathematical logic, and Rusin's Mathematical Atlas page on mathematical logic and foundations (both of which include further references). And if that's not enough, try this list.
 * But don't assume that alternative logics will let you escape Gödel or Tarski; consider that we'd like to be able to verify deductions with a computer, and computers (formalized as Turing machines) have their own halting problem, a form of incompleteness. The problem is remarkably fundamental. --KSmrqT 08:46, 6 August 2006 (UTC)

I'm aware that I know nothing. This is the first book on logic I've read that didn't skip the theory and development and go straight to the simplest mechanics. And I will, of course, go through the introductory texts you've suggested. If you'll glance back, though, I believe I was fairly careful to stick to my original question, which was prompted by an unclear and unexplained claim at the beginning of the book. I now understand at least in what direction the claim was aiming, but as my subsequent questions have apparently illustrated, I still don't get the root problem. I am not, however, entirely slow, neither am I convinced I'm yet in over my head. Neither am I, by the way, childish enough to hate logic for an inconvenient truth. To continue, you just repeated something, "We both used classical logic. We're both explaining the same idea," that I had said one paragraph before, "it sounds like you're saying the same thing as Trovatore." We even seem to have included the same nuances, such as your use of more general symbols. The reason I felt the need to point that out was that you seemed to say the opposite; reading back, though, it seems you were railing against something slightly different, which I can't quite get a handle on. It seems, I would say, that you consider your proof simpler and purer than his, but essentially the same, and consider neither to be a reductio ad absurdum, which you think was a misleading thing for me to bring up in the first place. Is that right? If so, using your words as best I can understand them, it seems the distinction between them is that one (the bad one) observes that statement A leads to a contradiction that wouldn't otherwise be there, and therefore refutes A, whereas the other (the good one, the one the book was talking about) observes that there exists a contradiction somewhere, and therefore automatically confirms A, regardless of what A is. How far off am I? Black Carrot 02:45, 7 August 2006 (UTC)


 * Trovatore and I are making the same point, yes; but our proofs are not the same. Trovatore did give a reductio style proof; I did not. It is true that I view introducing reductio as poor pedagogy, in the context of this thread; but I would feel uncomfortable claiming that Trovatore's proof was "bad" or "less pure".
 * If I may paraphrase reductio, it says "Assume &not;A; deduce some contradiction; conclude A." What Trovatore and (apparently) the book both say is: if the language already contains a contradiction, then we can always deduce that contradiction, and therefore we can always conclude A. It doesn't matter what A is, it doesn't matter that we don't use A in the deduction, and it doesn't even matter that the contradiction may have no visible connection to A.
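 * The "we can always conclude A" step (ex falso quodlibet) can be made precise. Here is a minimal sketch in Lean 4 (my phrasing, not the book's or Trovatore's): from A and &not;A together, any proposition B follows, even though B plays no role in the contradiction.

```lean
-- Ex falso quodlibet: if both A and ¬A are deducible, then any B follows,
-- with no visible connection between B and the contradiction.
example (A B : Prop) (h : A) (hn : ¬A) : B :=
  absurd h hn

-- Equivalently: from False, everything follows.
example (B : Prop) (h : False) : B :=
  h.elim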
 * I have no objection to the validity of this argument, I just think it's unnecessarily confusing. The last thing you need is more confusion.
 * In this area you are going to need to read very carefully, and to begin to notice distinctions that you've never observed before. For example, you say you do not hate logic. What I suggested was different: that you objected to the conclusions of classical logic; we have many different logics — plural — to choose from. One thing that suggests you're in over your head is that you continue to miss distinctions that are important. That's OK, I suspect it happens to everyone learning mathematical logic. I still slip up and do it myself from time to time.
 * Now, let's go back to your original question about decidability. Take any statement, A, let's say in a first-order language describing the natural numbers with addition and multiplication and so on. If I can take the axioms of the language, add in all the things that come with classical logic (tautologies, modus ponens), and deduce A, then A is decidable. If I can instead deduce &not;A, then A is decidable. That is, decidability is about being able to reach a conclusion. If no matter what I do I cannot deduce either one, then A is undecidable.
 * But here's one place where we need to be very careful. The process of deduction takes place in our system of logic. It is a purely mechanical application of rules. We are not allowed to look at an actual example of natural numbers (a model) and say, "I can deduce A because I can see it's true in the model." In fact, that's the heart of what Gödel is worrying about.
 * It may very well happen, says Gödel, that something is true (in our model), but that our system of logic cannot deduce the assertion.
 * Or, says Gödel, we may be able to show that our system contains a contradiction. In that case, we can deduce A, we can deduce &not;A, we can deduce anything at all. Therefore nothing is undecidable. But, of course, our system of logic is now totally useless!
 * So Gödel confronts us with a choice: choose incompleteness, or choose inconsistency. We don't like not being able to prove things that are true, but we'd rather live with that than live with the consequences of inconsistency — which are catastrophic.
 * To make these ideas more tangible, consider this. Ten years ago mathematicians could speculate that Fermat's Last Theorem was true (which all the evidence supported), but that we might not be able to prove it. Just maybe it was undecidable. Maybe all our axioms and all our rules of deduction and all the cleverest mathematicians who ever lived would never be enough to produce a proof, even though the theorem described something that was true. Such a situation is possible because the factual truth is a property of the number system itself, but the logical deduction has to take place in the logic system describing the number system.
 * Before Gödel we could suppose that this distinction was not important; now we know better. --KSmrqT 07:22, 7 August 2006 (UTC)


 * Afterword: Lewis Carroll captured the confusion of learning mathematical logic nicely in Through the Looking-Glass, with this famous exchange between Alice and the White Knight:
 * 'You are sad,' the Knight said in an anxious tone: 'let me sing you a song to comfort you.'
 * 'Is it very long?' Alice asked, for she had heard a good deal of poetry that day.
 * 'It's long,' said the Knight, 'but very, very beautiful. Everybody that hears me sing it—either it brings the tears into their eyes, or else—'
 * 'Or else what?' said Alice, for the Knight had made a sudden pause.
 * 'Or else it doesn't, you know. The name of the song is called "Haddocks' Eyes."'
 * 'Oh, that's the name of the song, is it?' Alice said, trying to feel interested.
 * 'No, you don't understand,' the Knight said, looking a little vexed. 'That's what the name is called. The name really is "The Aged Aged Man."'
 * 'Then I ought to have said "That's what the song is called"?' Alice corrected herself.
 * 'No, you oughtn't: that's quite another thing! The song is called "Ways and Means": but that's only what it's called, you know!'
 * 'Well, what is the song, then?' said Alice, who was by this time completely bewildered.
 * 'I was coming to that,' the Knight said. 'The song really is "A-Sitting on a Gate": and the tune's my own invention.'
 * So saying, he stopped his horse and let the reins fall on its neck: then, slowly beating time with one hand, and with a faint smile lighting up his gentle foolish face, as if he enjoyed the music of his song, he began.
 * The author (himself a logician) distinguishes between a thing, what it's called, the name of the thing, and what the name is called. Also, the comment about tears reflects classical logic's tertium non datur. This was written long before Gödel. --KSmrqT 23:47, 7 August 2006 (UTC)

Series
A while ago I was given a math puzzle, which I'll put up here. I've answered a few of the questions, but would still like to hear the answers to the others if anyone can figure them out. In the following sequence: 0, 1, 10, 2, 100, 11, 1000, 3, 20, 101, 10000, 12, 100000, 1001, 110, 4, 1000000, 21 ...
 * 1) What are the next 5 numbers?
 * 2) What is the value of the element at index 50?
 * 3) Does any number other than twelve equal its index?
 * 4) Can this be represented as a 1:1 function?

Thanks, 48v 18:02, 4 August 2006 (UTC)


 * SPOILER WARNING.
 * Those who want the pleasure of playing with the puzzle themselves, should not visit The On-Line Encyclopedia of Integer Sequences immediately.
 * There it is listed (sequence A054841) as a puzzle sequence posted to news:rec.puzzles in 1997 by Evans A. Criswell.
 * It's a great resource to know about. --KSmrqT 18:39, 4 August 2006 (UTC)


 * Thank you for that, I've seen it before. It does answer the first two questions (which I solved some time ago but left for interested mathematicians, as I thought they were decent puzzlers). However, I don't see an answer to the second two. One could argue that they offer a 1:1 function, but it requires the use of factoring. I should have asked if there was an explicit function. As for the third question, I cannot find any way to rigorously prove the non-existence of other such numbers, although I've shown exhaustively that there are none below 10^8. Any idea(s) about the remaining two questions? Thanks again, 48v 18:49, 4 August 2006 (UTC)


 * The 3rd and the 1024th element are both 10, so this is not a bijection. --Lambiam Talk 20:33, 4 August 2006 (UTC)
 * The problem of finding a number like 12 can be phrased as: find a number that is the Gödel number of its own reverse (in decimal representation). For example, 12 reversed is 21, which has Gödel number 2^2 · 3^1 = 12. (This is not the "official" Gödel number, and the mapping is not injective since it disregards trailing zeros.) A related problem is finding numbers that are their own Gödel numbers, without the reversal. Such numbers are known as Meertens numbers. Only one Meertens number has been found. Maybe we can call 12 a Snetreem number. --Lambiam Talk 01:49, 5 August 2006 (UTC)
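 * To make the encoding concrete, here is a short Python sketch of the digits-as-exponents scheme (the function name `encode` and the trial-division helper are mine, not from the thread). Reading right to left, the k-th digit is the exponent of the k-th prime; the sketch assumes every exponent fits in one decimal digit, which is exactly where the mapping stops being injective.

```python
def encode(n):
    # Trial-division factorization: exps maps each prime factor to its exponent.
    exps = {}
    m, d = n, 2
    while d * d <= m:
        while m % d == 0:
            exps[d] = exps.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        exps[m] = exps.get(m, 0) + 1
    if not exps:
        return 0  # n = 1 has an empty factorization
    # All primes up to the largest prime factor, smallest first.
    primes = [p for p in range(2, max(exps) + 1)
              if all(p % q for q in range(2, int(p ** 0.5) + 1))]
    # Digit for the largest prime comes first; the exponent of 2 is the last digit.
    return int(''.join(str(exps.get(p, 0)) for p in reversed(primes)))
```

 * Running `[encode(i) for i in range(1, 19)]` reproduces the eighteen given terms, with `encode(12) == 12`, and both `encode(3)` and `encode(1024)` give 10, matching the non-injectivity example above.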


 * 81312000 = 2^8 * 3^1 * 5^3 * 7^1 * 11^2 * 13^0 * 17^0 * 19^0 —Preceding unsigned comment added by 160.129.27.169 (talk) 21:24, April 9, 2009 (UTC)
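 * The factorization above checks out; here is a quick Python sketch (variable names mine) confirming that the digits of 81312000, read left to right, are exactly the exponents of the successive primes 2, 3, 5, 7, ...

```python
# Read the digits of n, left to right, as exponents of successive primes.
n = 81312000
primes = [2, 3, 5, 7, 11, 13, 17, 19]  # one prime per decimal digit of n
value = 1
for p, e in zip(primes, (int(c) for c in str(n))):
    value *= p ** e
print(value == n)  # n reproduces itself: the self-Gödel-number property
```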

Statistics Question
What is the total number of American homeowners?

What is the breakdown per state? —The preceding unsigned comment was added by 216.70.145.8 (talk • contribs).


 * This is not a math question.  Try posting this question at Reference Desk (misc) SubSeven 19:43, 4 August 2006 (UTC)