Wikipedia:Reference desk/Archives/Mathematics/2010 August 9

= August 9 =

Stone–Čech compactification
Hi, does anyone have an idea how to calculate the cardinality of the Stone–Čech compactification of a given space X of a given cardinality? Is it $$2^{|X|}$$? And if so, how do I show it? Thanks! —Preceding unsigned comment added by Topologia clalit (talk • contribs) 06:17, 9 August 2010 (UTC)
 * Certainly not in general. For example βN is the set of ultrafilters on N (with the appropriate topology), so it has cardinality $$2^{2^{\aleph_0}}$$.  On the other hand, if X is already compact, then βX is just X (I think).  So I don't believe there's a general formula independent of the topology on X. --Trovatore (talk) 06:22, 9 August 2010 (UTC)

Thanks. But can you explain why βN has cardinality $$2^{2^{\aleph_0}}$$? Why isn't it just $$2^{\aleph_0}$$? Topologia clalit (talk) 06:37, 9 August 2010 (UTC)
 * Well, on the face of it, an ultrafilter on N is a set of subsets of N, not a subset of N, so you'd kind of expect there to be $$2^{2^{\aleph_0}}$$ of them. Of course not every set of subsets of N is an ultrafilter, so this isn't a proof.  Nevertheless it's true.  It's a theorem of some dude named Pospisil or something like that, with diacritical marks in various places.  You can look it up in Jech, Set Theory. --Trovatore (talk) 06:43, 9 August 2010 (UTC)
 * Ah, turns out someone had asked about this a long time ago on sci.math, and I wrote up Pospíšil's proof and posted it. You can find it at http://www.math.niu.edu/~rusin/known-math/00_incoming/filters . --Trovatore (talk) 06:48, 9 August 2010 (UTC)
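To get a feel for why ultrafilters are the right objects here, a small brute-force sketch (an editor's illustration, not from the thread) enumerates every ultrafilter on a three-element set. On a finite set the only ultrafilters are the principal ones, one per point, so the huge count $$2^{2^{\aleph_0}}$$ is genuinely an infinite-set phenomenon:

```python
from itertools import chain, combinations

S = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(S) + 1)
           for c in combinations(sorted(S), r)]

def is_ultrafilter(F):
    """Check the ultrafilter axioms on S by brute force."""
    F = set(F)
    if not F or frozenset() in F:
        return False
    for A in F:
        # Closed under supersets.
        if any(A <= B and B not in F for B in subsets):
            return False
        # Closed under finite intersections.
        if any(A & B not in F for B in F):
            return False
    # "Ultra": exactly one of A and its complement belongs to F.
    return all((A in F) != ((S - A) in F) for A in subsets)

# All 2^8 = 256 families of subsets of S, filtered down to ultrafilters.
families = chain.from_iterable(combinations(subsets, r)
                               for r in range(len(subsets) + 1))
ultrafilters = [frozenset(F) for F in families if is_ultrafilter(F)]
print(len(ultrafilters))  # 3 -- one principal ultrafilter per point of S
```

Each of the three survivors is principal: the intersection of its members is a single point of S. The interesting (free) ultrafilters only appear on infinite sets, and counting those is exactly Pospíšil's theorem.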

OK Thanks! I'll have a look and try to figure it out. Topologia clalit (talk) 07:12, 9 August 2010 (UTC)
 * And of course since you can build the Stone–Čech compactification out of ultrafilters of zero-sets, β of any Tychonoff space X has cardinality at most $$2^{2^{|X|}}$$. Algebraist 09:52, 9 August 2010 (UTC)
 * For specific spaces, one can often get a better bound using the same method. For example, there are only $$2^{\aleph_0}=|\mathbb R|$$ zero sets (or even closed sets) in $$\mathbb R$$, hence $$|\beta\mathbb R|\le2^{2^{\aleph_0}}=2^{|\mathbb R|}$$. (In fact, it's easy to see that one can embed βN in βR, hence the cardinality of βR is exactly $$2^{2^{\aleph_0}}$$.)—Emil J. 11:41, 9 August 2010 (UTC)
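The two bounds discussed above can be written out in a line each; a sketch in LaTeX, using only the ultrafilter description of the compactification given earlier in the thread:

```latex
% A point of \beta X corresponds to an ultrafilter of zero-sets of X,
% i.e. an element of \mathcal P(\mathcal P(X)), giving the crude bound
\[
  |\beta X| \;\le\; |\mathcal P(\mathcal P(X))| \;=\; 2^{2^{|X|}}.
\]
% For X = \mathbb R, only 2^{\aleph_0} subsets are zero-sets (or even
% closed sets), so the same argument improves to
\[
  |\beta\mathbb R| \;\le\; 2^{2^{\aleph_0}} \;=\; 2^{|\mathbb R|},
\]
% and embedding \beta\mathbb N \hookrightarrow \beta\mathbb R shows
% this bound is attained.
```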

Thanks a lot guys. I can see now that the cardinality of the Stone–Čech compactification cannot be more than $$2^{2^{|X|}}$$, if only because filters are sets of subsets of X. But why can one say that there are only $$2^{\aleph_0}=|\mathbb R|$$ zero sets or closed sets in $$\mathbb R$$? Also, if you conclude from the fact that one can embed βN in βR that the cardinality of βR is exactly $$2^{2^{\aleph_0}}$$, does that mean that we know that $$|\beta\mathbb N|=2^{2^{\aleph_0}}$$? I mean, if we take N with the discrete topology then I guess every subset of N is clopen and therefore can be a zero set? How can you be sure that $$|\beta\mathbb N|<2^{2^{\aleph_0}}$$ is not the case? Topologia clalit (talk) 08:50, 10 August 2010 (UTC)
 * Every closed set corresponds to exactly one open set (its complement). To see there are only $$2^{\aleph_0}$$ open subsets of R, note that every open set is the union of open intervals of the form (r,s), where r and s are both rational.  There are only countably many intervals of that form, so there are only $$2^{\aleph_0}$$ possible unions of them.  The answer to your other question is the Pospíšil result again. --Trovatore (talk) 08:55, 10 August 2010 (UTC)

You are right, I should take a second look at the Pospíšil result and proof. But I'm still missing something here. Why can't we have the interval $$ (\sqrt{2},\sqrt{5}) $$ as an open set? —Preceding unsigned comment added by Topologia clalit (talk • contribs) 09:03, 10 August 2010 (UTC)
 * It is an open set. Who said it wasn't? --Trovatore (talk) 09:10, 10 August 2010 (UTC)

I understood from what you wrote above that I have to "note that every open set is the union of open intervals of the form (r,s), where r and s are both rational". I don't get it... Topologia clalit (talk) 09:18, 10 August 2010 (UTC)
 * Every open subset of R is indeed such a union. Including $$ (\sqrt{2},\sqrt{5}) $$. --Trovatore (talk) 09:17, 10 August 2010 (UTC)

You are definitely right, sorry for the confusion; somehow I had in mind that you meant that the power of the set of open intervals is countable, while you actually said that it has the power of the continuum... Topologia clalit (talk) —Preceding undated comment added 09:22, 10 August 2010 (UTC).

I keep thinking about what you have all said above, and I wonder: can one characterize in which cases $$ |\beta X|=2^{|P(X)|} $$? Topologia clalit (talk) 10:25, 10 August 2010 (UTC)


 * I doubt there's any very enlightening characterization. For example, consider X to be the disjoint union of the closed unit interval [0,1] and N (disjoint union so that e.g. the 0 of the closed interval has nothing to do with the 0 of N).  Then I expect, though I haven't proved it, that βX will be homeomorphic to β[0,1] union βN.  Since [0,1] is compact, β[0,1] is just [0,1] and has cardinality $$2^{\aleph_0}$$.  βN as before has cardinality $$2^{2^{\aleph_0}}$$.  So βX has cardinality $$2^{2^{\aleph_0}}$$, and easily X has cardinality $$2^{\aleph_0}$$, so X is one of the spaces you're trying to characterize.  But it's thus for a completely silly reason!  The part of X that makes its cardinality large is the part that plays no role in making the cardinality of βX large. --Trovatore (talk) 18:24, 10 August 2010 (UTC)
 * Oh, now I realize I was answering the wrong question -- I thought you wanted $$ |\beta X|=2^{|X|} $$. --Trovatore (talk) 18:26, 10 August 2010 (UTC)
 * β is left adjoint to the inclusion functor, and hence preserves colimits. So as you suspected, β of a disjoint union is the disjoint union of the βs. Algebraist 18:33, 10 August 2010 (UTC)
 * (When the union is finite, that is. Infinite coproducts in the category of compact Hausdorff spaces are more fancy than disjoint unions.)—Emil J. 18:41, 10 August 2010 (UTC)

I'll have to think about it for a while.. Thanks! 77.125.113.164 (talk) 09:41, 11 August 2010 (UTC)

Total differential
Good day. As we know, the total differential for
 * $$z=x+y$$

is
 * $${\operatorname dz}=\frac{\partial z}{\partial x}\operatorname dx + \frac{\partial z}{\partial y} \operatorname dy$$

but what is the total differential for
 * $$z=f(x,y)+g(t)$$

? --Merry Rabbit (talk) 14:05, 9 August 2010 (UTC)


 * If x, y and t are independent then it is
 * $$dz = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy + \frac{dg}{dt}dt$$
 * On the other hand, if x and y depend on t then it is:
 * $$dz = \left( \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt} + \frac{dg}{dt}\right) dt$$
 * Gandalf61 (talk) 14:54, 9 August 2010 (UTC)
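The two formulas can be checked numerically; a sketch (an editor's illustration, with hypothetical concrete choices f(x,y) = x·y, g(t) = t², x(t) = t, y(t) = t², none of which come from the thread) compares the direct derivative of z(t) with the chain-rule expansion from the answer above:

```python
# Hypothetical concrete choices (not from the thread).
f = lambda x, y: x * y
g = lambda t: t ** 2
x = lambda t: t
y = lambda t: t ** 2

def ddt(h, t, eps=1e-6):
    """Central finite-difference approximation of dh/dt."""
    return (h(t + eps) - h(t - eps)) / (2 * eps)

t0 = 1.3

# Direct derivative of z(t) = f(x(t), y(t)) + g(t).
direct = ddt(lambda t: f(x(t), y(t)) + g(t), t0)

# Chain-rule formula: f_x dx/dt + f_y dy/dt + dg/dt.
fx = y(t0)  # \partial f/\partial x = y for f = x*y
fy = x(t0)  # \partial f/\partial y = x
chain = fx * ddt(x, t0) + fy * ddt(y, t0) + ddt(g, t0)

print(abs(direct - chain) < 1e-6)  # True: the two computations agree
```

With these choices z(t) = t³ + t², so both computations give dz/dt = 3t² + 2t.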

What's wrong with this proof?
What's wrong with this proof? This is not homework. 85.181.49.221 (talk) 21:13, 9 August 2010 (UTC)
 * There are a couple of claimed flaws in the blog comments here (comments #28 and #55, and possibly others). -- BenRG (talk) 02:45, 10 August 2010 (UTC)
 * And a summary of possible issues, including those two, here. It will be a few days/weeks/months before anyone is sure of anything. -- BenRG (talk) 03:36, 10 August 2010 (UTC)

P versus NP problem for dummies
OK, I want to understand how this works but I don't actually understand what polynomial time is or all those other bits are. So here's how I think it works and you guys can tell me if I'm right or wrong on the level that I'm working at, which is very low.

Some things are easy to figure out using computers and some things are very hard. Some things are easy to figure out an answer to and easy to prove that an answer is correct for. This is P. On the other hand, some things are easy to prove an answer for but very difficult to figure out an answer to: for example, it's easy to prove that two prime numbers multiply into another number, but it's very tricky to take that number again and figure out the two prime numbers that we started with. These are NP problems. But the P versus NP problem is about proving that P problems and NP problems are ACTUALLY different, and not that we're just stupid and haven't figured out easy ways to do these NP problems yet. Do I have this right? —Preceding unsigned comment added by 88.108.242.37 (talk) 21:50, 9 August 2010 (UTC)
 * Almost. The thing you're missing is that all P problems are automatically NP.  NP doesn't say that it's hard to figure out the answer; it only says that it's easy to check the answer.  NP-complete says it's hard to figure out the answer (or at least as hard as it can be for an NP problem). --Trovatore (talk) 22:16, 9 August 2010 (UTC)
 * OK, sweet. 88.108.242.37 (talk) 22:23, 9 August 2010 (UTC)


 * Also, there are (probably) more than two levels of difficulty here. Factoring, for example, is likely not in P but it's also thought to be easier than the most difficult problems in NP (the NP-complete problems). A proof that P ≠ NP would tell you that the NP-complete problems are not in P, but wouldn't necessarily tell you anything about the difficulty of factoring. And one can argue with the idea that problems in P are "easy". If the best known solution for a certain problem takes $$n^{100}$$ steps for a case of size n, then that problem is in P, even though the algorithm is unusably slow in practice. Likewise an algorithm that takes $$Kn^2$$ steps, where K is Graham's number or something equally ridiculous. It's very rare for the best known algorithm for a problem to have a run time like that, but I think there are a few examples (?). -- BenRG (talk) 03:02, 10 August 2010 (UTC)
 * Conversely, not being in P doesn't make it slow. $$1.00000000000001^n$$ isn't polynomial, but it's still very quick for any size of n you are likely to come across in a real-world problem. --Tango (talk) 03:59, 10 August 2010 (UTC)


 * I feel that you shouldn't use "hard" and "easy". You should use "time-consuming" and "quick".  Factoring, which BenRG used, is an example of why "hard" and "easy" are terribly confusing.  Factoring is very easy.  Want to factor 28475938273?  See if it divides evenly by 2, then 3, then 4, then 5, then 6...  That is easy, but time-consuming.  Checking it is easy also.  If I said the factors were 17 and 295, you just multiply the two and see if you get 28475938273 as an answer.  It is easy and quick.  So, you can see that factoring is not "hard".  It is time-consuming.  To date, there is no known quick way to factor large numbers.  So, no polynomial-time solution is known.  You can read "polynomial-time" to mean "quick".  However, checking the solution is very quick - so checking it is polynomial-time. -- k a i n a w ™ 03:14, 10 August 2010 (UTC)
 * A good indication of why, when you want to be fully accurate about such things, the best approach is to learn the technical language and then use it correctly. --Trovatore (talk) 03:55, 10 August 2010 (UTC)
 * I wouldn't use "quick" either, for the reason given by Ben above. Just say "solvable in polynomial-time" and be done with it. Anything else is likely to be wrong in some way. --Tango (talk) 03:59, 10 August 2010 (UTC)
 * The problem is defining "polynomial time" for "dummies" (as the question states). In order to do so, you need to discuss upper bounds, big O, and things like that.  So, giving a "dummy" a word like "quick" allows the dummy to latch on to a word that has meaning.  Giving the dummy "polynomial time" makes as much sense as saying NP problems are a duck while P problems are a goose.  If you can define what polynomial time means without getting into more complicated topics like big O, please do. -- k a i n a w ™ 04:21, 10 August 2010 (UTC)
 * If you want to actually know what's going on, you have to learn about upper bounds and things like that. At a level short of that, "hard" and "easy" are about as good as anything else. --Trovatore (talk) 06:01, 10 August 2010 (UTC)
 * Some things just can't be understood by dummies. P=NP is a very technical problem and requires technical knowledge to understand it. We just have to accept that. --Tango (talk) 13:17, 10 August 2010 (UTC)
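Kainaw's factoring example can be made concrete with a short sketch (an editor's illustration, not from the thread): solving by trial division takes on the order of √n division steps, while checking a proposed factorization is a single multiplication.

```python
def trial_division(n):
    """Solve: search for a nontrivial factor, up to ~sqrt(n) divisions."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # no factor found: n is prime

def check(n, p, q):
    """Check: one multiplication, fast no matter how large n is."""
    return p > 1 and q > 1 and p * q == n

n = 28475938273  # the number from the discussion above

# Checking the claimed answer "17 and 295" is instant: 17 * 295 = 5015,
# which is not n, so the claim is rejected immediately.
print(check(n, 17, 295))  # False

# Solving takes many more steps (up to ~168,000 trial divisions here).
factors = trial_division(n)
if factors:
    print(check(n, *factors))  # True
```

The asymmetry between the two functions is exactly the "time-consuming to solve, quick to check" distinction: `check` runs in polynomial time in the number of digits, while no polynomial-time analogue of `trial_division` is known.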


 * You don't need to explain big O, since "O(polynomial)" is the same as "solvable in polynomially many steps", where "steps" are more or less what you'd expect—"add x and y and call the result z", "if z < w then answer Yes, else go to step 10", etc. The way I used "easy" and "hard" is standard, and I think it's reasonable enough, since the time complexity of the best algorithm does tell you something about the intrinsic difficulty of the problem. If "easy" meant "mindlessly/mechanically solvable, given unlimited time" then practically anything would be easy. -- BenRG (talk) 22:01, 11 August 2010 (UTC)


 * For dummies: A problem may be solved in a slow way and a fast way. Adding natural numbers can be done by counting: 518+357 = 519+356 = 520+355 = ... = 874+1 = 875+0 = 875, or by computing: 518+357 = (5·100+1·10+8)+(3·100+5·10+7) = (5·100+3·100)+(1·10+5·10)+(8+7) = (5+3)·100+(1+5)·10+(8+7) = 8·100+6·10+15 = 875. The time used by some method to solve a problem depends on the amount of data in the problem. Bigger problems take more time. The degree of some function f is $$\lim_{x\to\infty}\log|f(x)|/\log x$$. For polynomials this use of the word degree matches the standard use of the word. Exponential functions have infinite degree, and logarithms have degree 0. The time used for addition by counting has infinite degree, while addition by computing has degree 1. The standard multiplication algorithm has degree 2, Karatsuba multiplication has degree 1.585, and the Schönhage–Strassen algorithm has degree 1. All known methods for integer factorization have infinite degree, but checking that some integer factorization is correct has finite degree. The P=NP hypothesis says that if checking a solution has finite degree, then solving the problem also has finite degree. Bo Jacoby (talk) 08:28, 10 August 2010 (UTC).
 * Not all problems can be solved in a slow way and a fast way, that's the point (if all problems could be solved in polynomial-time then P=NP would be trivial). I'm not sure what a list of multiplication algorithms was supposed to add to the explanation, either... --Tango (talk) 13:17, 10 August 2010 (UTC)
 * And what are the dummies supposed to do with "$$\lim_{x\to\infty}\log|f(x)|/\log x$$"? -- k a i n a w ™ 18:58, 10 August 2010 (UTC)
 * No, not all problems can be solved both in a slow way and a fast way, and that's why I wrote that it may be solved in a slow way and a fast way. I wrote later that 'all known methods for integer factorization have infinite degree', so that problem apparently cannot be solved in a fast way. The list of multiplication algorithms exemplifies that the problem of multiplication can be solved in ways of different speed. Dummies need examples. The definition $$\deg(f)=\lim_{x\to\infty}\log|f(x)|/\log x$$ is elementary compared to complexity theory, and the dummy can verify that if f is the polynomial $$f(x)=\sum_{i=0}^N a_i x^i$$ where $$a_N\neq0$$, then $$\deg(f)=N$$. I should add that polynomial time means finite degree. Thanks for your comments. Bo Jacoby (talk) 07:33, 11 August 2010 (UTC).
 * But you haven't actually defined this function f. To do so, you need to explain about upper and lower bounds and all that complicated stuff. Working in terms of degrees doesn't help at all. Once you've defined the function, you can just say "if it's a polynomial or something that grows slower than a polynomial (such as a logarithm) then we call it polynomial-time". Mentioning degrees doesn't add anything. --Tango (talk) 12:43, 11 August 2010 (UTC)
 * The function f is the computing time as function of the size of the input. I do not need to explain the complicated stuff, and so working in terms of degrees does help. Bo Jacoby (talk) 20:44, 11 August 2010 (UTC).
 * Either way, "polynomial" is easier to understand than "of finite degree", so why not just say "polynomial"? --Tango (talk) 22:19, 11 August 2010 (UTC)
 * Feel free to use what you find easier to understand, of course. To me "polynomial", meaning literally "many terms", is a little confusing, because the important thing is not that it has many terms. Mathematicians say "algebraic" where computer scientists say "polynomial". That x log x is "polynomial" is not nice in my ears, while the definition shows that it has degree 1. It is a matter of taste. Bo Jacoby (talk) 04:52, 12 August 2010 (UTC).
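Bo Jacoby's degree definition can be probed numerically; a rough sketch (an editor's illustration, with arbitrary sample points and sample functions) evaluates log|f(x)|/log x at large x for a polynomial, a quasi-linear function, and an exponential:

```python
import math

def degree_at(f, x):
    """Finite-x approximation of degree(f) = lim log|f(x)| / log x."""
    return math.log(abs(f(x))) / math.log(x)

poly = lambda x: 3 * x**2 + 5 * x + 7   # true degree 2
xlogx = lambda x: x * math.log(x)       # degree 1 in the limit
exp_ = lambda x: 1.001 ** x             # exponential: "infinite degree"

print(degree_at(poly, 1e6))   # approaches 2 as x grows (about 2.08 here)
print(degree_at(xlogx, 1e6))  # a bit above 1, converging slowly to 1
# For the exponential, the ratio keeps growing instead of converging:
print(degree_at(exp_, 1e4), degree_at(exp_, 1e5))
```

The polynomial's ratio settles near its degree, x·log x hovers just above 1 (matching "degree 1"), and the exponential's ratio diverges as x increases, illustrating what "infinite degree", i.e. super-polynomial time, means.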