Wikipedia:Reference desk/Archives/Mathematics/2010 August 12

= August 12 =

Efficiency of a factorization algorithm
Assuming a function Prime(x) that generates the xth prime, in order, for every x in {x | x > 0, x ∈ ℤ}, and does so in polynomial time, what would the efficiency (in layman's terms) be of an algorithm that: --Melab±1 ☎ 04:48, 12 August 2010 (UTC)
 * 1) is given a number N.
 * 2) exhaustively divides N by Prime(x) until N modulo Prime(x) does not equal zero.
 * 3) returns the result of each division as in number 2, each result being a non-distinct prime factor of N.
 * In layman's terms, it would be grossly inefficient. It's pretty much the same as other variants of trial division.—Emil J. 10:04, 12 August 2010 (UTC)
 * Its time function is basically the prime-counting function, which is asymptotic to x/log(x). However, I think the only way you could have your function Prime run in polynomial time is if it is simply looking up prime numbers in a pre-existing list (the only way to know what the xth prime is, is to generate that list, and I'm pretty sure you can't generate that list in polynomial time). --Tango (talk) 18:16, 12 August 2010 (UTC)
 * When I say assume Prime(x) runs in polynomial time, I also mean that I know there is no efficient prime generation formula. --Melab±1 ☎ 21:42, 12 August 2010 (UTC)
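To make the procedure in steps 1-3 concrete, here is a minimal, runnable sketch of trial division. Since no polynomial-time Prime(x) oracle is available to call, the sketch simply steps through 2 and the odd numbers as candidate divisors (a standard trial-division shortcut: once smaller factors are divided out, only genuine primes can divide what remains):

```python
def factor(n):
    """Return the non-distinct prime factors of n, smallest first,
    mirroring steps 1-3 of the question."""
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:        # step 2: divide out p while it divides n
            factors.append(p)    # step 3: record each prime factor found
            n //= p
        p += 1 if p == 2 else 2  # next candidate: 2, then the odd numbers
    if n > 1:
        factors.append(n)        # whatever remains is itself prime
    return factors

print(factor(360))  # [2, 2, 2, 3, 3, 5]
```

As the replies above note, this takes on the order of √N divisions in the worst case, which is exponential in the number of digits of N.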

Spaces of Countable tightness - Is the following example OK?
Hi, I'm trying to understand the notion of countable tightness (http://en.wikipedia.org/wiki/Countable_tightness). So I have tried to find a simple example of a space which is not countably tight and came up with the following example: Let X be the space of functions from R to {0,1}. Take the set A to be the set of all functions such that f(x) = 1 for a countable (which includes finite) set of x's and 0 otherwise. Then X is not a space of countable tightness, since A is not closed in X but the intersection of A with any countable set of functions U is closed in U. Is this example OK? Also, I am trying to figure out: is the space of ultrafilters on R, or on N for example, of countable tightness? Thanks! —Preceding unsigned comment added by Topologia clalit (talk • contribs) 08:50, 12 August 2010 (UTC)

Forgot to mention, I am using the point open topology here... Topologia clalit (talk) 08:54, 12 August 2010 (UTC)
 * Yes, this is an example. I prefer to think of Countable Tightness as saying that accumulation points are limits of countable sequences.  Then since A is dense, the constant 1 function is an accumulation point of A, but it is not a limit of any countable sequence from A.  —Preceding unsigned comment added by 203.97.79.114 (talk) 10:22, 12 August 2010 (UTC)

Yes, but isn't saying that accumulation points are limits of countable sequences the definition of a sequential space (http://en.wikipedia.org/wiki/Sequential_space)? I mean, if we use your definition, then what is the difference between a sequential space and a space with countable tightness? Topologia clalit (talk) 11:01, 12 August 2010 (UTC)
 * You're right, I misspoke. I should have said "are accumulation points of countable sets".  For an example of the distinction, let $$x$$ be a non-principal ultrafilter on $$\mathbb{N}$$.  Then $$x$$ is an accumulation point of $$\mathbb{N}$$, but for any subsequence of $$\mathbb{N}$$, choose $$A \in x$$ which omits infinitely many terms of the subsequence.  Then said subsequence is never eventually in $$[A]$$.
 * However, your example is still good. It's not hard to see that the constant 1 function is not an accumulation point of any countable subset of A.  —Preceding unsigned comment added by 203.97.79.114 (talk) 12:33, 12 August 2010 (UTC)
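To spell out that last step (a sketch of the missing argument): if $$B \subseteq A$$ is countable, the supports of the functions in B form a countable union of countable sets, so some $$x_0 \in \mathbb{R}$$ lies outside all of them. Then, in the point-open topology,

```latex
U \;=\; \{\, g \in X : g(x_0) = 1 \,\}
```

is an open neighbourhood of the constant 1 function that is disjoint from B, since every $$f \in B$$ has $$f(x_0) = 0$$. Hence the constant 1 function is not an accumulation point of B.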

Ok, thanks. So these two are examples of spaces without countable tightness, which helps me. Does anyone have an idea for a space which is of countable tightness but is not sequential? I'm still not sure I got the exact difference between these two definitions.. Thanks! Topologia clalit (talk) 13:18, 12 August 2010 (UTC)
 * Wait, I didn't say that $$\beta\mathbb{N}$$ has non-countable tightness; I'm not sure if it does or not.
 * As for an example which has countable tightness but is not sequential, the Arens-Fort space does it. As a countable space, it automatically has countable tightness.  On the other hand, (0,0) is in the closure of the rest of the space, but isn't the limit of any sequence: if the sequence hits a column infinitely often, the open set which omits that column is never eventually entered by the sequence; if it doesn't, the complement of the sequence is an open set containing (0,0).  —Preceding unsigned comment added by 203.97.79.114 (talk) 13:59, 12 August 2010 (UTC)

Thanks!!! Some of it is much clearer. But I think I still miss some things here.. I mean, the Arens-Fort space is countable, so can't we make a sequence out of the whole space? Is the problem that our whole-space sequence will "get in and out" an infinite number of times for any open neighborhood of (0,0)? I'm not quite sure I understand what a column means.. If our pairs are of the form (n,m), then a column means something like all the pairs of the form (n,5), for example? Topologia clalit (talk) 15:12, 12 August 2010 (UTC)
 * The issue is that the definition of sequence convergence requires that for every open neighborhood of (0,0), the sequence must at some point enter that neighborhood and never leave. So yes, the problem is that for some open neighborhoods, the sequence is going in and out an infinite number of times.
 * And yes, that's what's meant by a column. (I would have said pairs of the form (5,m), but it doesn't matter which version you go with, as long as you pick one and stick with it.)  —Preceding unsigned comment added by 130.195.253.1 (talk) 23:11, 12 August 2010 (UTC)

Got it. Thanks! Topologia clalit (talk) 06:21, 13 August 2010 (UTC)

Pearson product-moment correlation coefficient
According to Pearson product-moment correlation coefficient, the formula is:


 * $$r = \frac{\sum ^n _{i=1}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum ^n _{i=1}(X_i - \bar{X})^2} \sqrt{\sum ^n _{i=1}(Y_i - \bar{Y})^2}}. $$

However, according to the following University website, the formula is:


 * $$r = {\sum {(X - \mu_X)(Y - \mu_Y)} \over N\sigma_X\sigma_Y}$$

How can this be? - 114.76.235.170 (talk) 12:23, 12 August 2010 (UTC)
 * They're the same, since $$\sigma_X=\sqrt{\frac{\sum ^n _{i=1}(X_i - \bar{X})^2}{n}}$$. Algebraist 12:28, 12 August 2010 (UTC)
 * [ec] It's the same thing. The second version uses a somewhat odd notation, what they really mean is $$\sum_{i=1}^n {(X_i - \mu_X)(Y_i - \mu_Y) \over N\sigma_X\sigma_Y}$$. -- Meni Rosenfeld (talk) 12:31, 12 August 2010 (UTC)
 * Oh! Doh... silly me... thanks folks :-) P.S. thanks Meni, very helpful note on the notation. 114.76.235.170 (talk) 12:34, 12 August 2010 (UTC)
 * Er, btw, I stuffed up the math syntax... do you mean:


 * $$r = {\sum_{i=1}^n {(X_i - \mu_X)(Y_i - \mu_Y)} \over N\sigma_X\sigma_Y}$$


 * I'm quite new to the math syntax... - 114.76.235.170 (talk) 12:37, 12 August 2010 (UTC)
 * These, too, are the same thing - since $$N\sigma_X\sigma_Y$$ doesn't depend on i, you have
 * $$\sum_{i=1}^n {(X_i - \mu_X)(Y_i - \mu_Y) \over N\sigma_X\sigma_Y} = {\sum_{i=1}^n {(X_i - \mu_X)(Y_i - \mu_Y)} \over N\sigma_X\sigma_Y}$$.
 * I used the original version you wrote. -- Meni Rosenfeld (talk) 12:43, 12 August 2010 (UTC)
 * Oh brother, that's the second time I've done that... I asked almost exactly the same questions about a week ago about the sum of operator, I must have promptly forgotten/not noticed! Thanks though :-) 114.76.235.170 (talk) 14:05, 12 August 2010 (UTC)

I think we have a problem here. The definition above uses r, but that's meant to be used for the sample correlation coefficient! That formula above is for the population correlation coefficient. Shouldn't this be using ρ instead of r? - 114.76.235.170 (talk) 14:41, 12 August 2010 (UTC)
 * Hold the phone!

In fact, my textbook says that it is:


 * $$r = {n(\sum{xy})-(\sum{x})(\sum{y}) \over \sqrt{[n(\sum{x}^2)-(\sum{x})^2] \times [n(\sum{y}^2)-(\sum{y})^2]}}$$

What am I missing? - 114.76.235.170 (talk) 14:54, 12 August 2010 (UTC)
 * These formulas work for either the sample coefficient (in which case r is appropriate) or for the population coefficient when the population is finite. I don't think the distinction is very important in practice, so it's possible some texts don't specify which they are talking about.
 * The formula in your textbook is - you've guessed it - the same as the other two formulas. Showing that this is so is a nice exercise. -- Meni Rosenfeld (talk) 15:44, 12 August 2010 (UTC)
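As a quick numerical sanity check (on made-up data, reading $$\mu$$ as the sample mean and $$\sigma$$ as the standard deviation computed with a divisor of n, as on the linked page), all three formulas agree to rounding error:

```python
import math
import random

# Made-up sample data; any data would do for checking formula equivalence.
random.seed(0)
n = 50
xs = [random.random() for _ in range(n)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]

mx = sum(xs) / n   # sample mean of X (the X-bar / mu_X of the formulas)
my = sum(ys) / n

# Formula 1: article version, sum of products over root-sums-of-squares.
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
r1 = sxy / (math.sqrt(sxx) * math.sqrt(syy))

# Formula 2: university-site version, with sigma = sqrt(SS/n) and a factor N.
sigma_x = math.sqrt(sxx / n)
sigma_y = math.sqrt(syy / n)
r2 = sxy / (n * sigma_x * sigma_y)

# Formula 3: the textbook "computational" form.
r3 = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / math.sqrt(
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
    * (n * sum(y * y for y in ys) - sum(ys) ** 2))

print(f"r1={r1:.10f}  r2={r2:.10f}  r3={r3:.10f}")
```

In formula 2, the factor n in the denominator cancels the two 1/√n factors hidden inside the sigmas, which is exactly why it equals formula 1.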

Someone's not being entirely careful about the sample-versus-population issue. I'd use the Greek letters for the population means, not for sample means. Michael Hardy (talk) 17:04, 12 August 2010 (UTC)
 * I don't understand. This point has already been raised by the OP, and as far as I can see, all the formulas mentioned are for a sample mean (assuming the sums are over the sample), so our use of the Latin r is appropriate. -- Meni Rosenfeld (talk) 17:55, 12 August 2010 (UTC)
 * Meni Rosenfeld, $$\mu$$ is usually reserved for the population mean and $$\bar{X}$$ is often used for the sample mean. In the above, you appear to have assumed that they are equal. 018 (talk) 18:35, 15 August 2010 (UTC)

Quadrilateral proof
Imagine a random quadrilateral (not parallelogram, not kite etc.). Prove that the sum of 3 sides is longer than the remaining side.--Mikespedia is on Wikipedia! 17:32, 12 August 2010 (UTC)
 * It's meaningless to speak of random quadrilaterals until you specify the distribution they are drawn from; there is no canonical choice here. Fortunately, it does not matter, since the property you mention holds for all quadrilaterals, and you can prove it using the triangle inequality.—Emil J. 17:43, 12 August 2010 (UTC)


 * Let the vertices be A, B, C, and D, connected by edges in that order. If the quadrilateral is convex, then you can always divide it into two triangles in two ways: either you can consider ABC and ACD, or you can consider ABD and BCD. If the quadrilateral is not convex, you can only construct one of these pairs. Hence, in any case, you can divide the quadrilateral into two triangles in at least one way. Assume that you can divide it into ABC and ACD. By the triangle inequality, AB + BC > AC, and AC + CD > AD. Add these inequalities to obtain AB + BC + AC + CD > AC + AD, i.e. AB + BC + CD > AD, Q.E.D. --Andreas Rejbrand (talk) 17:52, 12 August 2010 (UTC)
 * I think that's what the OP meant to say, without knowing the terminology - that it holds for an arbitrary quadrilateral and not only for some specific type. -- Meni Rosenfeld (talk) 17:57, 12 August 2010 (UTC)

As "they" say, "A straight line is the shortest distance between two points." That "remaining side" is a straight line. So there. Michael Hardy (talk) 23:34, 13 August 2010 (UTC)

Proof that P != NP
Could someone explain the proof in less-advanced terms? I'm not a TCS expert and the Wikipedia article only has two sentences about the proof. --70.134.48.188 (talk) 19:29, 12 August 2010 (UTC)
 * There is a question about this earlier on this page. In addition, there is StackOverflow. --Andreas Rejbrand (talk) 19:31, 12 August 2010 (UTC)
 * That was asking to explain P and NP. I know what P and NP are, but specifically, I need to understand the proof that P != NP. --70.134.48.188 (talk) 19:34, 12 August 2010 (UTC)
 * There is no proof yet. At least not one that's accepted widely. You've probably heard of the new paper by Vinay Deolalikar. See Vinay_Deolalikar for some words about it. Current consensus seems to be that there might be some interesting new ideas in the paper, but they probably do not constitute a valid proof. Typically it takes the experts several months at least to confirm or refute a proof like this, so we'll have to wait to be sure. Staecker (talk) 00:20, 13 August 2010 (UTC)
 * The best place to follow developments with this proof is Dick Lipton's blog, http://rjlipton.wordpress.com . IMHO the proof is basically toast at this point.  Yes it takes months to confirm a proof like this, but refuting can be much faster than confirming, as we're seeing.  67.122.209.167 (talk) 09:43, 13 August 2010 (UTC)
 * My brother wrote an explanation that I found helpful. Paul (Stansifer) 05:44, 14 August 2010 (UTC)
 * you should mention he's your brother, so we know that there is a personal factor. 92.230.66.177 (talk) 18:41, 18 August 2010 (UTC)

Strong Convergence of Operators
Ok, so a couple of us are studying for an upcoming analysis test and came upon a question which we couldn't answer so we appeal to higher authorities here. The question asks for an example of a Hilbert space H and a sequence of operators $$(S_n)$$ on H such that

i) $$||S_n|| \leq 1$$ for all n=1,2,3,...

ii) the operators $$V_N=\sum_{n=1}^N \frac{S_n}{n}$$ converge strongly as $$N\rightarrow\infty$$

iii) The strong limit of the operators $$V_N$$ is not compact.

Any help/hints would be greatly appreciated. Thanks! 174.29.63.159 (talk) 19:54, 12 August 2010 (UTC)
 * What's wrong with letting S1 be the identity and all the other Si be zero? Algebraist 21:38, 12 August 2010 (UTC)

Wow, I don't think I have ever felt dumber than this before.174.29.63.159 (talk) 01:39, 13 August 2010 (UTC)

Wait, sorry! The example you gave is perfect but I asked the wrong question. I wanted to ask for a sequence of COMPACT operators $$(S_n)$$ each with norm less than or equal to one such that their partial sums (properly scaled) converge strongly to something not compact. Thanks! 174.29.63.159 (talk) 04:53, 13 August 2010 (UTC)
 * What does strong convergence mean, then? If it means convergence in the operator norm, then this is impossible (a limit of compact operators is compact). If it just means pointwise convergence, then you can get a noncompact limit by exploiting the divergence of the harmonic series:
 * Let H be ℓ2, and let Pi be projection onto the ith basis vector. Let S1=P1, then let enough of the Si be P2 for the sum to contain a P2-component larger than 1: that is, since 1/2+1/3<1<1/2+1/3+1/4, we let S2=S3=S4=P2. Then let S5 through S12 all be P3 (since 1/5+1/6+1/7+1/8+1/9+1/10+1/11<1<1/5+1/6+1/7+1/8+1/9+1/10+1/11+1/12), and so on. The divergence of the harmonic series means we can do this forever, and it's easy to see the resulting series converges pointwise. The limit has every basis vector as an eigenvector with eigenvalue ≥ 1, so cannot be compact. Algebraist 10:02, 13 August 2010 (UTC)
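The block construction above can be checked mechanically. This sketch (with a hypothetical helper name `blocks`) computes, for each projection $$P_j$$, the run of indices n with $$S_n = P_j$$, extending the run until its harmonic weight $$\sum 1/n$$ first exceeds 1; each basis vector is then an eigenvector of the limit with eigenvalue at least 1, so the eigenvalues do not tend to 0 and the limit cannot be compact:

```python
from fractions import Fraction

def blocks(num_blocks):
    """For P_1, P_2, ..., return (indices, weight), where `indices` is the
    run of n with S_n = P_j and `weight` is the exact sum of 1/n over it."""
    out = [([1], Fraction(1))]  # S_1 = P_1 carries weight 1/1 = 1
    n = 2
    for _ in range(num_blocks - 1):
        idx, total = [], Fraction(0)
        while total <= 1:            # keep adding 1/n until the weight
            idx.append(n)            # passes 1; the harmonic series
            total += Fraction(1, n)  # diverges, so this always terminates
            n += 1
        out.append((idx, total))
    return out

for j, (idx, total) in enumerate(blocks(4), start=1):
    print(f"S_n = P_{j} for n = {idx[0]}..{idx[-1]},  weight = {float(total):.4f}")
```

The first three blocks come out as {1}, {2,3,4}, and {5,...,12}, matching the inequalities quoted above.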

family of hyperplanes
What is the topology of the "space" of p-planes through the origin in n-space?

This question is motivated by the problem of choosing a basis for orthographic projection from n-space into 2- or 3-space; I want to avoid some parts of the solution-space, and to do that I need to know what the solution-space is! —Tamfang (talk) 21:46, 12 August 2010 (UTC)


 * This space is normally called a Grassmannian. That article mentions several ways of thinking about it. Algebraist 22:02, 12 August 2010 (UTC)
 * I read it (insofar as I can) but feel no wiser. Oh well. —Tamfang (talk) 05:44, 13 August 2010 (UTC)


 * Basically, your intuition of when two planes are close to each other is correct; with lines, for example, the smaller the angle between them, the closer they are. —Preceding unsigned comment added by 130.195.253.1 (talk) 23:20, 12 August 2010 (UTC)
 * Uh, yeah, I was pretty confident in that. —Tamfang (talk) 05:44, 13 August 2010 (UTC)
 * There is a definition for planes that's very similar to the one for lines.  Given planes A and B, we would like to define an "angle" between them.  The way to do this is as follows.  For every nonzero vector x in A, consider the least angle $$\alpha_x$$ formed by x and any nonzero vector y of B.  Now take the maximum of $$\alpha_x$$ over all nonzero x in A.  This defines a metric on the space of all planes.  The underlying topology is what you want.  Tkuvho (talk) 10:59, 13 August 2010 (UTC)
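For the special case p = 1 (lines through the origin), the metric above reduces to the ordinary angle between lines. A tiny sketch, assuming vectors given as tuples (for general p-planes one would compute principal angles between the subspaces instead):

```python
import math

def line_angle(u, v):
    """Angle in [0, pi/2] between the lines spanned by u and v --
    the metric described above, specialized to 1-dimensional subspaces."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = min(1.0, abs(dot) / (nu * nv))  # |cos| because lines are unoriented
    return math.acos(c)

print(line_angle((1, 0), (1, 1)))  # pi/4, about 0.7854
```

Note the absolute value: u and -u span the same line, so the angle between lines never exceeds a right angle.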
 * Next time I want to be told what I already know, I'll know where to ask. —Tamfang (talk) 08:55, 15 August 2010 (UTC)




 * An engineer, a physicist, and a mathematician are discussing how to visualize four dimensions:


 * Engineer: I never really get it.


 * Physicist: Oh it's really easy, just imagine three dimensional space over a time- that adds your fourth dimension.


 * Mathematician: No, it's way easier than that; just imagine $$\mathbb{R}^n$$ then set n equal to 4. PS  T  08:53, 13 August 2010 (UTC)
 * Okay, then here's a technical answer: Consider the space of injective linear maps from $$V^p$$ to $$V^n$$. Topologize this with the compact-open topology.  Identify two maps if they have the same image, and quotient out by this identification.  —Preceding unsigned comment added by 203.97.79.114 (talk) 09:41, 13 August 2010 (UTC)