Wikipedia:Reference desk/Archives/Mathematics/2010 January 5

= January 5 =

Convergence
I am working through a proof and I am at a part that might be pretty simple but I am a bit confused. It's probably stuff I understood well a few months ago and haven't worked on since and now I feel silly. Here is the expression I am working with, and what the proof says right after it:
 * $$\sum_{n=1}^\infty \frac{1}{n^{2 + 2\epsilon}} + \sum_{m=1}^\infty \sum_{n=-\infty}^\infty \bigg[ \frac{1}{(mz + n)^2 |mz + n|^{2\epsilon}} - \int_n^{n+1} \frac{dt}{(mz + t)^2 |mz + t|^{2\epsilon}} \bigg], \qquad \epsilon > 0,\ z \in \mathbb{H} \text{ (upper half-plane)}.$$

Both sums on the right converge absolutely and locally uniformly for $$\epsilon > -\frac{1}{2}$$ (the second one because the expression in square brackets is $$O(|mz + n|^{-3 - 2\epsilon})$$ by the mean-value theorem, which tells us that $$f(t) - f(n)$$ for any differentiable function $$f$$ is bounded in $$n \leq t \leq n + 1$$ by $$\max_{n \leq u \leq n + 1} |f'(u)|$$), so the limit of the expression on the right as $$\epsilon \to 0$$ exists and can be obtained simply by putting $$\epsilon = 0$$ in each term.

So, I get the first sum, that's easy. The second one confuses me. I think I understand partially. I believe I understand the mean value theorem part as we would have a $$c$$ such that $$f'(c) = \frac{f(t) - f(n)}{t - n}$$ and $$t - n \leq 1$$. So, for that $$c$$, we have $$f(t) - f(n) \leq (t - n)f'(c) \leq f'(c)$$. Then, the derivative of the integral is 0 and so the derivative of the thing in square brackets is just the derivative of the other term. I guess I definitely don't understand why showing the limit exists tells us we can just plug in $$\epsilon = 0$$. I also don't understand if the double sum screws things up. Thanks. StatisticsMan (talk) 03:00, 5 January 2010 (UTC)


 * Maybe a simpler way to say it is $$\left|f(n) - \int_n^{n+1} f(t) dt\right| \leq \int_n^{n+1}|f(n) - f(t)| dt \leq \sup_{n \leq u \leq n + 1} |f(n) - f(u)| \leq \sup_{n \leq u \leq n + 1} |f'(u)|.$$
 * The fact that there's a double sum does come into play, but you can show that $$\sum_a \sum_b \frac{1}{|a+b|^s}$$ converges for s > 2, so you're still fine there with s = 3 + 2ε.
 * For arguing that thing is continuous in ε near zero, I'm pretty sure there must be some sort of theorem about the continuity of a series of continuous functions that are bounded by an absolutely convergent series or something like that. I'm not really sure exactly what it is, but it shouldn't be too hard to show if you don't have anything like that.  The sum of all but finitely many terms is small and a finite sum of continuous functions is continuous.  Probably someone will come along with a better informed answer to that. Rckrone (talk) 06:10, 5 January 2010 (UTC)
 * Yes, you are talking about the so-called Weierstrass M-test. --pm a (talk)  13:48, 5 January 2010 (UTC)
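A quick numerical illustration of Rckrone's inequality (my own sketch, not part of the original thread), using the sample function f(t) = 1/t²; the function and variable names are my own:

```python
# Check |f(n) - integral_n^{n+1} f(t) dt| <= sup_{n<=u<=n+1} |f'(u)|
# for f(t) = 1/t^2 (any smooth decreasing f would do just as well).

def f(t):
    return 1.0 / t**2

def fprime(t):
    return -2.0 / t**3

def check_bound(n, steps=10000):
    h = 1.0 / steps
    # left Riemann sum approximating the integral of f over [n, n+1]
    integral = sum(f(n + k * h) for k in range(steps)) * h
    # |f'| is decreasing on [n, n+1] here, so its sup is attained at t = n
    return abs(f(n) - integral) <= abs(fprime(n))

print(all(check_bound(n) for n in range(1, 10)))
```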


 * Does the fact that our function is complex-valued and not real-valued affect the mean value theorem? StatisticsMan (talk) 14:58, 6 January 2010 (UTC)
 * No, because the mean value theorem in the form of the inequality $$\|f(b)-f(a)\|\leq (b-a)\sup_{a<t<b} \|f^{\,\prime}(t)\|$$ holds true for any normed space valued function f continuous on the interval [a,b] and differentiable on (a,b) (BTW even continuous and differentiable up to a countable set is OK, and even absolutely continuous and differentiable a.e. is OK). What is no longer true for vector valued functions, even for E = $$\R^2$$, is that there is a point $$c\in(a,b)$$ with $$f(b)-f(a)=(b-a)f^{\,\prime}(c)$$.
 * About point 2 above: is this supposed to say $$\sum_a \sum_b \frac{1}{|az+b|^s}$$ (adding in the z)??? That is what we have in this situation at least and I'm pretty sure it is true. I know the Weierstrass M test for a single sum and it seems pretty clear that we can extend it to 2 sums using the 1 sum version because we can just take the inside sum of constants as a new constant. So, we can use that here. But, I guess I don't understand how that helps. This gives us uniform convergence of continuous functions (in t), so that what it converges to is continuous (in t). How does that help us plug in a certain value of $$\epsilon$$? StatisticsMan (talk) 15:47, 18 January 2010 (UTC)

Let's define, for $$(z,\epsilon)\in \mathbb{H}\times(-1/2\,,+\infty)$$ and for $$(n,m)\in\Z\times\Z_+$$

$$a_{n,m}(z,\epsilon):=\frac{1}{(mz + n)^2 |mz + n|^{2\epsilon}} - \int_n^{n+1}\frac{dt}{(mz + t)^2 |mz + t|^{2\epsilon}}.$$

For any $$(z,\epsilon)$$ we have, by the mean value theorem (see Rckrone bound)

$$|a_{n,m}(z,\epsilon)|\leq(2+2|\epsilon|)\sup_{n\leq t\leq n+1}|mz+t|^{-3-2\epsilon}.$$
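A numerical spot check of this bound (my own sketch, not part of the thread), for the sample values z = 0.3 + i, m = 2, ε = 0.1; the integral and the sup are approximated on grids:

```python
# Verify |a_{n,m}(z, eps)| <= (2 + 2|eps|) * sup_{n<=t<=n+1} |mz + t|^{-3-2eps}
# for a few values of n, at z = 0.3 + 1j, m = 2, eps = 0.1.

z, m, eps = 0.3 + 1.0j, 2, 0.1

def f(t):
    w = m * z + t
    return 1.0 / (w**2 * abs(w)**(2 * eps))

def a(n, steps=4000):
    # trapezoidal rule for the integral of f over [n, n+1]
    h = 1.0 / steps
    integral = (f(n) + f(n + 1)) * h / 2 + sum(f(n + k * h) for k in range(1, steps)) * h
    return f(n) - integral

def bound(n, grid=400):
    # approximate the sup over [n, n+1] on a grid
    sup = max(abs(m * z + n + k / grid) ** (-3 - 2 * eps) for k in range(grid + 1))
    return (2 + 2 * abs(eps)) * sup

print(all(abs(a(n)) <= bound(n) for n in range(-5, 6)))
```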

We wish to show that the family $$\{a_{n,m}\}_{(n,m)\in\Z\times\Z_+}$$ is locally normally summable in the uniform norm, that is, for any $$(z,\epsilon)\in\mathbb{H}\times(-1/2\,,+\infty)$$ there exists a neighborhood $$U$$ of $$(z,\epsilon)$$ such that

$$\sum_{(n,m)\in\Z\times\Z_+}\|a_{n,m}\|_{U,\infty}<+\infty.$$

This implies that the double sum

$$\sum_{(n,m)\in\Z\times\Z_+} a_{n,m}(z,\epsilon)$$

converges uniformly to a continuous function on $$\mathbb{H}\times(-1/2\,,+\infty).$$

Consider an open covering of $$\mathbb{H}\times(-1/2\,,+\infty)$$ by open sets of the form

$$U_{a,b,c,d}:=\{(z,\epsilon)\in\Complex\times\R\;:\; |\mathrm{Re}(z)|<a,\quad \mathrm{Im}(z)>b,\quad c<\epsilon<d\},$$ with $$a>1>b>0,\;$$ and  $$-1/2<c<1<d.$$ Let $$\,\textstyle U:=U_{a,b,c,d}$$ be one of these.

It is convenient to partition the set of indices into the two subsets:

$$J_1:=\{(n,m)\in\Z\times\Z_+\;:\; -2(ma+1)/3<n<2ma\}$$

$$J_2:= \Z\times\Z_+\setminus J_1.$$

Note that, for any $$m\in\Z_+$$ there are at most $$\left\lfloor\frac{8ma+2}{3}\right\rfloor$$ values of $$n\in\Z$$ such that $$(n,m)\in J_1,$$ and in any case $$n=-1,0,1$$ are among them. Since for $$(z,\epsilon)\in U$$ and $$n\leq t \leq n+1$$ we have $$|mz+t|\geq mb,$$ we can bound the sum on $$J_1$$ as follows:

$$\sum_{(n,m)\in J_1}\|a_{n,m}\|_{U,\infty}\leq (2+2d)b^{-3-2d}\sum_{m=1}^\infty \frac{8ma+2}{3}m^{-3-2c}<+\infty.$$

On the other hand, for all $$(n,m)\in J_2,\,$$ either $$ n\geq 2ma$$ or $$n\leq -2(ma+1)/3.$$ In both cases, for any $$(z,\epsilon)\in U$$ and $$n\leq t \leq n+1$$

$$|mz+t|^2=(m\mathrm{Re}\,z+t)^2+m^2b^2\geq (n/2)^2 +m^2b^2\geq 1.$$

Note that for any $$(n,m)\in J_2$$ one has $$|n|\geq 2$$ so $$n^2/4\geq1$$ and the last inequality holds.

Thus

$$\sum_{(n,m)\in J_2}\|a_{n,m}\|_{U,\infty}\leq (2+2d)\sum_{(n,m)\in J_2} \left(\frac{n^2}{4} + m^2b^2\right)^{-3-2c}<+\infty.$$ --pm a 03:16, 20 January 2010 (UTC) PS: I re-edited in my talk page a corrected version (here there are minor defects and typos) --pm a  14:39, 20 January 2010 (UTC)

Question on combinations
I suspect there's an easy formula for figuring this out, but I can't figure out what it is:

Given n teams in a league (assume n is even), how many possible opening day match-ups are there? The order of each match-up determines which is the home team, so Team-A vs. Team-B is different from Team-B vs. Team-A. However, it doesn't matter in which order the matches are listed: A v B and C v D is the same as C v D and A v B.

It's not a simple combination, nor a permutation that I can figure. Any ideas? –RHolton ≡ – 17:40, 5 January 2010 (UTC)


 * The first team has n-1 choices for who to play and then 2 choices for who is the home team. Once that's decided, the next team on the list that isn't already scheduled for a game has n-3 choices for who to play and 2 choices for who is the home team.  The next has n-5 choices, etc.  So there are $$\prod_{k=1}^{n/2}2(2k-1)$$ possibilities assuming everyone plays. Rckrone (talk) 17:57, 5 January 2010 (UTC)
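Rckrone's recursion can be checked by brute force (my own sketch, not part of the thread; the function names are my own). It also confirms the identity N·(n/2)! = n! mentioned further down:

```python
from math import factorial, prod

def count_matchups(teams):
    # Take the lowest-numbered unscheduled team, pick its opponent,
    # then pick which of the two plays at home (2 choices each game).
    if not teams:
        return 1
    rest = teams[1:]
    return sum(2 * count_matchups(rest[:i] + rest[i+1:]) for i in range(len(rest)))

def formula(n):
    # Rckrone's product: prod_{k=1}^{n/2} 2(2k-1)
    return prod(2 * (2 * k - 1) for k in range(1, n // 2 + 1))

for n in (2, 4, 6, 8):
    assert count_matchups(list(range(n))) == formula(n)
    assert formula(n) * factorial(n // 2) == factorial(n)  # pm a's identity below
print([formula(n) for n in (2, 4, 6, 8)])
```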

First find the number of (unordered) partitions of a set of size n into sets of size 2. Then you can multiply that by 2n/2 (2 choices for each of n/2 pairs) to get the number you're looking for. To be continued..... Michael Hardy (talk) 19:23, 5 January 2010 (UTC)
 * ...and here is something possibly somewhat relevant.
 * But I think Rckone's answer may be enough for your purpose. Michael Hardy (talk) 19:26, 5 January 2010 (UTC)


 * Also, if N is such a number, N(n/2)!=n! (permuting the n/2 pairs in each of the N sets of pairs you get every permutation of the n teams, each once).--pm a (talk)  19:54, 5 January 2010 (UTC)
 * Also you may first choose the subset of n/2 home teams, and then select a bijection with its complement (there are of course as many such bijections as there are (n/2)-permutations); and of course the result is again Rckrone's formula.--pm a (talk)  20:03, 5 January 2010 (UTC)

I like Maple's answer for Rckrone's formula. Since n is even, let's assume that n = 2m where m is a positive integer. Then Maple gives
 * $$ \prod_{k=1}^m 2(2k-1) = \frac{4^m}{\sqrt{\pi}}\,\Gamma\!\left(m+\frac{1}{2}\right), $$

where Γ is Euler's Gamma function. More beautiful, but far less applicable. Dr Dec (Talk)  20:15, 5 January 2010 (UTC)
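Maple's closed form is easy to confirm numerically (my own sketch, not part of the thread); it also simplifies to (2m)!/m! via the standard identity Γ(m + 1/2) = (2m)!√π/(4^m m!):

```python
from math import factorial, gamma, isclose, pi, prod, sqrt

# Compare the product prod_{k=1}^m 2(2k-1) with 4^m * Gamma(m + 1/2) / sqrt(pi),
# and with the equivalent integer form (2m)!/m!.
for m in range(1, 10):
    product = prod(2 * (2 * k - 1) for k in range(1, m + 1))
    closed_form = 4**m / sqrt(pi) * gamma(m + 0.5)
    assert isclose(product, closed_form)
    assert product == factorial(2 * m) // factorial(m)
print("closed form verified for m = 1..9")
```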

Möbius Maps
Hi. I am trying to find the group G of Möbius transformations that map the set {0,1,$$\infty$$} onto itself. Now, I have looked through my notes (this is a course on group theory incidentally) and have found the general function to be $$g(z) = \frac{z - z_0}{z- z_{\infty}} \frac{z_1 - z_{\infty}}{z_1 - z_0} $$, where you choose your $$z_i$$ accordingly. Problems obviously arise though when you pick one z to be $$\infty$$; indeed, what meaning does $$g(z) = \frac{z - z_0}{z- {\infty}} \frac{z_1 - {\infty}}{z_1 - z_0} $$ have? So, to solve this problem my lecturer told us that, in the case $$z_{\infty}=\infty$$ for example, we should rewrite the function as $$g_1(z) = \frac{z-z_0}{z_1-z_0}$$, which obviously takes $$z_0$$ to 0 and $$z_1$$ to 1. What I don't understand is how this is, in general, supposed to take $$z_{\infty}$$ to $${\infty}$$. How does this work when $$z_0=1$$ and $$z_1=0$$? By my logic, for this case $$g(z_{\infty})=-\infty$$, most definitely not a result I want. Can anyone help me out here? Thanks 92.0.129.48 (talk) 20:17, 5 January 2010 (UTC)
 * Check this. Does it give you a clue? Also note that $$\infty$$ here is the point that compactifies the complex plane to get the Riemann sphere. There's no $$-\infty$$ (or if you like, it coincides with $$\infty,$$ and is one of the two fixed points of the involution $$ z\mapsto -z$$).--pm a  (talk)  20:38, 5 January 2010 (UTC)


 * If $$z_{\infty}=\infty$$ then you are restricting yourself to the sub-set of Möbius maps that fix $$\infty$$ - these are the affine maps $$g(z) = az+b$$. To map $$z_0$$ to 0 and $$z_1$$ to 1 we need $$a=\frac{1}{z_1-z_0}$$ and $$b=\frac{z_0}{z_0-z_1}$$, so you have $$g(z) = \frac{z-z_0}{z_1-z_0}$$, as you said. In particular, if $$z_0=1$$ and $$z_1=0$$ then $$g(z)=1-z$$. Geometrically, in the complex plane, this is a rotation through 180 degrees about the point $$z=\frac{1}{2}$$. Gandalf61 (talk) 08:27, 6 January 2010 (UTC)
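A small check of Gandalf61's affine maps (my own sketch, not part of the thread): g(z) = (z - z₀)/(z₁ - z₀) sends z₀ to 0 and z₁ to 1, and the case z₀ = 1, z₁ = 0 reduces to g(z) = 1 - z, the half-turn z ↦ 2c - z about c = 1/2:

```python
def g(z, z0, z1):
    # the affine Moebius map sending z0 -> 0 and z1 -> 1 (fixes infinity)
    return (z - z0) / (z1 - z0)

# general property, for an arbitrary sample pair z0, z1
z0, z1 = 2 + 3j, -1 + 1j
assert g(z0, z0, z1) == 0 and g(z1, z0, z1) == 1

# the z0 = 1, z1 = 0 case: g(z) = 1 - z, a 180-degree rotation about 1/2
for z in (0.25 + 0.5j, 2 - 1j, 1j):
    assert g(z, 1, 0) == 1 - z
    assert g(z, 1, 0) == 2 * 0.5 - z  # half-turn about c = 1/2: z -> 2c - z
print("affine map checks pass")
```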


 * A maybe-obvious-but-maybe-worth-recalling remark. Since a Möbius map is determined by the images of three points, it should be clear that the subgroup G is isomorphic to the symmetric group S3. It is generated e.g. by the maps 1-z and 1/z, the map 1-z corresponding to a transposition (0,1)(∞), and 1/z corresponding to (0,∞)(1). You may enjoy finding the other 4 maps by means of compositions of these, together with the associated permutations of {0,1,∞}. --pm a  10:28, 6 January 2010 (UTC)
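pm a's remark made concrete (my own sketch, not part of the thread): encoding the action of 1-z and 1/z on {0, 1, ∞} as dicts, their closure under composition has exactly 6 elements, matching S3:

```python
from itertools import product

s = {'0': '1', '1': '0', 'inf': 'inf'}   # z -> 1 - z, the transposition (0 1)
t = {'0': 'inf', '1': '1', 'inf': '0'}   # z -> 1/z, the transposition (0 inf)

def compose(f, g):
    # (f o g)(x) = f(g(x)), as a dict on the three symbols
    return {x: f[g[x]] for x in g}

# close the generating set under composition (dicts frozen as sorted tuples)
group = {tuple(sorted(s.items())), tuple(sorted(t.items()))}
changed = True
while changed:
    changed = False
    for a, b in product(list(group), repeat=2):
        c = tuple(sorted(compose(dict(a), dict(b)).items()))
        if c not in group:
            group.add(c)
            changed = True
print(len(group))  # all 6 permutations of {0, 1, inf}, as in S3
```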


 * Thank you both for your help. On a related note though, what exactly is the order of a Möbius map f(z)? Is it just how many times must you apply f to reach the identity? Thanks 92.0.129.48 (talk) 19:45, 6 January 2010 (UTC)
 * Yepp. Note that order has a lot of meanings in maths; but here the group theoretic one (that is the one you mention) is I think the only reasonable one. --pm a 22:34, 6 January 2010 (UTC)

NP and NPC
Just wondering, because it is not specifically stated in the NP and NP-complete articles... Does a solution to an NP problem imply that all NPC problems are solvable? If it is proven that there is no solution to an NP problem, does it imply that there is absolutely no solution to all NPC problems? -- k a i n a w ™ 21:50, 5 January 2010 (UTC)
 * I think you have the wrong idea about what NP means. All P problems are automatically NP.  Problems that are not NP are harder than NP problems, not easier. --Trovatore (talk) 21:59, 5 January 2010 (UTC)


 * I wasn't wondering about NP and P. I was wondering if the following statement is reversible... Solving (in polynomial time) a single NP-complete problem proves that there is a polynomial-time solution to all NP problems.  By "reversible", I mean is the following true: Proving that there is no polynomial time solution to an NP problem proves that there is no polynomial-time solution to any NP-complete problem. --  k a i n a w ™ 22:09, 5 January 2010 (UTC)
 * Sure. That's just the contrapositive of the original statement.  You don't need to know anything about P and NP for that. --Trovatore (talk) 22:13, 5 January 2010 (UTC)
 * That is what I thought, but I wasn't certain that there wasn't some rarely mentioned snag in the whole thing that made the contrapositive incorrect. -- k a i n a w ™ 22:23, 5 January 2010 (UTC)
 * For what it's worth, your statement is a more precise fit for "NP-hard" than for "NP-complete"; NP-complete is just the intersection of NP-hard and NP. (In case it's confusing not to state it explicitly, NP-hard is not a subset of NP.)  --Tardis (talk) 22:30, 5 January 2010 (UTC)