Wikipedia:Reference desk/Archives/Mathematics/2011 June 11

= June 11 =

Proof of Inequality
How to prove the following inequality?

If $$p>0$$ is a real number, then there exists a constant $$C_p$$, depending only on $$p$$, such that for all real numbers $$x,y\geq 0$$, we have the inequality:

$$x^p + y^p \leq C_p(x+y)^p$$

This isn't homework. I was reading a proof in a paper and I think this inequality is used and I'm wondering how to prove it. Can anyone enlighten me? — Preceding unsigned comment added by 49.2.4.186 (talk) 08:47, 11 June 2011 (UTC)
 * As stated, it isn't true. For $$p = 1/2$$, $$y = 1$$, there's no $$C_p$$ that works for all $$x$$.--203.97.79.114 (talk) 12:32, 11 June 2011 (UTC)
 * Sorry, ignore that. I just had a really dumb moment.--203.97.79.114 (talk) 12:43, 11 June 2011 (UTC)
 * Okay, trying again. First, if $$p \geq 1$$, then $$C_p = 1$$ will suffice: the inequality clearly holds for $$x = 0$$, and the derivative of the right-hand side (with respect to $$x$$) is always at least that of the left-hand side.
 * If $$p < 1$$, then $$C_p = 2$$ will suffice. By symmetry, let's assume $$x \geq y$$.  If $$x = y$$, then the inequality becomes $$2x^p \leq 2\cdot 2^p x^p$$, and since $$p > 0$$, this holds.  Now we again take the derivatives with respect to $$x$$.  The left gives us $$px^{p-1}$$, while the right gives $$2p(x+y)^{p-1} \geq 2p(2x)^{p-1} = 2^p\,px^{p-1} > px^{p-1}$$, since $$2^p > 1$$.--203.97.79.114 (talk) 12:55, 11 June 2011 (UTC)
 * Excellent. Thanks! It didn't occur to me to differentiate. But do you think this can be proved without differentiation? This is just out of curiosity: I understand your proof and am content with it, but I'd like to know whether there is a nice trick that solves it. — Preceding unsigned comment added by 49.2.4.186 (talk) 02:04, 12 June 2011 (UTC)


 * I think perhaps an easier approach is to divide both sides by $$y^p$$ (for $$y > 0$$), which converts both sides into functions of the single variable $$x/y$$. Looie496 (talk) 16:44, 11 June 2011 (UTC)
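As a sanity check, the constants from the argument above can be tested numerically. A minimal sketch in Python (the sampling ranges and trial count are arbitrary choices):

```python
import random

def C_p(p):
    # Constants from the argument above: 1 suffices for p >= 1, 2 for 0 < p < 1.
    return 1.0 if p >= 1 else 2.0

random.seed(0)
for _ in range(10_000):
    p = random.uniform(0.01, 5.0)
    x = random.uniform(0.0, 100.0)
    y = random.uniform(0.0, 100.0)
    # A small tolerance guards against floating-point rounding.
    assert x**p + y**p <= C_p(p) * (x + y)**p + 1e-9
print("inequality held in all trials")
```

For $$p < 1$$ the sharp constant is in fact $$2^{1-p}$$ (attained at $$x = y$$), but $$2$$ is enough for the bound claimed above.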

Alternative Proof
Does anyone know of a way to evaluate the following:
 * $$ \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k^2} \right) $$

that does not involve Fourier series? — Fly by Night  ( talk )  21:48, 11 June 2011 (UTC)


 * By evaluating the double integral $$\int_0^1\int_0^1 \frac{dx\,dy}{1-xy}$$.  Sławomir Biały  (talk) 22:33, 11 June 2011 (UTC)
 * Sławomir, how does that double integral relate to the sum? I don't see how the dilog function relates to the sum in question in a simple, non-convoluted way. Could you show the steps for getting from the sum I gave to that double integral please? — Fly by Night  ( talk )  00:56, 12 June 2011 (UTC)
 * Expand the integrand as a geometric series and integrate it term-by-term. That gives your series.  The integral itself can be evaluated by rotating the coordinate system through an angle of $$\pi/4$$, which transforms the integrand into something of the form $$(1-x^2/2+y^2/2)^{-1}$$.  The resulting integral can be dealt with by elementary means.   Sławomir Biały  (talk) 02:00, 12 June 2011 (UTC)
 * Here is a link.  Sławomir Biały  (talk) 02:28, 12 June 2011 (UTC)
 * There is another elementary proof at our article Basel problem (along with Euler's proof, mentioned below, and a proof using Fourier series).  Sławomir Biały  (talk) 02:47, 12 June 2011 (UTC)
 * Excellent. How very cunning. Thanks for that. — Fly by Night  ( talk )  14:27, 12 June 2011 (UTC)
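The expansion step described above can be checked numerically: over $$[0,1]^2$$ we have $$\int_0^1\!\int_0^1 (xy)^k\,dx\,dy = 1/(k+1)^2$$, so integrating the geometric series term by term turns the double integral into the series. A rough sketch (the grid size and truncation point are arbitrary choices):

```python
import math

# Midpoint-rule check that the integral of (x*y)**k over the unit
# square equals 1/(k+1)**2, for the first few k.
n = 400
pts = [(i + 0.5) / n for i in range(n)]
for k in range(4):
    approx = sum((x * y) ** k for x in pts for y in pts) / n**2
    assert abs(approx - 1.0 / (k + 1) ** 2) < 1e-4

# The resulting series tends to pi^2/6; the tail beyond N is about 1/N.
N = 100_000
partial = sum(1.0 / j**2 for j in range(1, N + 1))
assert abs(partial + 1.0 / N - math.pi**2 / 6) < 1e-8
print("checks passed")
```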


 * There's a description of Euler's proof here. AndrewWTaylor (talk) 23:25, 11 June 2011 (UTC)
 * Thanks. I'll take a look in the morning. The non-LaTeX font makes it hard to read at this time of night. — Fly by Night  ( talk )  00:56, 12 June 2011 (UTC)


 * See question 8 here: http://www.boardofstudies.nsw.edu.au/hsc_exams/hsc2010exams/pdf_doc/2010-hsc-exam-mathematics-extension-2.pdf . (I don't really know anything about Fourier series, but the proof which the student is led through doesn't explicitly mention them; judge for yourself whether they are "involved" implicitly.) — Anonymous Dissident  Talk 23:27, 11 June 2011 (UTC)
 * Thanks a lot for taking the time to find that link. It looks to me as though Question 8 is basically asking people to compute the integrals involved in calculating the Fourier series. — Fly by Night  ( talk )  00:56, 12 June 2011 (UTC)

What the hell? Have you heard of Parseval's identity you guys? — Preceding unsigned comment added by 49.2.4.186 (talk) 02:08, 12 June 2011 (UTC)
 * The OP wants a proof that does not involve Fourier series.  Sławomir Biały  (talk) 02:23, 12 June 2011 (UTC)
 * Quite! — Fly by Night  ( talk )  14:27, 12 June 2011 (UTC)

Sure but I'm just wondering whether FBN (= "Fly by Night" are you an owl?) here knows Parseval's identity. Because he "claims" the integrals being computed in Question 8 are "basically those involved in calculating the Fourier series". I looked at that and it sure as hell doesn't look like that to me dudes. Maybe FBN needs to brush up on his theory of integration. BTW, see my PDF http://empslocal.ex.ac.uk/people/staff/rjchapma/etc/zeta2.pdf for some cool methods on how to sum this series. — Preceding unsigned comment added by 49.2.4.186 (talk) 00:58, 13 June 2011 (UTC)
 * Ask yourself this: if I didn't know that Fourier series could be used to evaluate the sum, and therefore know of Parseval's identity, then why would I ask for a proof that doesn't involve Fourier series? As for Question 8, I looked at that at 3 o'clock in the morning. The question involves terms of the form $$A_n$$ and $$B_n$$, while integrating $$x^2$$ multiplied by trigonometric terms. Sound familiar? Thanks for the offer, but you can keep your "cool methods". According to the Exeter website, Robin Chapman lives and works in Exeter, but your IP is from Sydney, Australia. Are you sure it's your PDF? — Fly by Night  ( talk )  14:34, 13 June 2011 (UTC)
 * I think it's a valid point that the Question 8 argument is elementary in a way that an argument using properties of Fourier series is not. I'm not sure how one motivates this approach, though. Sławomir Biały  (talk) 14:45, 13 June 2011 (UTC)
 * I agree. I just didn't read it properly, saw all the $$A_n$$s, $$B_n$$s, and trigonometric terms, and thought it was Fourier series. — Fly by Night  ( talk )  17:00, 13 June 2011 (UTC)

I'll tell you Parseval's identity. It's more conceptual to think of $$L^2$$, the space of measurable square-integrable functions on $$[-\pi,\pi]$$, as a complex inner product space (a Hilbert space). The idea is that the space has an orthonormal basis (other orthonormal bases can be built from wavelets) such as $$\{e^{inx}\}$$ for $$n$$ an integer, suitably normalised. If $$f$$ is a function in $$L^2$$, you can take its inner product against each of these basis functions and get the "components" of $$f$$. The point is that, as you'd expect in any Hilbert space, the components are the only relevant data when it comes to taking inner products. So you can find the inner product (an integral) of two functions just by knowing their Fourier coefficients. Pretty cool math, dude? So basically I advise you to find the Fourier coefficients of the function $$x$$ on $$[-\pi,\pi]$$, take the inner product of this function with itself, and apply Parseval. The PROOF of Parseval is probably well beyond the scope of your education; you'll see it when you get to university. It involves proving the completeness of the trigonometric system (in the $$L^2$$ case), which can be done nicely with convolution, but I'll leave the details to your lecturers in first-year undergrad. — Preceding unsigned comment added by 49.2.4.186 (talk) 09:31, 14 June 2011 (UTC)
 * Thanks, but you seem to miss the point of my question. I was interested in alternative proofs that did not involve Fourier series and, therefore, did not involve Parseval's identity. I'm fully aware of Parseval's identity; I first learnt it as a second-year undergraduate in 2002. It's the only way I know of evaluating the sum I gave; that's why I titled the section Alternative Proof and went on to ask for a proof without Fourier series. I know you're new around here, so I recommend that you read WP:CIVIL before you make any more posts. Thanks for taking the time, but there's really no need to write lengthy posts about topics the OP has explicitly said they are not interested in. Thanks again. — Fly by Night  ( talk )  12:17, 14 June 2011 (UTC)
 * P.S. Don't forget to sign your posts with four tildes, like this: ~~~~ . — Fly by Night  ( talk )  12:18, 14 June 2011 (UTC)
 * Your civility is unwarranted, Fly by Night. --72.179.51.84 (talk) 15:31, 14 June 2011 (UTC)
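For what it's worth, the Parseval route sketched above can be verified numerically: for $$f(x) = x$$ on $$[-\pi,\pi]$$ the Fourier sine coefficients are $$b_n = 2(-1)^{n+1}/n$$, and Parseval equates $$\frac{1}{\pi}\int_{-\pi}^{\pi} x^2\,dx = \sum_n b_n^2$$. A minimal check (the truncation point is an arbitrary choice):

```python
import math

# Parseval for f(x) = x on [-pi, pi]:
#   (1/pi) * integral of x^2  =  sum of b_n^2  with  b_n = 2*(-1)**(n+1)/n.
lhs = 2 * math.pi**2 / 3            # (1/pi) * [x^3/3] from -pi to pi
N = 200_000
rhs = sum((2.0 / n) ** 2 for n in range(1, N + 1))
# rhs is 4*zeta(2) minus a tail of roughly 4/N, so correct for the tail.
assert abs(lhs - (rhs + 4.0 / N)) < 1e-6
print(lhs / 4, math.pi**2 / 6)      # both equal zeta(2)
```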

Here's a nice computation (sketch). Recall that $$(1+z/n)^n\to e^z$$ for complex $$z$$. Define polynomials $$p_n(x):=\frac{1}{2i}\Big( (1+ix/n)^n - (1-ix/n)^n \Big),$$ and note that $$p_n(x)\to \sin x.$$ Factorize $$p_n(x)$$ into linear factors. Keep track of the first coefficients of $$p_n$$ in terms of their roots, and equate the limits (this way you can also compute, in an elementary way, $$\zeta(4)$$, $$\zeta(6)$$, etc.). Also, by the dominated convergence theorem for products, the factorizations converge to an infinite product, so you also recover the Euler product for $$\sin x$$. --pm a 19:27, 13 June 2011 (UTC)
 * Nice, thanks pma. — Fly by Night  ( talk )  12:24, 14 June 2011 (UTC)
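This sketch can be tested numerically: the roots of $$p_n$$ are $$x = n\tan(k\pi/n)$$, and matching the $$x^3$$ coefficient of $$p_n(x) = x\prod_k(1 - x^2/r_k^2)$$ against the binomial expansion gives $$\sum_{k=1}^{(n-1)/2} \cot^2(k\pi/n) = (n-1)(n-2)/6$$ for odd $$n$$. A quick check (the choice $$n = 1001$$ is arbitrary):

```python
import math

# For odd n, the positive roots of p_n are r_k = n*tan(k*pi/n);
# comparing x^3 coefficients gives sum of cot^2(k*pi/n) = (n-1)(n-2)/6.
n = 1001
m = (n - 1) // 2
s = sum(1.0 / math.tan(k * math.pi / n) ** 2 for k in range(1, m + 1))
assert abs(s - (n - 1) * (n - 2) / 6) < 1e-5
# Since cot(t) ~ 1/t for small t, dividing by n^2 and letting n grow
# turns this into sum 1/(k*pi)^2 = 1/6, i.e. zeta(2) = pi^2/6.
print(s / n**2, 1 / 6)
```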