Wikipedia:Reference desk/Archives/Mathematics/2009 March 7

= March 7 =

derivative of a series
Suppose we are given the series: $$u=\sum_{l=-\infty}^{\infty}\sum_{n=1}^{\infty}\epsilon^nU^{(l)}_ne^{il(kx-\omega t)}$$. Here $$u=u(U,x,t)$$ where $$U=U(\zeta,\tau)$$.

I wish to compute $$\frac{\partial^2u^2}{\partial x^2}$$. How do I find a closed-form expression for the above series so that I can compute the desired derivative? Thanks--122.160.195.98 (talk) 06:28, 7 March 2009 (UTC)


 * Because $$\frac{d(f(x)+g(x))}{dx} = \frac{d(f(x))}{dx} + \frac{d(g(x))}{dx}$$, I suppose you just need to find the derivatives of the terms. Also, if you decompose the $$e^{f(x)}$$ factor, which I won't specify, I guess the series would turn into a sum of Fourier series. The Successor of Physics  07:20, 7 March 2009 (UTC)
 * How to compute u^2 is the main problem---OP —Preceding unsigned comment added by 122.160.195.98 (talk) 09:44, 7 March 2009 (UTC)
 * I suspect this is supposed to be an application of Parseval's theorem or the related Parseval's identity but I'm not up to working it out. 76.195.10.34 (talk) 12:53, 7 March 2009 (UTC)
 * You can't just blindly differentiate an infinite series term-by-term. That requires uniform convergence. --Tango (talk) 14:00, 7 March 2009 (UTC)
 * To OP: From my comment you should see that if differentiating once is ok, then just differentiate it again! The Successor of Physics  15:05, 7 March 2009 (UTC)
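Tango's caution can be illustrated numerically. The following is a hypothetical Python sketch (not from the discussion above): the sequence $$f_n(x)=\sin(n^2x)/n$$ converges uniformly to 0 since $$|f_n| \leq 1/n$$, yet the term-by-term derivatives $$f_n^\prime(x)=n\cos(n^2x)$$ blow up, so the limit of the derivatives need not be the derivative of the limit.

```python
# Classic counterexample: f_n(x) = sin(n^2 x)/n -> 0 uniformly,
# but f_n'(x) = n cos(n^2 x) is unbounded as n grows.
import math

for n in (10, 100, 1000):
    fn_at_0 = math.sin(n**2 * 0.0) / n   # stays 0: the limit function is identically 0
    dfn_at_0 = n * math.cos(n**2 * 0.0)  # equals n: the derivatives diverge
    print(n, fn_at_0, dfn_at_0)
```

So even though the limit function is constant (derivative 0 everywhere), the sequence of derivatives diverges at every point.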


 * To Tango: I don't understand, Tango, why uniform convergence is required. The Successor of Physics  15:05, 7 March 2009 (UTC)
 * It just is. If the series doesn't converge uniformly then its derivative may well be different from the term-by-term derivative. See Uniform convergence. You can't just assume that limits behave as you want them to behave, you have to actually prove it. --Tango (talk) 15:11, 7 March 2009 (UTC)
 * Uniform convergence of the series isn't enough, actually. You need uniform convergence of the term-by-term derivative, as our article states. Algebraist 15:18, 7 March 2009 (UTC)
 * Strictly speaking, I never said it was! ;) I couldn't remember the exact theorem (I tend to avoid analysis where possible), so was careful to speak vaguely. --Tango (talk) 15:25, 7 March 2009 (UTC)
 * I am the OP (from a different IP). This problem is from a paper I am reading and the author has assumed all necessary convergence conditions. The main problem is how to compute u^2. Everything else will come after that--118.94.73.74 (talk) 15:42, 7 March 2009 (UTC)
 * To compute u^2 you need to multiply two infinite series: see Cauchy product for how to do that.  Your situation is a bit messy but a direct extrapolation from the description in the article.  Eric.  131.215.158.184 (talk) 04:20, 8 March 2009 (UTC)
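To illustrate the Cauchy product Eric mentions, here is a minimal Python sketch; the function name and the toy coefficients are mine, standing in for a truncated version of the OP's double series.

```python
# Squaring a (truncated) series via the Cauchy product:
# the coefficient of x^k in the product is sum over i+j=k of a_i * b_j.

def cauchy_product(a, b):
    """Coefficients of (sum a_n x^n)(sum b_n x^n), given truncated coefficient lists."""
    n = len(a) + len(b) - 1
    c = [0.0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Example: (1 + x)^2 = 1 + 2x + x^2
print(cauchy_product([1, 1], [1, 1]))  # [1.0, 2.0, 1.0]
```

For the OP's series the index set is doubly infinite (in l and n), so one convolution of this shape is needed for each index, but the principle is the same.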

confusion about the proof of the splitting lemma in the wikipedia article
I have been reading the proof of the splitting lemma in the wikipedia article of that name and was wondering if anyone could help me to understand the very first part.

At the very start of the proof, to show that 3. (direct sum) implies 1. (left split), they take t as the natural projection of A×C onto A, i.e. mapping (x,y) in B to x in A. Now why does this satisfy the condition that tq is the identity on A? Similarly, to show that 3. implies 2., they take u as the natural injection of C into the direct sum of A and C (A×C), i.e. mapping y in C to (1,y). How does this satisfy the condition that ru is the identity on C?

It would appear to me that they mean something else by the "natural" projection and injection but I can't see what this would be? Thanks for your help and I'm sorry if this is badly worded —Preceding unsigned comment added by Jc235 (talk • contribs) 16:44, 7 March 2009 (UTC)
 * 3. states not only that B is the direct sum of A and C, but also that the maps in the SES are the obvious ones: that q is the natural injection from A to A×C, i.e. q(a)=(a,0) and that r is the natural projection from A×C to C, i.e. r(a,c)=c. Then, setting t to be the natural projection of A×C onto A, we have that tq(a)=t(a,0)=a for all a in A. Similarly, if u is the natural injection of C into A×C, then ru(c)=r(0,c)=c for all c in C. (note: for the case of a general abelian category, talking about elements like this makes no sense, but this suffices for concrete examples) Algebraist 17:07, 7 March 2009 (UTC)
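Algebraist's computation can be checked concretely; here is a small Python sketch taking A = C = the integers and B = A×C as pairs (written additively, with identity 0; the lambda names are mine).

```python
# The four natural maps of the splitting lemma for B = A x C,
# modelled on integer pairs.
q = lambda a: (a, 0)        # natural injection  A -> A x C
r = lambda pair: pair[1]    # natural projection A x C -> C
t = lambda pair: pair[0]    # natural projection A x C -> A
u = lambda c: (0, c)        # natural injection  C -> A x C

# tq = id_A and ru = id_C, verified on a few elements:
assert all(t(q(a)) == a for a in range(-5, 6))
assert all(r(u(c)) == c for c in range(-5, 6))
```

The point is simply that projecting onto a coordinate after injecting into that coordinate gives back what you started with.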

Confusion about differentiability & continuity
Hi there - I'm looking at the function $$e^{-\frac{1}{x^2}}$$ - the standard example of an infinitely differentiable non-analytic function - and I'm wondering exactly how you prove that all of its derivatives vanish at x=0. In general, is it invalid to differentiate the function as you normally would (assuming nice behaviour) to get, in this example, $$\frac{2e^{-\frac{1}{x^2}}}{x^3}$$, and then simply say it may (or may not) be differentiable at the 'nasty points' such as x=0? Or are there functions which have a derivative which is defined for well-behaved points, but also has a well-defined value at non-differentiable points? Apologies if that made no sense. How would you then progress to show that all $$f^{(k)}(0)$$ are equal to 0? Do you need induction?

On continuity, I'm using the definition effectively provided by Heine, i.e. that a function is continuous at c if $$\lim_{x \to c}f(x)=f(c)$$: but what happens if both the function and the limit of the function are undefined at a single point and continuity holds everywhere else? Because it's not like the function has a -different- limit to its value at c explicitly; they're just both undefined - does that still make it discontinuous? Incidentally, I'm thinking of a function like $$\frac{\sin(1/x)}{x^2}$$, where (if I'm not being stupid) both the function and its limit are undefined at 0, but I'm wondering more about the general case - would that make it discontinuous?

Many thanks, Spamalert101 (talk) 21:23, 7 March 2009 (UTC)Spamalert101
 * Firstly, $$e^{-\frac{1}{x^2}}$$ is not defined at 0, any more than $$\frac{\sin(1/x)}{x^2}$$ is. In the case of $$e^{-\frac{1}{x^2}}$$, however, there is a unique continuous function f on the real line extending the given function, and it's a standard abuse of notation to call this function (which takes the value $$e^{-\frac{1}{x^2}}$$ for x not zero and the value 0 at x=0) by the same name. Once you've done that, showing that f has derivative 0 at 0 is just a matter of applying the definition of differentiation: you have to show that $$\frac{e^{-\frac{1}{x^2}}}{x}$$ tends to zero as x tends to zero. At all other points, the chain rule (which only requires differentiability at the points involved; no nicer behaviour is needed) tells you that the derivative is $$\frac{2e^{-\frac{1}{x^2}}}{x^3}$$. Thus the derivative of f is the function g where g(x)=$$\frac{2e^{-\frac{1}{x^2}}}{x^3}$$ for x not zero and g(0)=0. Then you can show that g is also differentiable at 0 with derivative 0. Showing that f is infinitely differentiable at 0 requires some sort of induction. Inducting on the statement 'the nth derivative of f is 0 at zero and is some rational function times $$e^{-\frac{1}{x^2}}$$ elsewhere' ought to work. Algebraist 21:32, 7 March 2009 (UTC)
 * On your second question, the function f from R\{0} to R given by f(x)=$$\frac{\sin(1/x)}{x^2}$$ is a continuous function. It can't be extended to a function continuous at 0, though, so by an abuse of terminology it might be thought of as being a function on R discontinuous at 0. Algebraist 21:35, 7 March 2009 (UTC)
 * Or, without an abuse of terminology, you can just define f(0)=0, or whatever, and then it simply is a function continuous everywhere except 0. --Tango (talk) 21:53, 7 March 2009 (UTC)


 * Let me add this. In order to prove that a $$C^\infty$$ function $$f:\R\setminus\{0\}\to\R$$ extends to a $$C^\infty$$ function on $$\R$$ it is necessary and sufficient that all derivatives of f have limits as $$x\to 0$$, which is easily seen to be the case for exp(-1/x^2). One applies repeatedly the following well-known lemma of extension of differentiability, which is a simple consequence of the mean value theorem: "Let f be a continuous function on $$\R$$, differentiable in $$\R\setminus\{0\}$$; assume that the limit $$\lim_{x\to 0}f^\prime(x)=l$$ exists. Then f is differentiable at x=0 too, and $$f^\prime(0)=l$$" (going back to f(x):=exp(-1/x^2), the thing works because the k-th derivative of f has the form mentioned above by Algebraist, that is f(x) times a rational function). --pma (talk) 23:54, 7 March 2009 (UTC)
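For what it's worth, pma's criterion can be sanity-checked symbolically. A sketch assuming sympy is available (the variable names are mine): each of the first few derivatives of exp(-1/x^2) has limit 0 at the origin, since the exponential decays faster than any rational function blows up.

```python
# Check that the first few derivatives of exp(-1/x^2) tend to 0 as x -> 0,
# using sympy's symbolic differentiation and limits.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)
for n in range(4):
    lim = sp.limit(sp.diff(f, x, n), x, 0)
    print(n, lim)
```

Each printed limit is 0, consistent with the extension lemma above (this is a spot check of small n, of course, not a proof; the induction on the form "rational function times exp(-1/x^2)" handles the general case).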

Ah that's wonderful, thanks so much for all the help. As a matter of interest, I see the wikipedia article on 'an infinitely differentiable non-analytic function' gives that the analytic extension to $$\mathbb{C}$$ of $$z \mapsto e^{\frac{-1}{z}}$$ has an essential singularity at the origin - is the same true with z^2 in lieu of z? Also, another related article hints at constructing, via a sequence of piecewise definitions, a function g(x) from $$e^{\frac{-1}{x^2}}$$ with g(x)=0 for $$x \leq -2$$, g(x)=1 for $$-1 \leq x \leq 1$$, g(x)=0 for $$x \geq 2$$, such that g is infinitely differentiable - I've managed to construct such a piecewise function using $$\frac{x^2}{(1-x)^2}$$ and stretches, translations etc. of that, but how would you go about it with $$e^{\frac{-1}{x^2}}$$? Is there a nice way to create such a function? Spamalert101 (talk) 17:21, 10 March 2009 (UTC)Spamalert


 * Yes. — Emil J. 17:29, 10 March 2009 (UTC)
 * Yes, the essential singularity persists with z^2. You can see this by considering the value of the function for z small and either real or pure imaginary: for real z, the function goes to 0, while for pure imaginary z, it goes to infinity. The piece-defined functions you describe are normally called bump functions, and that article gives one way of constructing them. How did you manage to construct such a function with $$\frac{x^2}{(1-x)^2}$$? It seems impossible to me. Algebraist 01:36, 11 March 2009 (UTC)
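The construction from the bump function article can be sketched in Python; the names psi, phi, g are mine, and the formulas follow the standard smooth-step recipe (psi(x)=exp(-1/x) for x>0 and 0 otherwise, rather than the exp(-1/x^2) variant, but either works).

```python
# Standard bump-function construction:
#   psi is smooth, vanishing to all orders at 0;
#   phi = psi(x) / (psi(x) + psi(1-x)) is a smooth step: 0 for x<=0, 1 for x>=1;
#   g(x) = phi(x+2) * phi(2-x) is a smooth bump: 1 on [-1,1], 0 outside (-2,2).
import math

def psi(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def phi(x):
    return psi(x) / (psi(x) + psi(1 - x))  # denominator is never 0

def g(x):
    return phi(x + 2) * phi(2 - x)

print(g(0), g(1), g(-1), g(2), g(-2), g(3))  # 1.0 1.0 1.0 0.0 0.0 0.0
```

The denominator in phi never vanishes because x and 1-x cannot both be nonpositive, so at least one of psi(x), psi(1-x) is strictly positive; that is what makes the quotient smooth everywhere.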

Oh, did I make a mistake then? I sincerely doubt my own knowledge over yours! When would it become non-differentiable? I had the function tending to infinity on either side of ±3/2, but to 1 at ±1 and 0 at ±2, by defining the functions piecewise as above so they had a derivative which tended to 0 as x tended to ±1 or ±2 - for example $$1+\frac{(2(x-1))^2}{(1-2(x-1))^2}$$ on $$1 \leq x \leq \frac{3}{2}$$. Would it become non-differentiable at some point then? Spamalert101 (talk) 06:39, 11 March 2009 (UTC)Spamalert101