Wikipedia:Reference desk/Archives/Mathematics/2011 April 22

= April 22 =

Integration by parts for improper integrals
I'm wondering what the formal justification is for integration by parts when the integral is improper. For example, we can't integrate xe^x by parts from -infinity to infinity because it doesn't converge. We need some assumptions on convergence of the integrals.

My question is: what exactly are the assumptions on the integrals? That is, under what conditions can we argue that, for two real-valued functions f and g on the real line,

$$ \int_{-\infty}^{\infty} f'(x)g(x) \, dx = \Big[ f(x)g(x) \Big]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f(x)g'(x) \, dx \, ? $$

This isn't a homework question. I'm just interested in knowing when I can apply this rule, because I'm studying Schwartz functions on Euclidean space and need to compute the Fourier transform of a partial derivative (with respect to a multi-index) of a Schwartz function.

Could you please give proofs of your claims when possible? My guess would be that the formula above holds if f and g are continuously differentiable and, for example, one has a bounded derivative and the other is in L^1 (see Lp space). I'm not sure how to prove this, or whether there are other such formulations.

Thanks. —Preceding unsigned comment added by 180.216.2.24 (talk) 03:46, 22 April 2011 (UTC)
 * A "properly improper" integral is a limit:
 * $$ \lim_{b\to\infty} \int_a^b f(x)\,dx. $$
 * So apply integration by parts to the integral from a to b, and then evaluate the limit. For example, in
 * $$ \int_0^\infty x^3 e^{-x}\,dx, $$
 * you have
 * $$ \left.uv\right|_{x:=0}^{x:=b} - \int_0^b v\,du $$
 * and then you use L'Hopital's rule to find the first of the two limits as b &rarr; &infin;. Michael Hardy (talk) 04:52, 22 April 2011 (UTC)
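This "integrate by parts on [a, b], then take the limit" recipe can be sketched with sympy (purely illustrative; sympy is an assumption of the example, not part of the original discussion):

```python
# Sketch of the recipe: find an antiderivative, evaluate on the finite
# interval [0, b], then take b -> oo.
import sympy as sp

x, b = sp.symbols('x b', positive=True)

# Antiderivative of x^3 * e^(-x) (sympy integrates by parts internally).
F = sp.integrate(x**3 * sp.exp(-x), x)

# Evaluate on the finite interval [0, b] ...
finite = F.subs(x, b) - F.subs(x, 0)

# ... then take the limit b -> oo; the boundary terms die because
# polynomials lose to e^b (this is where L'Hopital's rule comes in).
value = sp.limit(finite, b, sp.oo)
print(value)  # 6, i.e. Gamma(4) = 3!
```

The same pattern works whenever the boundary terms have a computable limit.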


 * The point is that integration by parts allows us to calculate anti-derivatives, which are functions. In fact, the general formula involves an indefinite integral over an improper interval, i.e. one without limits. It follows from the product rule. Consider u(x) and v(x). By the product rule:
 * $$\frac{\operatorname{d}(uv)}{\operatorname{d}x} = \frac{\operatorname{d}u}{\operatorname{d}x}\, v + u \, \frac{\operatorname{d}v}{\operatorname{d}x} \, . $$
 * We then integrate both sides with respect to x:
 * $$ uv = \int \! v \, \operatorname{d}u + \int \! u \, \operatorname{d}v = \int \! v(x)u'(x) \, \operatorname{d}x + \int \! u(x)v'(x) \, \operatorname{d}x \, . $$
 * There's no mention of any limit. These are just formal functional expressions. You need to make sense of the limits, if you use them. But that's just the same as having to be careful of the function 1/x when x = 0. You can integrate xe^x by parts: you get (x&thinsp;−&thinsp;1)e^x&thinsp;+&thinsp;c. In fact, by using integration by parts we can prove that, for a positive integer n,
 * $$ \int \! x^ne^x \, \operatorname{d}x = \sum_{k=0}^n \frac{(-1)^k \, n! \, x^{n-k}}{(n-k)!}e^x + c \, . $$
 * Again, there are no limits, only anti-derivatives. — Fly by Night  ( talk )  16:15, 22 April 2011 (UTC)
 * An integral with no limits is indefinite, not improper. --Tango (talk) 16:00, 23 April 2011 (UTC)
 * Thanks Tango, but I didn't mention improper integrals, I mentioned improper intervals. I'm not sure what one of those is either :-) — Fly by Night  ( talk )  17:52, 25 April 2011 (UTC)
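For what it's worth, the closed form for ∫ x^n e^x dx can be checked mechanically by differentiating it back; note that it needs the alternating sign (-1)^k (for n = 1 the antiderivative is (x − 1)e^x). A sympy sketch:

```python
# Check the closed form
#   int x^n e^x dx = e^x * sum_{k=0}^{n} (-1)^k * n!/(n-k)! * x^(n-k) + c
# by differentiating the right-hand side and comparing with x^n e^x.
import sympy as sp

x = sp.symbols('x')

def rhs(n):
    return sp.exp(x) * sum(
        (-1)**k * sp.factorial(n) / sp.factorial(n - k) * x**(n - k)
        for k in range(n + 1)
    )

for n in range(5):
    # d/dx of the candidate antiderivative should recover x^n e^x
    assert sp.simplify(sp.diff(rhs(n), x) - x**n * sp.exp(x)) == 0
print("formula verified for n = 0..4")
```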

Sometimes you can't use partial integration for an integral that does converge, because the limit doesn't exist and the integral you get doesn't converge. Then you need to compute the limit of the two terms added together, which can be awkward. Instead, what is often much easier is to introduce some parameter in the integrand such that the divergences don't occur when the parameter is in some region of the complex plane. You then compute the integral for that case, and the integral for general values of the parameter can then often be found by analytic continuation. Count Iblis (talk) 17:12, 22 April 2011 (UTC)
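A minimal sympy sketch of this parameter trick, applied to ∫₀^∞ sin(x)/x dx (the choice of the regulator e^{-ax} is an assumption of the example, not something Count Iblis specified):

```python
# Regulate I = int_0^oo sin(x)/x dx with a damping factor e^{-a x}, a > 0,
# so I(a) = int_0^oo e^{-a x} sin(x)/x dx converges nicely.
import sympy as sp

x = sp.symbols('x', positive=True)
a = sp.symbols('a', positive=True)

# Differentiating I(a) under the integral sign in a kills the 1/x:
dI = -sp.integrate(sp.exp(-a*x) * sp.sin(x), (x, 0, sp.oo))
print(dI)  # -1/(a**2 + 1)

# Integrate back in a; the condition I(a) -> 0 as a -> oo fixes the constant,
# giving I(a) = pi/2 - atan(a).
I = sp.integrate(dI, a) + sp.pi/2

# Letting the parameter return to a = 0 recovers the original integral.
print(sp.limit(I, a, 0))  # pi/2
```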

Differentiating under the integral sign
Hi again guys. I'm interested in the formal justification for differentiating under an integral sign. So if f is a function of two variables x and t, under what conditions is it true that

$$ \frac{\operatorname{d}}{\operatorname{d}t} \int f(x,t) \, \operatorname{d}x = \int \frac{\partial f(x,t)}{\partial t} \, \operatorname{d}x \, ? $$

See Differentiation under the integral sign. I think I see why this is true when it is a definite integral over a bounded set. Then we can argue by Lebesgue's dominated convergence theorem, assuming the partial derivative of f with respect to t is continuous in t. This is because (f(t+h) − f(t))/h − (partial derivative of f with respect to t) will be dominated by a constant, which is an L^1 function on a bounded interval, and we can interchange integration and limit to show this goes to 0 as h goes to 0.

This argument isn't so hot when the interval is unbounded because I'm not sure what L^1 function dominates a difference quotient of f minus the partial derivative of f in that case. Constants do it if the partial derivative of f is uniformly continuous (with respect to t) but a constant isn't an L^1 function on the line.

Can you please state a formal theorem, with the appropriate hypotheses, under which differentiating under the integral sign is justified for improper integrals? I'd prefer a proof using the Lebesgue dominated convergence theorem, though any proof is OK.

Thanks. —Preceding unsigned comment added by 180.216.2.24 (talk) 03:55, 22 April 2011 (UTC)
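As a sanity check (not a proof) that the interchange can hold on an unbounded interval, here is a hedged sympy example with f(x,t) = e^{-tx²}, t > 0. Near any t₀ > 0, the t-partial derivative -x²e^{-tx²} is dominated in absolute value by the L¹ function x²e^{-t₀x²/2}, so the dominated convergence argument from the question does apply to this f:

```python
# Compare d/dt of the integral with the integral of the t-partial derivative
# for f(x, t) = exp(-t*x**2) on the unbounded interval (0, oo), t > 0.
import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.symbols('t', positive=True)

f = sp.exp(-t * x**2)

# d/dt of the integral ...
lhs = sp.diff(sp.integrate(f, (x, 0, sp.oo)), t)

# ... versus the integral of the t-partial derivative.
rhs = sp.integrate(sp.diff(f, t), (x, 0, sp.oo))

# Both sides agree: -sqrt(pi)/(4*t**(3/2)).
assert sp.simplify(lhs - rhs) == 0
print(lhs)
```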

Weak Nullstellensatz
Hi again guys. Sorry for all the questions. This is the last one. I know the weak Nullstellensatz, but I'm not sure how to prove the converse. Specifically, why is it true that if K is a field and (a_1,...,a_n) is an n-tuple of elements of K, then the ideal (generated by x_1 - a_1, ..., x_n - a_n):

(x_1 - a_1,..., x_n - a_n) in K[x_1,...,x_n]

is maximal? I know that it is contained in the kernel of the evaluation homomorphism that takes a polynomial in x_1,...,x_n to its value at (a_1,...,a_n), but how do we show equality in this inclusion? That is, how do we show that a polynomial in n variables that vanishes at (a_1,...,a_n) is in the ideal (x_1 - a_1,...,x_n - a_n)? Thanks guys. I really appreciate your help. :) —Preceding unsigned comment added by 180.216.2.24 (talk) 03:58, 22 April 2011 (UTC)


 * The proof of the last question: how do we show that a polynomial in n variables that vanishes at (a1,&hellip;,an) is in the ideal (x1 – a1,&hellip;,xn – an)? I think that can be proved using Hadamard's lemma, although it might only be valid when the field is R or C; I'm not sure. — Fly by Night  ( talk )  17:18, 22 April 2011 (UTC)


 * Factoring out by the ideal $$(x_1 - a_1,..., x_n - a_n)$$ identifies the $$x_i$$ with the scalars $$a_i\in K$$. Hence the composite
 * $$K\subset K[x_1,\dots,x_n]\to K[x_1,\dots,x_n]/(x_1-a_1,\dots,x_n-a_n)$$
 * is surjective. But, since K is a field and 1 is clearly not in the ideal $$(x_1 - a_1,..., x_n - a_n)$$, this is also injective, and so is an isomorphism.   Sławomir Biały  (talk) 00:45, 23 April 2011 (UTC)

Cool, thanks Slawomir! I should have observed that, because the Noether normalization lemma implies the weak Nullstellensatz anyway, and the proof of that implication uses essentially the same idea as what you (Slawomir) said. —Preceding unsigned comment added by 180.216.2.24 (talk) 01:30, 23 April 2011 (UTC)
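The membership question can also be seen concretely by multivariate polynomial division: dividing f by the x_i − a_i leaves a constant remainder, and evaluating at (a_1,…,a_n) shows that remainder equals f(a_1,…,a_n), hence 0 when f vanishes there. A sympy sketch (the particular polynomial is just an illustrative choice):

```python
# A polynomial vanishing at (1, 2) lies in the ideal (x - 1, y - 2):
# multivariate division writes f = q1*(x - 1) + q2*(y - 2) + r with r
# constant, and r = f(1, 2) = 0.
import sympy as sp

x, y = sp.symbols('x y')

f = x**2 + y**2 - 5  # vanishes at (1, 2)
quotients, r = sp.reduced(f, [x - 1, y - 2], x, y)

print(r)  # 0, so f is in the ideal
# Sanity check: the division identity really reconstructs f.
assert sp.expand(quotients[0]*(x - 1) + quotients[1]*(y - 2) + r) == f
```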

Fourier transform of Schwartz functions
Can anyone give a brief overview of why it is fundamentally convenient to study the Fourier transform on the Schwartz functions and then extend it to other function spaces like L^1? I understand that convergence problems with integrals are avoided because Schwartz functions decay rapidly at infinity, but what other uses does this have? —Preceding unsigned comment added by 180.216.2.24 (talk) 04:01, 22 April 2011 (UTC)


 * Our article on Schwartz functions gives some properties. The key property, though, is probably that the Fourier transform of a Schwartz function is another Schwartz function. This does not hold for other spaces like $$L^1$$. Invrnc (talk) 12:59, 22 April 2011 (UTC)


 * Everything you wished was true of the Fourier transform actually is true of the Fourier transform of Schwartz functions. It's given as an integral (unlike the FT on L^2), and its inverse is also an integral (unlike the FT on L^1).  It is continuous on the Schwartz topology (like continuity on L^2).  You can differentiate under the sign of the integral.  It is easy to show that it satisfies the convolution identity.  Finally, it allows you to define the Fourier transform of any tempered distribution by taking the transpose.   Sławomir Biały  (talk) 13:03, 22 April 2011 (UTC)
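A small sympy illustration of two of these points (sympy's convention, F(k) = ∫ f(x) e^{-2πikx} dx, is an assumption of the example):

```python
# The Fourier transform of the Schwartz function exp(-x**2) is again a
# Gaussian, hence again a Schwartz function.
import sympy as sp

x, k = sp.symbols('x k', real=True)

F = sp.fourier_transform(sp.exp(-x**2), x, k)
print(F)  # sqrt(pi)*exp(-pi**2*k**2)

# In this convention, d/dx on the function side becomes multiplication by
# 2*pi*I*k on the transform side: the boundary terms in the integration by
# parts vanish because Schwartz functions decay rapidly, which is exactly
# the justification the OP asked about in the first question.
```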

Algebraic Topology
How do you compute the homology groups and Betti numbers of the 2-sphere? Mathematics2011 (talk) 16:01, 22 April 2011 (UTC)


 * You can use the Mayer-Vietoris sequence. Alternatively, triangulate the sphere and compute the homology groups combinatorially.  Sławomir Biały  (talk) 16:12, 22 April 2011 (UTC)


 * Alternatively, you could use some Morse homology. Imagine the sphere sat on the xy-plane and consider the height function, given by h(x,y,z) = z, restricted to the sphere, i.e. h : S2 → R. The local maxima, the saddle points, and the local minima give us the homology groups. The North Pole is the only local maximum, there are no saddle points, and the South Pole is the only local minimum. That tells us that the homology groups are
 * $$ H_0({\mathbf S}^2,\mathbf{Z}) \cong \mathbf{Z}, \ H_1({\mathbf S}^2,\mathbf{Z}) \cong \{0\}, \ H_2({\mathbf S}^2,\mathbf{Z}) \cong \mathbf{Z} \, . $$
 * The same method works for the torus, and in fact for any compact, orientable surface of genus g. This tells us that the homology groups over Z are
 * $$ H_0 \cong \mathbf{Z}, \, H_1 \cong \mathbf{Z}^{2g}, \, H_2 \cong \mathbf{Z} \, . $$
 * The Betti numbers are given by the ranks of the homology groups. In the case of the orientable, compact surface of genus g, the Betti numbers are 1, 2g and 1. The Euler characteristic of an orientable, compact surface of genus g, say M, is given by the alternating sum of the Betti numbers; i.e. &chi;(M) = 1 − 2g + 1 = 2 − 2g. — Fly by Night  ( talk )  19:16, 22 April 2011 (UTC)

thank you very much...Mathematics2011 (talk) 08:29, 24 April 2011 (UTC)
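The "triangulate and compute combinatorially" route can be sketched in a few lines. Here the triangulation is the boundary of a tetrahedron, and working over Q means the Betti numbers come from ranks alone, with no torsion bookkeeping, which suffices for S² (a sketch under those assumptions, not a general homology routine):

```python
# Betti numbers of S^2 from the boundary-of-a-tetrahedron triangulation:
# build the simplicial boundary matrices and take ranks over Q.
from itertools import combinations
import numpy as np

vertices = range(4)
edges = list(combinations(vertices, 2))      # 6 edges
triangles = list(combinations(vertices, 3))  # 4 faces

def boundary_matrix(cells, faces):
    """Matrix of the boundary map C_k -> C_{k-1}: dropping the i-th
    vertex of a simplex contributes the sign (-1)**i."""
    M = np.zeros((len(faces), len(cells)))
    for j, cell in enumerate(cells):
        for i in range(len(cell)):
            face = cell[:i] + cell[i+1:]
            M[faces.index(face), j] = (-1)**i
    return M

d1 = boundary_matrix(edges, [(v,) for v in vertices])  # C_1 -> C_0
d2 = boundary_matrix(triangles, edges)                 # C_2 -> C_1

r1, r2 = np.linalg.matrix_rank(d1), np.linalg.matrix_rank(d2)

b0 = len(vertices) - r1    # dim C_0 - rank d1   (d0 = 0)
b1 = len(edges) - r1 - r2  # dim ker d1 - rank d2
b2 = len(triangles) - r2   # dim ker d2          (no 3-cells)
print(b0, b1, b2)  # 1 0 1
```

These match the Morse-theoretic answer above, and the alternating sum 1 − 0 + 1 = 2 recovers the Euler characteristic of the sphere.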