Wikipedia:Reference desk/Archives/Mathematics/2011 December 17

= December 17 =

Induction and integration by parts
Consider the proposition P(n):
 * $$\int_0^x \frac{f(u)(x-u)^n}{n!}\, du = \int_0^x \int_0^{u_n} \cdots \int_0^{u_1} f(t)\, dt\, du_1 \cdots du_n.$$

P(1) was simple enough to prove, but I'm having trouble with the inductive step, P(k) implies P(k + 1). It seems to necessarily involve differentiating something of the form of the original left-hand side. Interestingly, this is possible but long-winded if you expand with the binomial theorem, extract each power of x from the resulting sum of integrals, and apply the fundamental theorem of calculus along with the product rule to differentiate each term of the sum one by one. If I'm not wrong, it is true that
 * $$\frac{d}{dx} \left [ \int_0^x \frac{f(u)(x-u)^n}{n!} du \right ] = \int_0^x \frac{f(u)(x-u)^{n-1}}{(n-1)!} du,$$

which looks deceptively simple. My question: is there some easy way of differentiating $$\textstyle \int_0^x g(x,u) du$$ without multivariable calculus, or was my example exceptional (or exceptionally clean) for some reason? My observation is that it is only possible because the integral is reducible to something of the form $$\textstyle \Sigma \int h(x) k(u) du$$; i.e., x and u can be separated sufficiently. This all falls under the general query of how best to pull off the induction; I'd be interested in a more elegant way if one exists. Thanks in advance. — Anonymous Dissident  Talk 12:15, 17 December 2011 (UTC)
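As a sanity check on the claimed differentiation identity, it can be verified symbolically with SymPy (a sketch only; the concrete choices f(u) = e^u and n = 3 below are arbitrary, and any continuous f should behave the same way):

```python
import sympy as sp

x, u = sp.symbols('x u')
n = 3            # arbitrary sample exponent
f = sp.exp(u)    # arbitrary sample integrand

# Left-hand side: integral with kernel (x - u)^n / n!, then differentiate in x.
lhs = sp.integrate(f * (x - u)**n / sp.factorial(n), (u, 0, x))
d_lhs = sp.diff(lhs, x)

# Claimed derivative: the same integral with n replaced by n - 1.
rhs = sp.integrate(f * (x - u)**(n - 1) / sp.factorial(n - 1), (u, 0, x))

print(sp.simplify(d_lhs - rhs))  # prints 0
```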
 * I think the whole approach is making too much out of the problem. If you apply another integral to the P(k) case you get
 * $$\int_0^t \int_0^x \frac{f(u)(x-u)^n}{n!}\, du\, dx = \int_0^t \int_0^x \int_0^{u_n} \cdots \int_0^{u_1} f(t)\, dt\, du_1 \cdots du_n\, dx$$
 * and the job is to simplify the LHS. But if you swap the order of integration, as in first-year calculus, the LHS of P(k+1) pops right out. --RDBury (talk) 16:16, 17 December 2011 (UTC)
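The swap RDBury describes can be sketched concretely with SymPy for a single inner integral (the integrand cos(u) below is an arbitrary choice): over the triangular region 0 ≤ u ≤ x ≤ t, fixing u first means x runs over [u, t], so the inner integration just contributes a factor (t − u).

```python
import sympy as sp

t, x, u = sp.symbols('t x u')
f = sp.cos(u)  # arbitrary sample integrand

# Original order: inner over u in [0, x], outer over x in [0, t].
original = sp.integrate(sp.integrate(f, (u, 0, x)), (x, 0, t))

# Swapped order: for fixed u, x runs over [u, t], so the inner
# integral of 1 dx contributes the factor (t - u).
swapped = sp.integrate(f * (t - u), (u, 0, t))

print(sp.simplify(original - swapped))  # prints 0
```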
 * I've never learned about swapping order of integration, but I suppose that makes sense and simplifies the induction considerably. I'm still interested in the differentiation question though. — Anonymous Dissident  Talk 00:14, 18 December 2011 (UTC)


 * I can't help with your actual question, but you may be interested in Fubini's theorem. Perhaps you already know about it - you just mentioned changing the order of integration, and this covers it. IBE (talk) 05:23, 18 December 2011 (UTC)
 * I'm not sure I do understand this idea about changing the order of integration. I tried to prove my idea of what it means using integration by parts, but failed. Maybe Rdbury could be more explicit about what it means in this context, or how to derive it from integration by parts. — Anonymous Dissident  Talk 11:15, 18 December 2011 (UTC)
 * I don't want to do an exposition on swapping the order of integration when it should be given in any calculus textbook, so I'll just give this link to a public-domain source. Quoting from Leibniz integral rule:
 * $$\frac{d}{d\alpha}\int_{a(\alpha)}^{b(\alpha)} f(x,\alpha)\,dx = \frac{d b(\alpha)}{d \alpha}\,f(b(\alpha),\alpha)-\frac{d a(\alpha)}{d \alpha}\,f(a(\alpha),\alpha)+ \int_{a(\alpha)}^{b(\alpha)}\frac{\partial}{\partial \alpha}\,f(x,\alpha)\,dx\,$$
 * which in your case gives
 * $$\frac{d}{dx}\int_0^x \frac{f(u)(x-u)^n}{n!} du=\frac{f(x)(x-x)^n}{n!}-0+\int_0^x \frac{f(u)(x-u)^{n-1}}{(n-1)!} du.$$
 * In other words, the formula you gave is valid for integrals from a constant to x whenever the integrand vanishes at the upper limit of integration. --RDBury (talk) 14:51, 18 December 2011 (UTC)
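The caveat about the boundary term can be illustrated with an integrand that does not vanish at the upper limit (a SymPy sketch; sin(x + u) is an arbitrary example). The Leibniz rule then needs the extra term g(x, x), and the naive "differentiate under the integral only" answer would be wrong:

```python
import sympy as sp

x, u = sp.symbols('x u')
g = sp.sin(x + u)  # arbitrary integrand with a nonzero boundary term g(x, x)

# Direct computation: integrate first, then differentiate in x.
direct = sp.diff(sp.integrate(g, (u, 0, x)), x)

# Leibniz rule: boundary term g(x, x) plus the integral of dg/dx.
leibniz = g.subs(u, x) + sp.integrate(sp.diff(g, x), (u, 0, x))

print(sp.simplify(direct - leibniz))  # prints 0
```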


 * The following may already be clear and obvious to you; anyway: the iterated integral on the RHS, as a function of $$x,$$ is an $$(n+1)$$-th antiderivative $$F(x)$$ of $$f$$, i.e. $$F^{(n+1)}(x)=f(x)$$ (let's assume $$f$$ continuous so the fundamental theorem of calculus applies smoothly). Two antiderivatives of a function defined on an interval differ by a constant, and for the same reason, two $$(n+1)$$-th antiderivatives differ by a polynomial of degree at most $$n.$$ This $$F(x)$$ is precisely the $$(n+1)$$-th antiderivative of $$f$$ with $$F(0)=F'(0)=\dots=F^{(n)}(0)=0$$, because we always started the integration from 0. In other words, the $$n$$-th Taylor polynomial of $$F$$ at $$0$$ is $$0$$. So $$F(x)$$ coincides with its $$n$$-th remainder $$R_n(x)$$, which is exactly what is given in integral form on the LHS. In conclusion, whether or not you use the Leibniz integral rule as shown by RDBury, a proof of your identity is essentially the proof of the integral remainder formula itself. For instance: fix $$x$$; then, as a function of $$u$$, $$F^{(n+1)}(u)(x-u)^{n}/n!$$ is exactly the derivative with respect to $$u$$ of the $$n$$-th Taylor polynomial of $$F$$ at $$x$$ centred at $$u$$ (by direct computation); the remainder formula then follows immediately from the FTC by integrating from $$0$$ to $$x$$. --pm a  09:37, 23 December 2011 (UTC)
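 * For the record, the "direct computation" mentioned above is a telescoping sum (a sketch, assuming $$F^{(n+1)}$$ continuous): differentiating the $$n$$-th Taylor polynomial of $$F$$ centred at $$u$$ and evaluated at $$x$$,
 * $$\frac{\partial}{\partial u}\sum_{k=0}^{n}\frac{F^{(k)}(u)}{k!}(x-u)^k = \sum_{k=0}^{n}\frac{F^{(k+1)}(u)}{k!}(x-u)^k - \sum_{k=1}^{n}\frac{F^{(k)}(u)}{(k-1)!}(x-u)^{k-1} = \frac{F^{(n+1)}(u)}{n!}(x-u)^n,$$
 * since all terms cancel in pairs except the top one. Integrating this in $$u$$ from $$0$$ to $$x$$ gives $$F(x)-\sum_{k=0}^{n}\frac{F^{(k)}(0)}{k!}x^k$$ on the left and $$\int_0^x \frac{F^{(n+1)}(u)(x-u)^n}{n!}\,du$$ on the right, which is exactly the integral remainder formula.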