Talk:Differentiation under the integral sign

Pretty sure second step of example 2 is wrong. —Preceding unsigned comment added by 129.110.241.33 (talk) 23:00, 9 April 2010 (UTC)

Article merged: See old talk-page here

Beginning Quote
Is there really a point to having the Feynman quote at the top of the page? If this were not a reference work but a more literary medium (like a math book), it would be acceptable, but as it stands, I believe it is horribly out of place.


 * Not only that, but if it's the same quotation still in the article I don't think it belongs at all anywhere in the article, except perhaps as a footnote. For example:
 * This technique is not widely taught, and there is a long-winded anecdote that can be told that kind of relates[1].
 * —DIV (138.194.11.244 (talk) 07:50, 6 January 2012 (UTC))

Some other concern
Consider the function defined by $$ f(x,t) = 0$$ if $$ t = 0$$ and by
 * $$ f(x,t) = \frac{1}{\sqrt{t}} \sin \frac{x}{\sqrt{t}} $$

if $$ 0< t \le 1$$. This is integrable with respect to $$ t$$ for $$ t \in [0,1]$$ so the function
 * $$ F(x) = \int_0^1 f(x,t) dt $$

is at least defined. But $$ \frac{\partial f}{\partial x}(x,t) = \frac{1}{t} \cos \frac{x}{\sqrt{t}} $$ isn't integrable, so one of the integrals appearing in the result is undefined. And in fact $$F$$ is not differentiable.

So at the least we must add the hypothesis that $$F$$ be differentiable. But I doubt this will be enough to make the result true. If we allow t to vary over an infinite interval then there is the following counterexample. Let
 * $$ f(x,t) = x^3 \exp (-x^2t) $$

for all $$ x$$ and for $$ t \in [0,\infty)$$. Then
 * $$ F(x) = \int_0^\infty f(x,t) dt = x$$,

so $$ F$$ is differentiable, with $$ F'(0) = 1$$. But
 * $$ \int_0^\infty \frac{\partial f}{\partial x}(0,t)  dt = \int_0^\infty 0 dt = 0 $$

so the derivative of $$F$$ at $$0$$ is not given by differentiating under the integral sign.

88.105.188.214 05:23, 1 December 2006 (UTC)
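The second counterexample above is easy to confirm numerically. Below is a sketch in Python, using a hand-rolled composite Simpson's rule (no particular library assumed) to check that $$F(x) = \int_0^\infty x^3 e^{-x^2 t}\,dt = x$$, while the integral of $$\frac{\partial f}{\partial x}(0,t) = 0$$ vanishes even though $$F'(0) = 1$$:

```python
import math

def simpson(g, a, b, n=20000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def f(x, t):
    return x**3 * math.exp(-x**2 * t)

# F(x) = \int_0^\infty x^3 e^{-x^2 t} dt = x  (truncate the tail at T)
for x in (0.5, 1.0, 2.0):
    T = 80.0 / x**2          # e^{-x^2 T} is negligible beyond this
    F = simpson(lambda t: f(x, t), 0.0, T)
    assert abs(F - x) < 1e-6

# But df/dx(x,t) = (3x^2 - 2x^4 t) e^{-x^2 t} is identically 0 at x = 0,
# so integrating the derivative gives 0, while F'(0) = lim F(h)/h = 1.
dfdx_at_0 = lambda t: 0.0
print("integral of df/dx at x=0:", simpson(dfdx_at_0, 0.0, 100.0))
```

So the hypothesis of an integrable dominating function really is doing work here; mere existence of both sides is not the issue.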

Does everything still hold if x is a function of t?
This article treats x and t as two independent variables, so that together they define a plane. Does everything still hold if x is a function of t? —Preceding unsigned comment added by 69.134.241.125 (talk) 09:56, 8 April 2010 (UTC)

No. The fact that t and x are independent is essential. - July 4, 2012 — Preceding unsigned comment added by 65.254.110.193 (talk) 02:17, 5 July 2012 (UTC)

need some conditions
I agree with the previous anonymous comment that we need some conditions, other than mere differentiability. Here's a really, really simple example: let $$f \equiv 1$$, then $$ \frac{d}{dt} \int_{-\infty}^{\infty} f dx = \frac{d}{dt} \infty = ?? $$

while

$$\int_{-\infty}^{\infty}\frac{d}{dt}f dx = \int_{-\infty}^{\infty} 0 \,dx = 0 $$

so we need the integral to converge. Anyone know the exact necessary conditions? --Lavaka 20:25, 9 May 2007 (UTC)


 * http://planetmath.org/encyclopedia/DifferentiationUnderIntegralSign.html lists several versions of the theorem, including ones where the compact interval $$[a(x),b(x)]$$ is replaced by a measure space $$\Omega$$, of which $$(-\infty, \infty)$$ is an example. However, at least Theorem 2 and Theorem 3 require, in your example, that $$\int_{-\infty}^{\infty} f\,dx$$ converge, so I don't know whether you are happy with these theorems.
 * Or one could consider $$a$$ and $$b$$ as further parameters. In that case one has to decide whether $$\int_a^b f\,dx$$ converges to $$\int_{-\infty}^\infty f\,dx$$ separately in $$a$$ and $$b$$, in a $$C^1$$ sense locally around $$t$$, doesn't one? -- JanCK (talk) 14:54, 18 March 2008 (UTC)

mean value theorem error?
I think this might be an error. Shouldn't this text:
 * "A form of the mean value theorem, $$\int_a^b f(x)\,dx=(b-a)f'(\xi)\,$$, where $$a<\xi<b\,$$,"
actually be:
 * "A form of the mean value theorem, $$\int_a^b f(x)\,dx=(b-a)f(\xi)\,$$, where $$a<\xi<b\,$$," ?
ssepp(talk) 14:10, 1 October 2007 (UTC)
 * I made the change. ssepp(talk) 18:31, 4 October 2007 (UTC)

Simple example earlier in the article
I would suggest that the example of integration with limits not depending on 'x'(for fixed real numbers) be treated in the beginning. The present theorem is too general I believe for the beginning section. Perhaps the article can describe the different formulations in increasing generality? Ulner 20:28, 10 November 2007 (UTC)

Lightning Bolt or Ninjas Missing for the Integral Step in Example 1?
I have a degree in physics with a minor in math, and I'm trying to learn this technique. I'm trying to do Example 1, and I follow it up to the integral sign, where it suddenly integrates in one step. I'm not seeing how that step follows; maybe I'm just slow, but I really don't get how it actually worked. Did the person use a table? Mathematica? Is there some technique being used that I'm not seeing? Is there a lightning bolt or a ninja sneaking in and doing the work to finish the integral? —Preceding unsigned comment added by 24.117.47.160 (talk) 22:43, 26 December 2007 (UTC)

If you are talking about what I think you are talking about, you are stuck on the part where the differentiation and integration take place on basically one line. I can confirm indeed that the derivative of

$$ \frac{\alpha}{x^2+\alpha^2} $$

is

$$ \frac{x^2 - \alpha^2}{(x^2 + \alpha^2)^2}. $$

The differentiation can be done with the usual product rule and some simplification. The integral of it can be done by the method of partial fractions, namely that $$ \frac{x^2 - \alpha^2}{(x^2 + \alpha^2)^2} = \frac{1}{\alpha^2 + x^2} - \frac{2\alpha^2}{(x^2+\alpha^2)^2}, $$ which can be integrated with the substitution $$u = x/\alpha $$. Then make the additional substitution of an arctangent function... but actually now that I do the integration, I have run into the same problem! I have no idea how the integration is done in one step! When I do it, I get several arctangent functions and the polynomial (albeit with an absolute value in the denominator...) that the article states. Perhaps someone else can shed light on this subject. Dchristle (talk) 23:46, 22 February 2008 (UTC)

Trig substitution: $$x = \alpha \tan\theta$$, so you get $$\frac{1}{\alpha}\int (\sin^2\theta - \cos^2\theta)\,d\theta$$; then use $$\sin^2\theta = 1 - \cos^2\theta$$, and when you change back to $$x$$ the answer pops out. 77.102.173.23 (talk) 13:42, 8 November 2008 (UTC)
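If it helps anyone still stuck on this step: assuming the integral in question is $$\int \frac{x^2-\alpha^2}{(x^2+\alpha^2)^2}\,dx$$, the one-step integration comes down to the fact that $$-\frac{x}{x^2+\alpha^2}$$ is an antiderivative, which a finite-difference spot check confirms (a Python sketch):

```python
import math

def integrand(x, a):
    return (x**2 - a**2) / (x**2 + a**2)**2

def antiderivative(x, a):
    return -x / (x**2 + a**2)

def ddx(g, x, h=1e-6):
    """Central-difference approximation to g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# d/dx of the claimed antiderivative should reproduce the integrand
for a in (0.5, 1.0, 3.0):
    for x in (-2.0, 0.3, 1.7):
        approx = ddx(lambda u: antiderivative(u, a), x)
        assert abs(approx - integrand(x, a)) < 1e-6
```

In particular, the definite integral over $$[0,R]$$ is just $$-\frac{R}{R^2+\alpha^2}$$, which tends to $$0$$ as $$R\to\infty$$, so no arctangent terms survive.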

Lead Definition
I am not familiar with the subject, but I was trying to make a careful read of the definition and I don't understand the constant use of $$x_0\leq x\leq x_1$$. I suspect that is supposed to be $$a(x)\leq x \leq b(x)$$? Is that right? 66.216.172.3 (talk) 15:22, 25 March 2008 (UTC)

Derivation - integral parameters
I just edited this section to include the dx after the integral signs in the formulae. I'm pretty sure these are correct and that the statements were meaningless before - but just wanted to check. —Preceding unsigned comment added by A good brew (talk • contribs) 09:44, 21 July 2008 (UTC)

Examples don't use the full power of the technique.
I might not know what I'm talking about here, but it occurred to me that in none of the examples are the limits on the integrals non-constant functions. Does someone want to add an example where you have non-trivial a(x) and b(x) as the integration limits? Or does this rarely come up in practice? Ceresly (talk) 23:00, 5 September 2009 (UTC)

I just added Example 6, which finally uses variable limits (though I used a simple case where the function f(x,t) is independent of x, so it disappears when the derivative goes in).

Also, a math friend keeps complaining that the derivation is simply not rigorous. The use of deltas without epsilons in the early proof of the derivative of the variable limits is not rigorous, and neither is the continual switching between dt and dx. I intend to edit it out and set it straight some time soon, but I need to make sure my version is rigorous in itself, and I am stuck at the derivative of the variable limits (chicken and egg again). —Preceding unsigned comment added by 144.82.194.2 (talk) 02:36, 25 March 2010 (UTC)

First equation not as explicit as could be
The first equation $$F(x)=...$$ could be changed to $$F(x,a(x),b(x))=...$$ or just $$F(x,a,b)=...$$ to make the dependence on a and b more explicit in my opinion. Do you agree/disagree?

Great article, but lede needs definition
Great article, but the lede does not actually say which part of the initial description is "differentiation under the integral sign" (or how the technique gets its name); it is mostly obvious from context, but on a close reading the term is only described as a useful technique. It could use a final recap at the end of the paragraph saying "This set of steps [? or perhaps better: the technique of expanding out partial derivatives with respect to b and to a] is called differentiation under the integral sign". Thanks! -- Michael Scott Cuthbert (talk) 16:11, 8 July 2010 (UTC)

Worried about the rigour of this article
Firstly, nowhere (that I could find) in this article are the conditions under which you can or cannot differentiate under the integral sign actually written down. The article talks about conditions under which you can differentiate under the integral sign, but I don't see those conditions stated. As far as I know there are very specific hypotheses which must all be verified before you can do it (cf. Section 8.2 in Lang (1993) "Real and functional analysis," Graduate Texts in Mathematics, volume 142, Third Edition, Springer-Verlag, New York): the function must be measurable, the partial derivative must exist, and there must also exist an L^1 integrable majorant, which in general can be hard to find. HowiAuckland (talk) 02:01, 6 September 2010 (UTC)
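For reference, a standard measure-theoretic statement of the kind described above (sketched from memory; the hypotheses are those of the dominated convergence theorem, cf. Lang's Section 8.2) runs roughly as follows:

```latex
% Differentiation under the integral sign, dominated-convergence version.
% Let (\Omega, \mu) be a measure space, X \subseteq \mathbb{R} open, and
% f : X \times \Omega \to \mathbb{R}. Suppose:
%   (i)   f(x, \cdot) \in L^1(\mu) for each x \in X;
%   (ii)  \partial f / \partial x (x, \omega) exists for a.e. \omega;
%   (iii) there is g \in L^1(\mu) with
%         |\partial f / \partial x (x, \omega)| \le g(\omega)
%         for all x \in X and a.e. \omega.
% Then F(x) = \int_\Omega f(x, \omega) \, d\mu(\omega) is differentiable on X and
F'(x) \;=\; \int_\Omega \frac{\partial f}{\partial x}(x,\omega)\,d\mu(\omega).
```

Note that condition (iii), the integrable majorant, is exactly what fails in the counterexamples discussed earlier on this page.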

Problems with the Problems
For fun, I solved all the extra problems at the end of this article, except one.

I used the suggested substitution on

 * $$\int_0^\infty\;e^{-\left(x^2+\frac{\alpha^2}{x^2}\right)}\;dx,\,$$

and it did not simplify the integral. In fact, with a further substitution after this step, the integral can be returned to its original form, as though nothing had changed.

I'd be interested to know what I'm missing. Otherwise, this problem should be removed from the list.

Paul B Rimmer (talk) 18:46, 10 November 2010 (UTC)

I, too, used the suggested substitution, and also got stuck. The problem should be removed (or perhaps corrected? a hint added?). -7/4/2012
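For what it's worth, the closed form this problem is presumably after is $$\int_0^\infty e^{-(x^2+\alpha^2/x^2)}\,dx = \frac{\sqrt{\pi}}{2}e^{-2\alpha}$$ for $$\alpha>0$$: differentiating under the integral sign and substituting $$u=\alpha/x$$ in the derivative integral gives $$I'(\alpha)=-2I(\alpha)$$, and $$I(0)=\frac{\sqrt{\pi}}{2}$$. A quick numeric check (a Python sketch, truncating the integral at a finite upper limit):

```python
import math

def simpson(g, a, b, n=40000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def I(alpha, upper=10.0):
    # e^{-alpha^2/x^2} -> 0 as x -> 0+, so the lower endpoint is harmless,
    # and the integrand is below e^{-100} past x = 10.
    return simpson(lambda x: math.exp(-(x**2 + alpha**2 / x**2)),
                   1e-9, upper)

for alpha in (0.25, 0.5, 1.0):
    expected = math.sqrt(math.pi) / 2 * math.exp(-2 * alpha)
    assert abs(I(alpha) - expected) < 1e-6
```

So the problem itself is sound; it is only the hint about which substitution to use that seems to be wrong.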

I also found a problem with Example 3. I cannot go from
 * $$\frac{-2\alpha\cos(x)+2\alpha}{1-2\alpha\cos(x)+\alpha^2}$$

to
 * $$\frac{1}{\alpha}(1-\frac{(1-\alpha)^2}{1-2\alpha\cos(x)+\alpha^2})$$

I am getting the following formula instead:
 * $$\frac{1}{\alpha}\left(1+\frac{1-\alpha^2}{1-2\alpha\cos(x)+\alpha^2}\right)$$ 69.172.84.139 (talk) 06:59, 9 December 2013 (UTC)
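Possibly useful for whoever cleans this up: assuming Example 3's integrand is $$\ln(1-2\alpha\cos(x)+\alpha^2)$$, its $$\alpha$$-derivative is $$\frac{2\alpha-2\cos(x)}{1-2\alpha\cos(x)+\alpha^2}$$ (no $$\alpha$$ on the cosine in the numerator), and with that numerator the partial-fraction form with $$1-\alpha^2$$ and a minus sign does hold exactly. A numeric spot check (Python sketch):

```python
import math

def D(x, a):
    """The shared denominator 1 - 2*alpha*cos(x) + alpha^2."""
    return 1 - 2 * a * math.cos(x) + a**2

def lhs(x, a):
    # alpha-derivative of ln(1 - 2*alpha*cos(x) + alpha^2)
    return (2 * a - 2 * math.cos(x)) / D(x, a)

def rhs(x, a):
    # the claimed partial-fraction form: (1/alpha) * (1 - (1 - alpha^2)/D)
    return (1 / a) * (1 - (1 - a**2) / D(x, a))

for a in (0.3, 0.7, 1.5):
    for x in (0.1, 1.0, 2.5):
        assert abs(lhs(x, a) - rhs(x, a)) < 1e-9
```

Algebraically, $$\frac{1}{\alpha}\cdot\frac{D-(1-\alpha^2)}{D} = \frac{2\alpha^2-2\alpha\cos(x)}{\alpha D} = \frac{2\alpha-2\cos(x)}{D}$$, so both the extra $$\alpha$$ in the numerator quoted above and the $$(1-\alpha)^2$$ in the article look like typos.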

Stop misusing the chain rule
I am correcting the sign on the $$\frac{\partial F}{\partial a}$$ term of the formula near the beginning. It should be a plus, due to the chain rule. The final result picks up a minus sign on calculating the actual value of this derivative (note the "Derivation of the principle of blah blah..." section). — Preceding unsigned comment added by 84.38.9.75 (talk) 04:43, 14 July 2011 (UTC)


 * I disagree. The minus sign comes from evaluating F at the lower limit of the integral.  I have changed it back. Garethvaughan (talk) 11:11, 16 July 2011 (UTC)


 * Right. I've thought about it and I see I'm wrong, but I think there is some confusion regarding notation. As it is currently defined, F(x) is a function solely of x: not of a or b. Moreover, the current definition is contrary to the common use of F as the antiderivative (a convention that IS also used in this article - see the derivation section). I think clarifying this confusion would complicate things, and anyway, what the equation in question provides is already adequately provided in the derivation section. So I've removed the first line of the equation. Garethvaughan (talk) 02:24, 17 July 2011 (UTC)

On Higher Dimensions
The Reynolds theorem given here is correct for a material volume, which has Lagrangian boundaries. It is not correct for a volume with arbitrary boundary velocity. What Reynolds actually wrote had the boundary velocity in the surface integral. See the talk page on Reynolds transport theorem. Garethvaughan (talk) 02:32, 17 July 2011 (UTC)

Inconsistent notation for the total derivative
The prime (Lagrange) notation, i.e. b'(x) etc., is inconsistent with the notation used in the rest of the article, as is the dot (Newton) notation for the time derivative. At least in the case of the dot notation, its use was defined in the discussion.

I would suggest updating the page to make the notation more consistent with the rest of the article, or at the very least adding "where b'(x) is the total derivative of b with respect to x" to the definition.

I know use of the prime notation is standard, but for a self-contained article, such additions are a nicety.

http://en.wikipedia.org/wiki/Derivative

--Tardyon (talk) 16:57, 24 July 2011 (UTC)

Merge with Leibniz integral rule?
There appears to be a lot of duplication with that article.Dfeuer (talk) 18:18, 7 October 2012 (UTC)


 * Totally agree. Both pages discuss the same exact topic. — Preceding unsigned comment added by 99.241.86.114 (talk) 13:26, 1 January 2013 (UTC)


 * I agree. The only theorem discussed here is a generalised version of the Leibniz integral rule, which doesn't contain variable limits, yet the article proves the theorem with variable limits. thedoctar (talk) 07:16, 21 May 2013 (UTC)


 * As far as I am concerned, whoever has the initiative should go ahead and do it. We seem to have consensus here. KlappCK (talk) 21:44, 14 March 2014 (UTC)

Incomplete main result
The main result (theorem) at the beginning is incomplete. — Preceding unsigned comment added by Prof. Globi (talk • contribs) 13:53, 26 December 2014 (UTC)