Talk:Taylor's theorem/Archive 1

Proof
''Three expressions for R are available. Two are shown below''


 * It is a bit odd to say nothing about the 3rd. - Patrick 00:37 Mar 24, 2003 (UTC)

There ought to be a proof of this theorem.



Note that f(x) = e^(-1/x^2) (with f(0) = 0) is m times continuously differentiable everywhere for every m > 0, since its derivatives are made up of terms of the form x^(-n) e^(-1/x^2), all of which tend to 0 as x tends to 0. So all the derivatives vanish at 0. Taylor's theorem holds, but an infinite expansion around 0, similar to that of, say, e^x, does not exist. This could be worth mentioning. (It is a classic example of non-convergence of the Taylor series: the sequence of Taylor polynomials, without the remainder term, does not always converge to the function.)

70.113.52.79 18:04, 14 November 2006 (UTC)
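To make the example above concrete, here is a small numerical sketch (my own, not from the article): all derivatives of this function at 0 come out as zero, so every Taylor polynomial about 0 is identically zero, yet the function itself is not.

```python
import math

def f(x):
    """The classic flat function: f(x) = exp(-1/x^2), with f(0) = 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

def deriv(g, x, n, h=1e-2):
    """Crude n-th central finite difference of g at x (illustration only)."""
    if n == 0:
        return g(x)
    return (deriv(g, x + h, n - 1, h) - deriv(g, x - h, n - 1, h)) / (2 * h)

# Every derivative at 0 is (numerically) 0, so every Taylor polynomial
# around 0 is identically zero, yet f(1) = exp(-1) is clearly not 0.
for n in range(5):
    print(n, deriv(f, 0.0, n))
print(f(1.0))
```

So the Taylor series about 0 converges everywhere, just not to f, which is the point of the example.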

Actually, the entry 'Taylor series' has a complete explanation. A link to that entry should suffice.

Sustik 20:47, 14 November 2006 (UTC)

Removed text
I removed the following text from the article, which was added on Feb 5, 2003 by an anonymous contributor.


 * Proof:


 * Assume that :$$f(x)$$ is a function that can be expressed in terms of a polynomial (it does not have to appear to be one). The n-th derivative of that function will have a constant term as well as other terms.  The "zeroth derivative" of the function (plain old :$$f(x)$$) has what we will call the "zeroth term" (term with the zeroth power of x) as its constant term.  The first derivative will have as a constant the coefficient of the first term times the power of the first term, namely, 1.  The second derivative will have as a constant the coefficient of the second term times the power of the second term times the power of the first term: coefficient * 2 * 1.  The next will be: coefficient * 3 * 2 * 1.  The general pattern is that the n-th derivative's constant term is equal to the n-th term's coefficient times n factorial.  Since a polynomial, and by extension, one of its derivatives, equals its constant term at x=0, we can say:

$$f^{(n)}(0) = a_n \, n!$$

$$\frac{f^{(n)}(0)}{n!} = a_n$$


 * So we now have a formula for determining the coefficient of any term for the polynomial version of :$$f(x)$$.  If you put these together, you get a polynomial approximation for the function.

I am not sure what this is supposed to prove, but it appears to be meant as a proof of Taylor's theorem. In that case, it does not seem quite right to me; in particular, the assumption in the first sentence ("f is a function that can be expressed in terms of a polynomial") is rather vague and appears to be just what needs to be proven. Hence, I took the liberty of replacing the above text with a new proof. -- Jitse Niesen 12:50, 20 Feb 2004 (UTC)
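That said, the coefficient formula at the end of the removed text, that the n-th derivative's constant term equals the n-th coefficient times n!, is at least easy to check for a genuine polynomial (a sketch of my own, not part of the removed text):

```python
import math

def nth_deriv_coeffs(coeffs, n):
    """Coefficients of the n-th derivative of the polynomial whose
    coefficient of x^k is coeffs[k]."""
    for _ in range(n):
        coeffs = [k * c for k, c in enumerate(coeffs)][1:]
    return coeffs

a = [5.0, -3.0, 2.0, 7.0, 0.5]   # an arbitrary example polynomial
for n in range(len(a)):
    d = nth_deriv_coeffs(a, n)
    value_at_0 = d[0] if d else 0.0   # constant term = p^(n)(0)
    assert value_at_0 == a[n] * math.factorial(n)
print("f^(n)(0) = a_n * n! holds for the example polynomial")
```

Of course this only confirms the formula for functions that really are polynomials, which is exactly the gap in the removed "proof".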
 * Jitse, your proof gives no basis for the fact that the remainder term gets smaller as n increases, nor any idea of the interval of convergence of the series. Therefore I will provide a different proof, taken from "Complex Variables and Applications" by Ruel V. Churchill of the University of Michigan. It involves complex analysis, however, and if you can provide a basis for the convergence of the series in your proof, I will put it back. Scythe33 22:03, 19 September 2005 (UTC)


 * The theorem, as formulated in the article, does not claim that the remainder term gets smaller as n increases; in fact, this only happens if the function is analytic. The article on power series talks about the interval of convergence, and the article holomorphic functions are analytic proves that the series converges.
 * In my opinion, the convergence of a Taylor series is an important point, which could be explained more fully in the article (as the overall organization should be improved), and you're very welcome to do so. Proving that the series converges does not seem necessary for the reasons I gave in the previous paragraph. -- Jitse Niesen (talk) 13:33, 27 September 2005 (UTC)

I appreciate the proof of Taylor's theorem in one variable; it is very good. The explanation is short and clear. Could we cite the original source on the main page? ("Complex Variables and Applications" by Ruel V. Churchill) Yoderj 20:17, 8 February 2006 (UTC)


 * The proof was not taken from that book. It is a completely standard proof which will be in many text books. -- Jitse Niesen (talk) 20:48, 8 February 2006 (UTC)

What is ξ?
In the Lagrange form of the remainder term, is ξ meant to be any number between a and x or is the theorem supposed to state that there exists such a ξ? I would guess the latter (because the proof uses the Mean Value Theorem), but the article doesn't make it totally clear. Eric119 15:51, 23 Sep 2004 (UTC)


 * Thanks for picking this up. I rewrote that part to clarify (hopefully). -- Jitse Niesen 16:00, 24 Sep 2004 (UTC)

A few suggestions
I have a few, mostly stylistic, concerns about the article. I think it should be noted that, to state the Cauchy form of the remainder term, the (n+1)-th derivative of f (the function in the hypothesis of the theorem) must be integrable. I have a similar concern for the proof of the theorem in one variable: if proving the integral version of the theorem, in the inductive step we must assume that the theorem holds for a function whose first n derivatives are continuous and whose (n+1)-th derivative is integrable (this is the "n" case). Then, to prove that the theorem holds for n+1, we assume that a new function f has n+1 derivatives, all of which are continuous, and that its (n+2)-th derivative is integrable. Since f's (n+1)-th derivative is continuous, it is integrable, and then we may apply the inductive hypothesis to write an expression for f, with the remainder term written as an integral of the (n+1)-th derivative. Then, using integration by parts, we can make a substitution to complete the induction. T.Tyrrell 05:13, 3 May 2006 (UTC)

Vector Exponents
What exactly is meant by $$(x-a)^\alpha$$ in the multi-variable case? I didn't think you could take powers of $$R^n$$ vectors. Maybe I just don't understand the notation. At the very least it seems a little ambiguous.


 * This is multi-index notation. I moved the link to this article closer to the formula; hopefully it's clearer now. -- Jitse Niesen (talk) 13:26, 5 October 2006 (UTC)

Lagrange error bound
I noticed that "Lagrange error bound" redirects here but is not specifically mentioned. I suggest that someone make the connection somewhere. —Preceding unsigned comment added by 132.170.52.13 (talk • contribs)

minor point
Small point (and my experience is limited), but in the proof, when xf'(x) is expanded, I think it is really expanding x(f'(x)), so the x should stay outside the integral symbol. So instead of what's given (see the second term of the second line):


 * $$ \begin{align}

f(x) &= f(a)+xf'(x)-af'(a)-\int_a^x \, tf''(t) \, dt \\ &= f(a)+\int_a^x \, xf''(t) \,dt+xf'(a)-af'(a)-\int_a^x \, tf''(t) \, dt \\ &= f(a)+(x-a)f'(a)+\int_a^x \, (x-t)f''(t) \, dt. \end{align} $$

maybe this would be clearer:
 * $$ \begin{align}

f(x) &= f(a)+xf'(x)-af'(a)-\int_a^x \, tf''(t) \, dt \\ &= f(a)+x\int_a^x \, f''(t) \,dt+xf'(a)-af'(a)-\int_a^x \, tf''(t) \, dt \\ &= f(a)+(x-a)f'(a)+\int_a^x \, (x-t)f''(t) \, dt. \end{align} $$ Phillipshowardhamilton 18:49, 4 February 2007 (UTC)
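For what it's worth, the identity both versions end with can be checked numerically; this is just a sanity sketch of my own (the midpoint rule and the choice f = sin are arbitrary):

```python
import math

def integral(g, a, b, steps=100000):
    """Simple midpoint rule; accurate enough for this illustration."""
    h = (b - a) / steps
    return h * sum(g(a + (i + 0.5) * h) for i in range(steps))

# Check: f(x) = f(a) + (x - a) f'(a) + \int_a^x (x - t) f''(t) dt
f, fp, fpp = math.sin, math.cos, lambda t: -math.sin(t)
a, x = 0.2, 1.3

lhs = f(x)
rhs = f(a) + (x - a) * fp(a) + integral(lambda t: (x - t) * fpp(t), a, x)
print(lhs, rhs)  # the two sides agree to high precision
```

The two sides match whichever way the intermediate algebra is grouped, which is the point: x is a constant with respect to the integration variable t.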

About the illustration
There seems to be a slight problem with the positions of the Taylor polynomial and the exponential: for x>0, we have exp(x) > P_n(x), but the graphs show the opposite. —The preceding unsigned comment was added by 86.201.165.111 (talk) 13:10, 14 April 2007 (UTC).


 * Thanks for pointing out the problem. I asked its author to fix it and commented it out until he does so. JRSpriggs 08:28, 15 April 2007 (UTC)
 * Fixed. enochlau (talk) 10:39, 15 April 2007 (UTC)
 * Thank you for your quick response. JRSpriggs 11:36, 15 April 2007 (UTC)

Form of the Remainder
The Cauchy remainder is of the form $$R_n(x) = \frac{f^{(n+1)}(\xi)}{n!}(x-a)(x-\xi)^n$$ (http://mathworld.wolfram.com/CauchyRemainder.html)

The one described here as the Cauchy form is the integral form of the remainder. I think it needs to be corrected; please check.

71.132.107.150 02:15, 30 April 2007 (UTC) Andrey


 * This article calls the integral form of the remainder "Cauchy" and the intermediate point form "Lagrange", while MathWorld has it the other way around. I checked two reference books, but neither book gives names to the forms. So I have not changed this article, yet. JRSpriggs 10:03, 30 April 2007 (UTC)


 * The Lagrange form is correct. Source: Kline, Morris (1998) Calculus, Dover, p. 639. (A much more definitive source than MathWorld, in my opinion.)
 * The particular form of R given in (16) was derived in 1797 by Joseph-Louis Lagrange...
 * He doesn't use what we are calling the Cauchy form, but I'll check Apostol's text later for confirmation. Silly rabbit 13:25, 30 April 2007 (UTC)


 * Apostol agrees with Wikipedia as far as the Lagrange form of the remainder (Tom Apostol, Calculus Vol 1, p. 283.)  However, his version of the Cauchy form is (p. 284):
 * $$R_n(x) = \frac{f^{(n+1)}(\xi)}{n!}(x-\xi)^n(x-a)$$
 * where ξ is some number between a and x (see article). So Andrey would appear to be correct on this point. That said, the integral form given is, in a certain sense, more primitive than either of the others, since each follows from it by the mean value theorem for Riemann-Stieltjes integrals (which is not precisely the way Apostol does it, but it is equivalent).
 * I propose that we call the integral form of the remainder the integral form of the remainder. The Cauchy form, as Andrey has correctly pointed out, shall be called the Cauchy form.  The Lagrange form will stay as it is.  Silly rabbit 22:57, 30 April 2007 (UTC)
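Since the names keep getting confused, here is a quick numerical illustration (my own sketch, using f = exp, a = 0, x = 1, n = 2) of how the forms relate: the integral form reproduces the true remainder, and solving the Lagrange form for ξ gives a point strictly between a and x.

```python
import math

f = math.exp   # every derivative of exp equals exp, which keeps this short
a, x, n = 0.0, 1.0, 2

# Taylor polynomial of degree n about a (all derivatives at a equal f(a)).
p_n = sum(f(a) * (x - a) ** k / math.factorial(k) for k in range(n + 1))
true_R = f(x) - p_n                      # e - 2.5

def integral(g, lo, hi, steps=200000):
    h = (hi - lo) / steps
    return h * sum(g(lo + (i + 0.5) * h) for i in range(steps))

# Integral form: R_n = \int_a^x f^{(n+1)}(t) (x - t)^n / n! dt
R_integral = integral(lambda t: f(t) * (x - t) ** n / math.factorial(n), a, x)

# Lagrange form: R_n = f^{(n+1)}(xi) (x - a)^{n+1} / (n+1)! for some xi,
# solved here for xi since f^{(n+1)} = exp is invertible.
xi = math.log(true_R * math.factorial(n + 1) / (x - a) ** (n + 1))
print(true_R, R_integral, xi)
```

Whatever we end up naming the forms, they all describe the same quantity, just packaged differently.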

Taylor's theorem approximation
I suggest the following formula approximation also be incorporated into the article;


 * $$p_n(x) = f(a)+\frac{f'(a)}{1!}(x-a)^1+\frac{f''(a)}{2!}(x-a)^2+\cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n$$

where

$$ f(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f^{(2)}(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n + R_n $$

and

$$ R_n = f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x-a)^{n+1} $$

--Zven 00:30, 14 July 2007 (UTC)
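To illustrate the suggested $$p_n(x)$$ notation (a sketch of my own, taking f = exp and a = 0, where every derivative at a equals exp(a)): the error f(x) - p_n(x) is positive, stays within the Lagrange bound e^x x^(n+1)/(n+1)! for x > 0 (since ξ < x and exp is increasing), and shrinks as n grows.

```python
import math

def p_n(x, n, a=0.0):
    """Taylor polynomial of exp about a; all derivatives of exp at a equal exp(a)."""
    return sum(math.exp(a) * (x - a) ** k / math.factorial(k)
               for k in range(n + 1))

x = 1.5
for n in range(1, 8):
    err = math.exp(x) - p_n(x, n)                             # the true R_n
    bound = math.exp(x) * x ** (n + 1) / math.factorial(n + 1)  # Lagrange bound
    assert 0 < err <= bound
print([round(math.exp(x) - p_n(x, n), 6) for n in range(1, 8)])
```

The distinction Zven is after is visible here: p_n(x) is the computable approximation, while R_n = f(x) - p_n(x) is the thing the theorem constrains.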


 * It is already there except for the label $$p_n(x) \!$$ which would add nothing. JRSpriggs 03:18, 14 July 2007 (UTC)
 * It is more about clarity and the distinction between the Taylor series approximation $$ p_n(x) $$ and $$f(x) = p_n(x) + R_n(x)$$, which includes the higher-order error terms. The article starts off by stating it as an approximation, and gives an example of the approximation of e^x to the n-th term. Looking at other internet sources (and three textbooks)
 * http://www.iwu.edu/~lstout/series/node3.html
 * http://www.maths.abdn.ac.uk/~igc/tch/ma2001/notes/node46.html
 * wolfram ← a poor definition although it does state Taylor's theorem (without the remainder term) was devised by Taylor in 1712 and published in 1715
 * http://planetmath.org/encyclopedia/TaylorsTheorem.html
 * http://www.math.hmc.edu/calculus/tutorials/taylors_thm/


 * there are certainly two ways of specifying it. I suggest that the approximation formula of the theorem goes into the first sentence; then the article can go into details about the error term $$R_n$$. --Zven 08:26, 14 July 2007 (UTC)


 * Well, I am sorry to have to disagree with you. The article is clear as it stands, but your proposed language would make it unclear. You make it sound like f is being defined by the formula. It is not. JRSpriggs 03:46, 15 July 2007 (UTC)


 * I just think that historically Taylor's theorem is an approximation, defined for example as in def 3.1 in http://www.iwu.edu/~lstout/series/node3.html for n+1 continuous derivatives. I was just suggesting that be included in the article somewhere --Zven 10:56, 15 July 2007 (UTC)


 * Umm... have you read the Wikipedia article on Taylor's theorem? Silly rabbit 11:13, 15 July 2007 (UTC)
 * They are just typos. I just thought the $$p_n(x)$$ notation was common and could be applicable in this article. --Zven 19:20, 15 July 2007 (UTC)
 * Actually, I think it might do some good. I notice that Taylor polynomial currently redirects here, and it is certainly worth talking about Taylor polynomials in more detail (even separately from their specific relationship to Taylor's theorem).  I'm not sure how to handle this.  Right now, I like the carefully formal approach of each version of the theorem.  The article might, however, benefit from an Introduction organized around the imprecise notion that "a function can be approximated by a Taylor polynomial on small enough intervals."  The formal details and statement can remain as they are, but for the casual reader at least there will be some meaningful (though not very useful) statements.  I need some further input though. Silly rabbit 19:55, 15 July 2007 (UTC)
 * I realise I didn't come across well, but that is exactly what I mean: there is no section on Taylor polynomials --Zven 07:51, 17 July 2007 (UTC)
 * I think there is definitely some value in the idea. However, it may be better to put it in the article on Taylor series. After all, the Taylor polynomials are partial sums of the series; furthermore, the article on Taylor series is more low level. -- Jitse Niesen (talk) 08:07, 23 July 2007 (UTC)
 * I have already been bounced from there after suggesting it on the talk page. It certainly is applicable to one of the two articles --Zven 01:51, 24 July 2007 (UTC)

(de-indenting) Sorry, I didn't know that. Looking at the two articles Taylor series and Taylor's theorem together, I still think that the Taylor polynomials fit better in the former article, for instance in the "Definition" section. The redirect Taylor polynomial should then be changed to point to Taylor series. Any objections? -- Jitse Niesen (talk) 12:27, 24 July 2007 (UTC)

Comments
To my mind this article suffers from a serious stylistic flaw: it is supposed to be about a theorem ("Taylor's Theorem"), but nowhere in the article is there a theorem stated: i.e., a crisp mathematical statement which clearly enunciates that certain hypotheses entail a certain conclusion. (This occurs despite the fact that all the content is here.) As a result, I am already confused by the first two introductory sentences of the article: exactly what is the statement that Taylor stated in 1712 but was anticipated by Gregory? Is it:

(i) For every n, there is a unique polynomial of degree at most n which matches the value of f and its first n derivatives at a, namely [the Taylor polynomial]?

(ii) As above, plus: the difference between a function and its Taylor approximation is O((x-a)^(n+1)) as x -> a, provided f is n+1 times differentiable at a?

(iii) As above, plus: the remainder R_{n+1}(x) is of the form [one of the forms discussed]?

I am aware that all of these statements are sometimes loosely called "Taylor's Theorem" by various people. But an encyclopedia article needs to be more precise, particularly when discussing the history of what was proved.

To me it would seem preferable if Taylor's theorem were said to be (iii) with the Lagrange form of the remainder. (E.g. Planetmath does this.) The alternate forms of the remainder should still appear as variants (ideally with some commentary on why there is more than one form and what the history is concerning this...)

In the same vein, I don't like the way the discussion begins with "A simple example of Taylor's theorem is..." An example of a theorem should be a particular case of the theorem, but one cannot tell from the discussion what the theorem is supposed to be: exactly what statement is meant by the squiggly equality of e^x and its Taylor polynomial? Plclark (talk) 10:22, 20 November 2007 (UTC)Plclark

What IS the theorem
Can someone add the statement of the theorem, which, unbelievably, is lacking.


 * It is the statement in the article immediately following the text:
 * "The precise statement of the theorem is as follows:".
 * This is the second sentence of Section 1. --Lambiam 11:54, 5 March 2008 (UTC)


 * I agree with the (it seems) anonymous person who posed the question. There is no statement of any theorem without saying something precise about the remainder function Rn; without it, one can just define that function by the given equation. So every statement restricting the remainder leads to a different version of Taylor's theorem. Marc van Leeuwen (talk) 06:44, 15 March 2008 (UTC)

There is another difficulty I have with the given statement: it implicitly fixes x by putting it in the bounds of the interval given in its hypothesis; therefore the theorem states only something about one x at a time. While it is possible to do so, it seems to me contrary to the spirit of the theorem. One wishes to give an approximation to the function f on some interval, and show that $$R_n$$ is a function constrained in some way. (This does not exclude fixing x in a proof of the theorem, and of course the statement about $$R_n(x)$$ will refer to the specific value of x.) I would say the statement should go like "Let f be a function defined on some interval… then for all x in this interval…". Marc van Leeuwen (talk) 07:09, 15 March 2008 (UTC)

Response and proposal
Back when I put major work into this article, I had wanted to change it as well for much the same reason. However, I think the current version does have an advantage over most statements in that the point a around which one is doing the Taylor expansion need not be an interior point of the domain. Another advantage of the present version is that it emphasizes the role of the error term (rather than the precise domain of definition). This is, I believe, more how Taylor's theorem is invoked in its most common applications to numerical analysis. Ultimately, in my own mind the advantages and disadvantages of the approach in the article balanced, and so I didn't think it worthwhile to change it to something else. Now that there are more minds on the task, allow me to propose the following modification for discussion:

Please include responses here. Cheers, silly rabbit  (  talk  ) 14:37, 17 March 2008 (UTC)


 * Several unrelated issues:
 * We should separate out the "simple example" to a little section on its own.
 * If we choose not to make x an endpoint of the interval, is it perhaps possible also not to make the point around which the series is expanded (a above) necessarily an endpoint of the interval ([a, b])? That would correspond even better to how the theorem is often used. Renaming the interval [u, v], I'd like to see u ≤ a ≤ v. We must avoid straying too much from published versions, though.
 * The boxed formulation does not address the complaint that a precise statement of the theorem requires a precise statement about the remainder term. Perhaps we can do something along the following lines, given here in telegram style:
 * The statement of the theorem involves a remainder term R denoting the difference. There are several versions of the theorem, which differ in the form of R. For a given remainder term R, the statement is as follows: ... . Here is a list of forms for R: ...
 * It should be made clear that for fixed n and fixed interval, ξ in the various forms still also depends on x.
 * --Lambiam 07:59, 18 March 2008 (UTC)

Lack of precision
"Taylor's theorem"

- In calculus, Taylor's theorem gives a sequence of increasingly better approximations of a differentiable function near a given point by polynomials (the Taylor polynomials of that function) whose coefficients depend only on the derivatives of the function at that point. The theorem is named after the mathematician Brook Taylor, who stated it in 1712, even though the result was first discovered 41 years earlier in 1671 by James Gregory.

Actually, this is not necessarily true for real functions, even for smooth functions. For some smooth functions, the higher-order approximations can be worse than the lower ones. Lechatjaune (talk) 13:52, 17 March 2008 (UTC)


 * Indeed! I have changed the lead sentence in response to your objection.  I'm not entirely happy with the wording, so please make changes accordingly.  Thanks,  silly rabbit  (  talk  ) 14:38, 17 March 2008 (UTC)

True statement?
The remainder term Rn(x) depends on x and is small if x is close enough to a. Isn't exp(-1/x^2) when a=0 a famous counter example? -- Randomblue


 * No. For that function, all terms in the Taylor polynomial are zero, so the remainder term is just exp(-1/x^2) itself. This is a small number when x is close to 0. For instance, if x = 0.1 then exp(-1/x^2) = exp(-100) is very small indeed. -- Jitse Niesen (talk) 10:24, 16 April 2008 (UTC)

analytic is analytic
At the end of Estimates of the remainder it says "This makes precise the idea that analytic functions are those which are equal to their Taylor series." Since analytic functions are defined as those which are (locally) equal to their (convergent) Taylor series, this seems to be saying that analytic functions are analytic. Unless I am missing something, this sentence should be removed (and maybe the preceding argument as well). Marc van Leeuwen (talk) 12:24, 10 May 2008 (UTC)


 * The paragraph gives an important condition for the analyticity of an infinitely differentiable function, so my feeling is that it should stay. This condition is mentioned prominently, for instance, in the textbook by Ahlfors.  Of course, to someone with a little experience, it can be deduced almost immediately from facts about convergent power series.  But I think it should stay because it gives a condition for analyticity purely in terms of estimates of the derivative, which arises in a rather nice way out of Taylor's theorem.  I am lukewarm about the sentence you are balking at, and always have been.  I think we should solicit suggestions on how to change it, but I feel that the paragraph is naked without some kind of concluding remark to tie it off.   silly rabbit  (  talk  ) 13:38, 10 May 2008 (UTC)


 * Hmm… you made me actually think about what the preceding text is trying to say, and although I am not at all specialised in analysis, I found it necessary to add some text to have it make proper sense; I hope this was the sense intended. Still, I'm convinced that the conclusion (starting "In other words…") is not the conclusion to what precedes, but rather something opposite and fairly trivial. Informally, for me the preceding text says: if one forces all those thumbscrews (uniform bounds on the same interval on all derivatives, with constants that cooperate nicely) onto our unwilling C^∞ function (since such functions are hardly ever analytic), then it has to give up resistance and concede to being equal to (the limit of) its Taylor series. But the conclusion given only says that analytic functions do satisfy those restrictions, which is hardly surprising, because they are already given by a convergent series to begin with. Marc van Leeuwen (talk) 12:26, 11 May 2008 (UTC)


 * I have deleted the last sentence of the paragraph. I think I know why it was added in the first place, because the previous mention of analytic functions was somewhat imprecise and unsatisfactory.  I will try to correct this problem as well.  silly rabbit  (  talk  ) 12:36, 11 May 2008 (UTC)


 * I've undone the first of your sequence of edits (more or less). No hurt intended, but in the previous part there was only one n for which the existence of a constant was required; just being infinitely differentiable does not imply those constants also exist, so the sentence should contain such a qualification. Marc van Leeuwen (talk) 18:43, 13 May 2008 (UTC)


 * Yes, being infinitely differentiable on a compact set does imply the existence of these constants. But now that I am mindful of your concern, I will try to restate this in a somewhat less awkward way. silly rabbit ( talk ) 19:54, 13 May 2008 (UTC)