Talk:Runge's phenomenon

Missing stuff
(William M. Connolley 11:04, 20 Sep 2004 (UTC)) This article is at least missing some stuff, which I can try to fill in if no-one else wants to.

It says "if we interpolate the polynomial at x_i..." but doesn't say *how* we interpolate the polynomial. Is it assuming that we use the Taylor series expansion around x=0?

It would certainly be possible, given whatever choice of nodes, to choose a polynomial that minimises the L_infinity error.


 * The article says exactly how to interpolate the function and I doubt it could be made more explicit. We use polynomial interpolation with n equidistant points. And yes, we can minimize the error, as explained in the ==Solution== section. MathMartin 11:45, 20 Sep 2004 (UTC)


 * (William M. Connolley 14:08, 20 Sep 2004 (UTC)) Ah... yes... OK. I see what's going on now. I find the language confusing though... maybe that's just me. In that case, I suggest that the "solution" section ought to mention that this isn't an intrinsic feature of approximation by polynomials, just of this particular method of fit, which (as the article shows) is a poor one.


 * And the article isn't really very explicit about how the interpolation is done, either.

Perhaps the article could be a bit clearer, but I doubt much improvement can be made. What exactly did you not understand about the article? Do you know, or have you read, polynomial interpolation? What do you mean by an intrinsic feature of approximation by polynomials, a particular method of fit, and a poor one? MathMartin 14:17, 20 Sep 2004 (UTC)


 * (William M. Connolley 14:29, 20 Sep 2004 (UTC)) I mean that the page gives the impression that it is intrinsically necessary that a polynomial fit to a function always has large errors that get worse with higher-degree polynomials. Whereas this is only true if you restrict yourself to polynomial interpolation at given fixed nodes.

The impression the page gives is correct. When you do polynomial interpolation you always get "large errors" at the edges of the interpolation interval. This error can be minimized by choosing your interpolation nodes carefully (the Chebyshev nodes) but cannot be prevented. If you interpolate a function on the complete real line, even when using Chebyshev nodes, the error will go towards infinity as you increase the degree of your interpolation polynomial. To put it differently, polynomial interpolation is inherently flawed. For this reason spline interpolation, which uses low-degree polynomials, is nowadays preferred.
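The divergence on equidistant nodes is easy to reproduce numerically. Here is a minimal sketch (the helper names are my own and not from the article) that evaluates the Lagrange interpolation polynomial of Runge's function f(x) = 1/(1+25x^2) on [-1, 1] and compares equidistant against Chebyshev nodes:

```python
import math

def f(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def lagrange(nodes, values, x):
    # Evaluate the Lagrange interpolation polynomial through (nodes, values) at x.
    total = 0.0
    for i, xi in enumerate(nodes):
        term = values[i]
        for j, xj in enumerate(nodes):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def max_error(nodes):
    # Sample the interpolation error densely on [-1, 1].
    values = [f(x) for x in nodes]
    xs = [-1.0 + 2.0 * k / 1000 for k in range(1001)]
    return max(abs(f(x) - lagrange(nodes, values, x)) for x in xs)

n = 11  # degree-10 interpolation
equidistant = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
chebyshev = [math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]

print(max_error(equidistant))  # large: the error spikes near the endpoints
print(max_error(chebyshev))    # much smaller on Chebyshev nodes
```

Already at n = 11 the equidistant error exceeds the entire range of f itself, while the Chebyshev error stays small; increasing n widens the gap further.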


 * The polynomial interpolation process does converge for certain classes of functions. Though it is true that for every predefined table of interpolation nodes there is a continuous function for which the interpolation process on those nodes diverges (in the uniform norm on [0,1]), it is also true that for every continuous function there is a table of nodes on which the interpolation process converges. The proof of the former fact is based on an estimate of the Lebesgue constant and is quite cumbersome (at least as given in my book); the proof of the latter is easier and is based on the fact that the set of polynomials is dense in C[0,1].

My book also gives a theorem that Chebyshev interpolation (i.e. on Chebyshev nodes) converges in C[0,1] for every absolutely continuous function. So when I looked for a continuous function for which the interpolation process diverges, there was an easy answer for equidistant nodes (for example, the one from Runge's phenomenon), but for Chebyshev nodes the only continuous but not absolutely continuous function I know of is the Cantor function, and I am still unable to prove or disprove convergence of Chebyshev interpolation for it. Do you think all those facts are worth mentioning in the article?


 * (William M. Connolley 18:57, 9 Oct 2004 (UTC)) There are things I'm unclear about, so I think it's best left to your judgement (I assume that the above was by MM, unsigned-in). It's probably fair to point out that using Chebyshev nodes makes it converge for all normal/non-pathological/the-things-people-think-are-functions (not sure how this is best expressed for the general reader) and then add in the detail you've found for the more knowledgeable.

As for convergence on the complete real line, it is still possible at least in the class of infinitely differentiable functions with uniformly bounded derivatives, when the interpolation nodes are arbitrarily chosen from [0,1]. --Igor 18:31, Oct 9, 2004 (UTC)


 * Could put that in too...

Magnitude of Derivatives
It is quite simple to obtain all derivatives of f(x) = 1/(1+25x^2) at x=0. Using the geometric series we immediately obtain a power series. Hence, by the identity theorem for power series, we also have its Taylor series and therefore its derivatives. Here are the details:

For $$|x|<1/5$$ the geometric series gives us (substitute $$q=-25x^2$$)

$$f(x) = \frac{1}{1+25x^2} = \sum\limits_{k=0}^{\infty} (-1)^k 25^k x^{2k}$$

Because of the identity theorem for power series this has to be identical to the Taylor series

$$f(x) = \sum\limits_{k=0}^{\infty} \frac{f^{(k)}(x_0)}{k!}(x-x_0)^k$$

So we obtain

$$f^{(2k+1)}(0) = 0, \quad k=0,1,2,\dots$$

and

$$\frac{f^{(2k)}(0)}{(2k)!} = (-1)^k 25^k \; \Leftrightarrow \; f^{(2k)}(0) = (-1)^k 25^k (2k)!, \quad k=0,1,2,\dots$$

So we observe for $$k=0,1,2,\dots$$ that the even derivatives at $$0$$ are

$$f^{\,(2k)}(0) = (-1)^k 5^{2k} (2k)!$$

such that the error bound gets arbitrarily large. Maybe one should also note that a large error bound is not a sufficient condition for a large error. Mlehn (talk) 11:39, 25 August 2010 (UTC)
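The closed form above can be spot-checked numerically with central finite differences (the code and tolerances below are my own sketch, not from the discussion). The formula predicts f''(0) = -25·2! = -50 and f''''(0) = 625·4! = 15000:

```python
def f(x):
    return 1.0 / (1.0 + 25.0 * x * x)

# Closed form from the derivation: f^(2k)(0) = (-1)^k * 25^k * (2k)!
# k = 1 gives f''(0) = -50; k = 2 gives f''''(0) = 15000.

h = 1e-4
d2 = (f(h) - 2.0 * f(0.0) + f(-h)) / h**2  # central 2nd difference

h = 1e-3
d4 = (f(-2*h) - 4*f(-h) + 6*f(0.0) - 4*f(h) + f(2*h)) / h**4  # central 4th difference

print(d2)  # close to -50
print(d4)  # close to 15000
```

The step sizes are chosen so that truncation and floating-point cancellation errors both stay small relative to the predicted values.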

Wrong statement?
However, this particular set of polynomial functions Pn(x) is not guaranteed to have the property of uniform convergence; the theorem only states that a set of polynomial functions exists, without providing a general method of finding one.

This last statement seems wrong, given that there exist constructive proofs of the theorem, as with the Bernstein polynomials: http://en.wikipedia.org/wiki/Bernstein_polynomial

--Pliskin14 (talk) 12:17, 18 February 2015 (UTC)

It's not wrong. It only refers to the Stone–Weierstrass theorem, which indeed does not provide a method of construction, unlike for example the Bernstein method. — Preceding unsigned comment added by 130.75.31.228 (talk) 14:12, 19 February 2019 (UTC)
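As an aside, the constructive Bernstein route can be sketched numerically (helper names are my own). B_n(f) converges uniformly for every continuous f on [0,1], though only slowly, and it shows no Runge-type endpoint spikes even for the Runge function shifted onto [0,1]:

```python
from math import comb

def f(x):
    # Runge-type function rescaled from [-1, 1] to [0, 1]
    return 1.0 / (1.0 + 25.0 * (2.0 * x - 1.0) ** 2)

def bernstein(f, n, x):
    # n-th Bernstein polynomial of f, evaluated at x in [0, 1]
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def max_error(n):
    # Sample the approximation error on a grid over [0, 1].
    xs = [k / 200 for k in range(201)]
    return max(abs(f(x) - bernstein(f, n, x)) for x in xs)

print(max_error(10), max_error(200))  # the error shrinks as n grows
```

This illustrates the contrast with equidistant interpolation: the Bernstein error decreases with n instead of blowing up, at the cost of a much slower (roughly O(1/n)) convergence rate for smooth functions.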

Typo?
The equidistance between points leads to a Lebesgue constant qui increases quickly when n increases.

'qui'??? Do you mean 'that'? — Preceding unsigned comment added by 128.250.118.80 (talk) 04:00, 24 September 2015 (UTC)