Talk:Euler–Maclaurin formula

Using a Different Interval
The current article has the integral evaluated in steps of dx = 1. That is fine, but it is not the only choice: you can use smaller step sizes h, in which case each term of S would have to be multiplied by h. So it might look something like this:

$$S=\frac{h}{2}f(0)+hf\left( h\right) +\cdots+hf\left( (n-1)h\right) +\frac{h}{2}f(nh)$$

This also changes the correction terms: each one gets multiplied by a factor of

$$h^k$$

This factor may be obvious to some but I don't think it would be for everyone. So, if someone tried using the integral as it was but changed the step size, they would get a very wrong answer. I don't know what the standard is on Wikipedia but it might be worth rewriting the equation in these terms? —Preceding unsigned comment added by 159.242.227.76 (talk) 21:58, 28 September 2008 (UTC)
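The rescaling described above is easy to check numerically. Here is a quick Python sketch (the test function f(x) = x³ is my choice, picked so that the correction series terminates and the result is exact):

```python
def trapezoid_sum(f, h, n):
    # S = (h/2) f(0) + h f(h) + ... + h f((n-1)h) + (h/2) f(nh)
    return h * (f(0) / 2 + sum(f(i * h) for i in range(1, n)) + f(n * h) / 2)

f = lambda x: x ** 3           # test integrand
fprime = lambda x: 3 * x ** 2  # its first derivative

h, n = 0.1, 10                 # integrate over [0, 1] in steps of h
S = trapezoid_sum(f, h, n)

# First Euler-Maclaurin correction term, scaled by h^2 as described above (B_2 = 1/6).
B2 = 1.0 / 6
I_est = S + (h ** 2) * B2 / 2 * (fprime(0) - fprime(n * h))

# For a cubic the higher corrections vanish, so I_est recovers the exact
# integral of x^3 over [0, 1], namely 0.25, up to rounding.
```

Without the h² factor on the correction term, the same computation misses the integral badly, which is exactly the pitfall the comment describes.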

Motivation for the existence
I don't understand why $$\Delta = e^D - I$$, or even what I is (the integral used earlier in the article?). Can someone explain? Also, is there a reference for this argument (other than "Legendre")? Fredrik Johansson 20:05, 17 May 2006 (UTC)


 * No, I is not that integral. I is the identity operator on functions, i.e.


 * $$(If)(x) = f(x);\,$$


 * D is the differentiation operator, i.e.


 * $$(Df)(x) = f'(x);\,$$


 * and Δ is the forward difference operator, i.e.


 * $$ (\Delta f)(x) = f(x+1) - f(x).\, $$


 * Using the power series


 * $$e^x = 1 + x + {x^2 \over 2} + {x^3 \over 6} + {x^4 \over 24} + \cdots,$$


 * we have


 * $$e^D = I + D + {D^2 \over 2} + {D^3 \over 6} + \cdots.$$


 * Therefore


 * $$(e^D - I)f(x) = f'(x) + {f''(x) \over 2} + {f'''(x) \over 6} + \cdots.$$


 * If f happens to be a polynomial function, then all but finitely many of these terms vanish, and a bit of algebra shows that this sum adds up to


 * $$f(x+1) - f(x),\,$$


 * i.e. it adds up to


 * $$\Delta f(x).\,$$


 * To what extent this works when f is not a polynomial, is a more difficult question. Michael Hardy 22:58, 17 May 2006 (UTC)


 * Thanks, Michael. Fredrik Johansson 23:02, 17 May 2006 (UTC)

---I should add that the differentiation operator acting on that space of polynomials is just an emulation of differentiation that uses no calculus, since it is simply defined by $$D[a_0x^n + a_1x^{n-1} + a_2x^{n-2}+\cdots+a_{n-1}x + a_n] = na_0x^{n-1}+(n-1)a_1x^{n-2}+\cdots+2a_{n-2}x+a_{n-1}$$ and isn't necessarily defined for an arbitrary function.
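This purely algebraic view of D can be illustrated concretely. The following Python sketch (the helper names are mine, not from the discussion) applies the finite sum e^D − I to a polynomial's coefficient list and checks it against the forward difference Δf(x) = f(x+1) − f(x):

```python
from math import factorial

def deriv(coeffs):
    # formal derivative: coeffs[k] is the coefficient of x^k (no calculus needed)
    return [k * coeffs[k] for k in range(1, len(coeffs))]

def peval(coeffs, x):
    # evaluate the polynomial at x
    return sum(a * x ** k for k, a in enumerate(coeffs))

def exp_D_minus_I(coeffs, x):
    # (e^D - I) f = D f + D^2 f / 2! + D^3 f / 3! + ...  (a finite sum for polynomials)
    total, d = 0.0, coeffs
    for j in range(1, len(coeffs)):
        d = deriv(d)
        if not d:
            break
        total += peval(d, x) / factorial(j)
    return total

f = [0, 0, 0, 1]   # f(x) = x^3, as ascending coefficients
x = 2.0
lhs = exp_D_minus_I(f, x)          # (e^D - I) f at x
rhs = peval(f, x + 1) - peval(f, x)  # the forward difference (Delta f)(x)
```

For f(x) = x³ at x = 2, both sides come out to 19, matching the "bit of algebra" mentioned above.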

Some Formula
The article doesn't say exactly how to find the following sums Sn; here is the process. The idea is the telescoping trick from the trapezium rule.

$$S_n=\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6}$$
 * from $$b^3-a^3=(b-a)(b^2+ab+a^2)$$ with $$b=i$$, $$a=i-1$$:
 * $$i^3-(i-1)^3=i^2+i(i-1)+(i-1)^2=2i^2-i+(i-1)^2$$
 * $$\sum_{i=1}^n\left(i^3-(i-1)^3\right)=n^3=3S_n-n^2-\frac{n(n+1)}{2}$$, and you solve for $$S_n$$

$$S_n=\sum_{i=1}^n i^3 = \frac{n^2(n+1)^2}{4}$$
 * from $$b^4-a^4=(b^2-a^2)(b^2+a^2)=(b-a)(b+a)(b^2+a^2)$$
 * with $$b=i$$, $$a=i-1$$: $$i^4-(i-1)^4=4i^3-6i^2+4i-1$$
 * $$\sum_{i=1}^n\left(i^4-(i-1)^4\right)=n^4=4S_n-n(n+1)(2n+1)+2n(n+1)-n$$, and you solve for $$S_n$$

With the same method, we can find all others. Please help put the text in a nice format.
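The two closed forms can be sanity-checked in a few lines of Python (a quick verification sketch, not part of the derivation above):

```python
def sum_squares(n):
    # closed form from telescoping i^3 - (i-1)^3: n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

def sum_cubes(n):
    # closed form from telescoping i^4 - (i-1)^4: n^2 (n+1)^2 / 4
    return n ** 2 * (n + 1) ** 2 // 4

# compare against brute-force summation for a range of n
check_sq = all(sum_squares(n) == sum(i ** 2 for i in range(1, n + 1)) for n in range(1, 100))
check_cu = all(sum_cubes(n) == sum(i ** 3 for i in range(1, n + 1)) for n in range(1, 100))
```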


 * RESPONSE
 * I'd be happy to help put that in a nice format, but it's a bit unclear. What do the sums range over, and what does "from" refer to? If you want to do it yourself, use the simple commands like

$$\sum_{0}^{\infty}$$, which would give a sum from 0 to infinity.
 * Lavaka 19:34, 6 September 2006 (UTC)

unclear variable
In the "Remainder Term" section, where did the variable $$N$$ come from? I can only assume it's supposed to be $$p$$? Lavaka 19:34, 6 September 2006 (UTC)

Remainder term
The 'p+1' st derivative inside the integral looks suspicious to me - is it correct? Crackling 13:46, 25 March 2007 (UTC)

Definition of Bernoulli numbers
These are defined here (at least twice) as Bn(1), which is inconsistent with some other Wikipedia pages. They are also defined, in this article, as Bn(0). Some clarification would be helpful. Crackling 13:46, 25 March 2007 (UTC)

Periodic Bernoulli functions Pn(x)
These functions are not defined for integer x, but the evaluation of the integration by parts requires this. Explicitly it seems to assume that P1(k) is 1/2, but B1(0) is -1/2, so I am rather confused. Extending the definition to integral x would be helpful. Crackling 13:46, 25 March 2007 (UTC)


 * Use one-sided limits. Michael Hardy 18:30, 23 September 2007 (UTC)

Invalid proof???
I don't see how the proof on this page is correct. Perhaps I am missing something, but f(x)P1(x) evaluated from k to k+1 does not appear to equal (f(k)+f(k+1))/2 Instead, it appears to equal (-f(k+1)+f(k))/2. This invalidates the proof. Can someone show me that I am wrong by adding the intermediate steps?

Thanks. —Preceding unsigned comment added by Lasher9999 (talk • contribs)


 * As x approaches k FROM ABOVE, P1(x) approaches −1/2. As x approaches k + 1 FROM BELOW, P1(x) approaches +1/2. Thus we have
 * $$\Bigg[f(x)P_1(x)\Bigg]_k^{k+1} = f(k+1)P_1\big((k+1)^-\big) - f(k)P_1(k^+) \,$$
 * $$ = f(k+1)\cdot\frac12 - f(k)\left(\frac{-1}{2}\right) = \frac{f(k+1) + f(k)}{2}. \,$$
 * Michael Hardy 18:27, 23 September 2007 (UTC)
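This boundary-term identity, together with integration by parts, can be checked numerically. A small Python sketch (f(x) = x² and k = 0 are arbitrary choices for illustration; the integrals use a midpoint rule):

```python
# Verify: integral of f over [k, k+1] = (f(k)+f(k+1))/2 - integral of f'(x) P_1(x),
# for f(x) = x^2 on [0, 1].
N = 200000
P1 = lambda x: (x % 1.0) - 0.5   # periodic Bernoulli function P_1 (off the integers)
f = lambda x: x * x
fprime = lambda x: 2 * x

xs = [(i + 0.5) / N for i in range(N)]          # midpoint-rule sample points on [0, 1]
lhs = sum(f(x) for x in xs) / N                 # integral of f, should be 1/3
rhs = (f(0) + f(1)) / 2 - sum(fprime(x) * P1(x) for x in xs) / N
```

Both sides agree to within the quadrature error, consistent with the one-sided limits argument above.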

Integrable???
I assume from the discussion that the integral of P_1(x) times f(x) cannot be found exactly but only approximated by a never ending series of higher orders of bernoulli polynomials via integration by parts, but I am not sure why. P_1(x) is defined at the end points and has a very simple form over the interval of k to k+1. It seems like it should be integrable. Thanks.
 * User:Lasher9999 —Preceding signed but undated comment was added at 00:38, 2 October 2007 (UTC)

Rewriting the Summation
For the difference between the integral, I, and the summation, S, the formula sums over all values of k greater than or equal to 2. However, each term is multiplied by a Bernoulli number, and these are zero for every odd k greater than 1. This means all the odd terms of the summation contribute nothing. Since the Euler-Maclaurin formula is primarily used in computation, wouldn't it be better to rewrite this without the vanishing odd terms? Also, it makes more sense to separate the S and I. So, right after it says what S - I is, it might be worth mentioning that it can be rewritten in this more useful way:

$$I=S+\sum_{k=1}^\infty\ \frac{h^{2k}B_{2k}}{(2k)!}\left(f^{(2k-1)}(0)-f^{(2k-1)}(nh)\right)$$

with this definition of S:

$$S=\frac{h}{2}f(0)+hf\left( h\right) +\cdots+hf\left( (n-1)h\right) +\frac{h}{2}f(nh)$$

-159.242.226.146 (talk) 05:59, 5 October 2008 (UTC)
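The even-only rewrite proposed above is easy to test numerically. A Python sketch (f = exp on [0, 1] is my example, chosen because every derivative of exp is exp, with the first three even corrections hard-coded):

```python
import math

def trapezoid_sum(f, h, n):
    # S = (h/2) f(0) + h f(h) + ... + h f((n-1)h) + (h/2) f(nh)
    return h * (f(0) / 2 + sum(f(i * h) for i in range(1, n)) + f(n * h) / 2)

f = math.exp          # every derivative of exp is exp itself
h, n = 0.1, 10        # so nh = 1
S = trapezoid_sum(f, h, n)

# I = S + sum over k of h^(2k) B_2k / (2k)! * (f^(2k-1)(0) - f^(2k-1)(nh)),
# truncated at k = 3; since f = exp, each f^(2k-1) is just f.
B = {2: 1/6, 4: -1/30, 6: 1/42}
I_est = S + sum(h ** (2 * k) * B[2 * k] / math.factorial(2 * k) * (f(0) - f(n * h))
                for k in (1, 2, 3))

# exact integral of exp over [0, 1] is e - 1
```

With only three even terms the estimate already agrees with e − 1 to better than ten decimal places.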


 * Let me comment with the words of Graham, Knuth and Patashnik (Concrete Mathematics, page 24, [..] inserted):


 * "Somehow it seems more efficient to add up [ the B_2k ] terms instead of [ the B_k ] terms. But such temptations should be resisted; efficiency of computation is not the same as efficiency of understanding." —Preceding unsigned comment added by 212.6.180.48 (talk) 15:33, 21 December 2008 (UTC)


 * Thanks for faking a quotation. In fact G,K&P themselves leave out the identically zero terms when they discuss the remainder term (pp 460ff).  Anon is correct that we should state it like that, but we can also do it both ways. McKay (talk) 09:50, 9 March 2009 (UTC)

Error Term exponent error?
Computations with the Basel problem show that the error term upper bound given in the final line of the main article's discussion of the remainder term is incorrect.

The claimed theoretical upper bound for the error undershoots the actual error by a factor that grows like (2 Pi)^p. Using Mathematica and LaTeX syntax, we want to estimate Sum[k^(-2),{k,n,Infinity}]. ($\sum_{k=n}^{\infty}k^{-2}$). The exact value is Pi^2/6-Sum[k^(-2),{k,1,n-1}]. What follows is executable Mathematica code, in case the reader wants to check the claims made here.

exactbasel[n_,p_]:=Pi^2/6-Sum[k^(-2),{k,1,n-1}];

eulermacbasel[n_,p_]:=1/n+1/(2n^2)-Sum[BernoulliB[k](-1)^(k-1)n^(-k-1),{k,2,p}];(*this gives the Euler-Maclaurin formula estimate for the required value*)

articlebound[n_,p_]:=2 n^(-p-2)(p+1)!/(2 Pi)^(2(p+1));(*this, presumably, is an upper bound for Abs[exactbasel[n,p]-eulermacbasel[n,p]] *)

suggestedbound[n_,p_]:=2 n^(-p-2)(p+1)!/(2 Pi)^(p+1);(*if the 2 in 2(p+1) above is a typo, this corrects it.*)

Table[(exactbasel[10, 2 s] - eulermacbasel[10, 2 s])/(articlebound[10, 2 s]), {s, 1, 3}]

$\{160000/3\, \pi^6 (-(52229957/31752000) + \pi^2/6),\ 12800000/3\, \pi^{10} (-(3264371651/1984500000) + \pi^2/6),\ 10240000000/63\, \pi^{14} (-(130574866229/79380000000) + \pi^2/6)\}$

Numerically this gives {-16.971, 938.319, -48336.7}. If the article were free of typos, these numbers would all be within [-1,1]. With the suggested correction, we would have had {-0.0684174, 0.0958189, -0.125031}, and taking matters further, setting p=100 while keeping sufficient precision, gives N[(exactbasel[10,100]-eulermacbasel[10,100])/suggestedbound[10,100],1000] of roughly 0.44619811. —Preceding unsigned comment added by Historygamer (talk • contribs) 18:14, 7 April 2009 (UTC)
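For readers without Mathematica, the core of the check translates directly to Python (a sketch; the small Bernoulli table and the choice n = 10, p = 3 are mine):

```python
import math

B = {2: 1/6, 4: -1/30, 6: 1/42}   # Bernoulli numbers B_2, B_4, B_6

def tail_exact(n):
    # sum_{k=n}^infinity k^-2 = pi^2/6 - sum_{k=1}^{n-1} k^-2
    return math.pi ** 2 / 6 - sum(1.0 / k ** 2 for k in range(1, n))

def tail_euler_maclaurin(n, p):
    # For f(x) = x^-2, f^(2j-1)(n) = -(2j)! n^-(2j+1), so each
    # Euler-Maclaurin correction collapses to B_2j * n^-(2j+1).
    est = 1.0 / n + 1.0 / (2 * n ** 2)
    for j in range(1, p + 1):
        est += B[2 * j] * n ** (-(2 * j + 1))
    return est

err = abs(tail_exact(10) - tail_euler_maclaurin(10, 3))
```

The actual error here is of the order of the first omitted term, B_8 · n^(−9), i.e. around 10^(−11), which is what any proposed error bound has to beat.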


 * Yes, it is wrong and the error is similar to what you say. The problem is that the expansion is quoted with the zero odd terms included but the coefficient on the error term comes from a version which includes only the even terms.  I suspect p must be even, too.  The right way to fix the problem is to give the expansion with only the even terms.  Here is a summary I wrote a few years ago; see note 5.  I propose to use my statement of the theorem in our article. McKay (talk) 12:47, 8 April 2009 (UTC)

Equality or inequality?
The proof by induction following Apostol outlines a proof of an equality. But what is stated in the article in its present form is at most an inequality that is a bound on the error term. Why is that? Michael Hardy (talk) 23:32, 27 November 2009 (UTC)
 * Can you be a bit more specific? There is an exact expression for the remainder in 'The remainder term' subsection, and when you put it together with the formula given, you get what's proved in the proof by induction section.--RDBury (talk) 01:16, 30 November 2009 (UTC)


Differentiable / analytic ?
In the beginning of the article, f is said to be smooth and analytic. This is strange, since analytic functions tend to be as smooth as you like them to be. So, what should it be? smooth or analytic? --130.83.2.27 (talk) 07:33, 5 September 2013 (UTC)

Euler-Maclaurin formula and eigenstates.
The "derivation by functional analysis" of the Euler-Maclaurin formula is not correct. One would need an a priori proof that the distributions $$\tilde{B_n}$$ are dual to the Bernoulli polynomials if one wants to derive the Euler-Maclaurin formula from this fact. Moreover, the computation of the remainder term is plainly missing!

Actually, it goes the other way round, as pointed out in the referenced article: Pierre Gaspard, "r-adic one-dimensional maps and the Euler summation formula", Journal of Physics A, 25 (letter) L483–L485 (1992) (describes the eigenfunctions of the transfer operator for the Bernoulli map). In this article, the Euler-Maclaurin formula is used to find the dual eigenstates. (Nberline (talk) 14:08, 17 January 2014 (UTC)).

Site is Now Broken
The site now has many formulas interspersed with parse errors. It is no longer usable. I tested it on two computers with different browsers, Chrome and Firefox. Either something has changed in the browsers or something is wrong with the site. I suspect glitches, not vandalism. I would edit the site if I knew the syntax well enough. Maybe this weekend. — Preceding unsigned comment added by Michael.a.cohen (talk • contribs) 15:14, 7 February 2014 (UTC)

More compactly...
The part "More compactly..." at the end of the section "The formula", with its negative derivative and introduction of a second type of Bernoulli number, seems to me just a notational trick that serves no expository purpose and provides no insight. I propose to delete it. McKay (talk) 03:31, 21 July 2014 (UTC)
 * There being no dissent, I'm removing it. McKay (talk) 02:40, 23 July 2014 (UTC)

"Derivation by functional analysis" section
I'm trying to understand what is of value in the "Derivation by functional analysis" section and having difficulty coming up with much. It is essentially uncited except for a reference to a paper in J. Physics A that supports it only slightly. It starts with an unsupported claim about "curious application" and a mention of Banach spaces that is never explained. Then it jumps into derivatives of delta functions with a vague handwave at what they are. Then it has erroneous statements like "Essentially, Euler-MacLaurin summation can be applied whenever Carlson's theorem holds" (that might be needed for the proof here, but the E-M formula with error term needs nothing more than sufficient continuous differentiability). Then a strange statement "the Euler-MacLaurin formula is essentially a result obtaining from the study of finite differences and Newton series" which seems to me entirely meaningless. It also has "This is the essentially the reason for the restriction to exponential type of less than 2π" but the condition for Carlson's theorem is exponential type (restricted to the imaginary axis) less than π, not less than 2π. It all smells like original research. McKay (talk) 06:08, 23 July 2014 (UTC)
 * Also see the comments above by Nberline. McKay (talk) 03:10, 29 July 2014 (UTC)
 * There being no dissent, I'm removing it. McKay (talk) 03:08, 29 July 2014 (UTC)

Bound for $$|B_k(x)|$$
In the section "The remainder term", the inequality $$|B_k(x)|\le \frac{2\cdot k!}{(2\pi)^k}\zeta(k)$$ makes little sense when $$k=1$$, given that the zeta function has a pole there. I'd suggest to replace "When $$k>0$$ …" with "When $$k\ge 2$$ …" and perhaps note that, by inspection, $$|B_1(x)|\le \frac12$$ for $$0\le x\le 1$$. I hope this is otherwise correct – without a reference given, I can only guess that the claim comes from Lehmer (1940). Hagman (talk) 12:39, 10 March 2024 (UTC)
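For what it's worth, the suggested reading checks out numerically for small even k. A quick Python sketch (the grid resolution and the choice k = 2 are mine):

```python
import math

# For k = 2: zeta(2) = pi^2/6, so the bound 2 * 2! * zeta(2) / (2 pi)^2
# collapses to exactly 1/6, which is max |B_2(x)| on [0, 1] (attained at the endpoints).
bound2 = 2 * math.factorial(2) * (math.pi ** 2 / 6) / (2 * math.pi) ** 2

B2 = lambda x: x * x - x + 1 / 6   # Bernoulli polynomial B_2(x)
B1 = lambda x: x - 0.5             # B_1(x); by inspection |B_1(x)| <= 1/2 on [0, 1]

grid = [i / 10000 for i in range(10001)]
max_B2 = max(abs(B2(x)) for x in grid)
max_B1 = max(abs(B1(x)) for x in grid)
```

So for k = 2 the inequality is sharp, while for k = 1 only the separate 1/2 bound makes sense, as the comment says.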