Summation by parts

In mathematics, summation by parts transforms the summation of products of sequences into other summations, often simplifying the computation or (especially) estimation of certain types of sums. It is also called Abel's lemma or Abel transformation, named after Niels Henrik Abel who introduced it in 1826.

Statement
Suppose $$\{f_k\}$$ and $$\{g_k\}$$ are two sequences. Then,
 * $$\sum_{k=m}^n f_k(g_{k+1}-g_k) = \left(f_{n+1}g_{n+1} - f_m g_m\right) - \sum_{k=m}^n g_{k+1}(f_{k+1}- f_{k}).$$

Using the forward difference operator $$\Delta$$, it can be stated more succinctly as


 * $$\sum_{k=m}^n f_k\Delta g_k = \left(f_{n+1} g_{n+1} - f_m g_m\right) - \sum_{k=m}^{n} g_{k+1}\Delta f_k.$$
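As a quick numerical sanity check (an illustrative sketch, not part of the statement itself), the identity can be verified for arbitrary sequences:

```python
import random

# Illustrative check of summation by parts for random sequences f, g:
#   sum_{k=m}^{n} f_k (g_{k+1} - g_k)
#     = f_{n+1} g_{n+1} - f_m g_m - sum_{k=m}^{n} g_{k+1} (f_{k+1} - f_k)
random.seed(1)
m, n = 2, 9
f = [random.uniform(-1, 1) for _ in range(n + 2)]  # indices 0 .. n+1
g = [random.uniform(-1, 1) for _ in range(n + 2)]

lhs = sum(f[k] * (g[k + 1] - g[k]) for k in range(m, n + 1))
rhs = (f[n + 1] * g[n + 1] - f[m] * g[m]
       - sum(g[k + 1] * (f[k + 1] - f[k]) for k in range(m, n + 1)))
assert abs(lhs - rhs) < 1e-12
```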

Summation by parts is an analogue to integration by parts:
 * $$\int f\,dg = f g - \int g\,df,$$

or to Abel's summation formula:
 * $$\sum_{k=m+1}^n f(k)(g_{k}-g_{k-1})= \left(f(n)g_{n} - f(m) g_m\right) - \int_{m}^n g_{\lfloor t \rfloor} f'(t) dt.$$
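Since $$g_{\lfloor t \rfloor}$$ is piecewise constant, the integral can be evaluated exactly as $$\sum_{k=m}^{n-1} g_k\,(f(k+1)-f(k))$$, which makes the formula easy to check numerically (a sketch with the arbitrary choice $$f(t)=t^2$$):

```python
import random

# Check of Abel's summation formula with f(t) = t^2 (arbitrary choice);
# the integral of g_floor(t) * f'(t) over [m, n] is computed exactly,
# piece by piece, since g_floor(t) is constant on each [k, k+1).
random.seed(2)
m, n = 3, 12
f = lambda t: t * t
g = [random.uniform(-1, 1) for _ in range(n + 1)]  # g_0 .. g_n

lhs = sum(f(k) * (g[k] - g[k - 1]) for k in range(m + 1, n + 1))
integral = sum(g[k] * (f(k + 1) - f(k)) for k in range(m, n))
rhs = f(n) * g[n] - f(m) * g[m] - integral
assert abs(lhs - rhs) < 1e-12
```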

An alternative statement is
 * $$f_n g_n - f_m g_m = \sum_{k=m}^{n-1} f_k\Delta g_k + \sum_{k=m}^{n-1} g_k\Delta f_k + \sum_{k=m}^{n-1} \Delta f_k \Delta g_k$$

which is analogous to the integration by parts formula for semimartingales.
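This alternative statement can likewise be checked directly (an illustrative sketch with random sequences):

```python
import random

# Check of the alternative statement (discrete analogue of integration
# by parts for semimartingales):
#   f_n g_n - f_m g_m
#     = sum_{k=m}^{n-1} [ f_k Dg_k + g_k Df_k + Df_k Dg_k ],
# where D is the forward difference operator.
random.seed(3)
m, n = 1, 10
f = [random.uniform(-1, 1) for _ in range(n + 1)]
g = [random.uniform(-1, 1) for _ in range(n + 1)]

df = [f[k + 1] - f[k] for k in range(n)]  # forward differences of f
dg = [g[k + 1] - g[k] for k in range(n)]  # forward differences of g
lhs = f[n] * g[n] - f[m] * g[m]
rhs = sum(f[k] * dg[k] + g[k] * df[k] + df[k] * dg[k] for k in range(m, n))
assert abs(lhs - rhs) < 1e-12
```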

Although applications almost always deal with convergence of sequences, the statement is purely algebraic and will work in any field. It will also work when one sequence is in a vector space, and the other is in the relevant field of scalars.

Newton series
The formula is sometimes given in one of the following, slightly different, forms:


 * $$\begin{align}

\sum_{k=0}^n f_k g_k &= f_0 \sum_{k=0}^n g_k+ \sum_{j=0}^{n-1} (f_{j+1}-f_j) \sum_{k=j+1}^n g_k\\ &= f_n \sum_{k=0}^n g_k - \sum_{j=0}^{n-1} \left( f_{j+1}- f_j\right) \sum_{k=0}^j g_k, \end{align}$$

which represent a special case ($$M = 1$$) of the more general rule


 * $$\begin{align}

\sum_{k=0}^n f_k g_k &= \sum_{i=0}^{M-1} f_0^{(i)} G_{i}^{(i+1)} + \sum_{j=0}^{n-M} f^{(M)}_{j} G_{j+M}^{(M)} \\ &= \sum_{i=0}^{M-1} \left( -1 \right)^i f_{n-i}^{(i)} \tilde{G}_{n-i}^{(i+1)} + \left( -1 \right)^{M} \sum_{j=0}^{n-M} f_j^{(M)} \tilde{G}_j^{(M)}; \end{align}$$

both result from iterated application of the initial formula. The auxiliary quantities are Newton series:


 * $$f_j^{(M)}:= \sum_{k=0}^M \left(-1 \right)^{M-k} {M \choose k} f_{j+k}$$

and
 * $$G_j^{(M)}:= \sum_{k=j}^n {k-j+M-1 \choose M-1} g_k,$$
 * $$\tilde{G}_j^{(M)}:= \sum_{k=0}^j {j-k+M-1 \choose M-1} g_k.$$

A particular ($$M=n+1$$) result is the identity
 * $$\sum_{k=0}^n f_k g_k = \sum_{i=0}^n f_0^{(i)} G_i^{(i+1)} = \sum_{i=0}^n (-1)^i f_{n-i}^{(i)} \tilde{G}_{n-i}^{(i+1)}.$$

Here, ${n \choose k}$ is the binomial coefficient.
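Both forms of the general rule, and the $$M=n+1$$ special case, can be verified numerically from the definitions above (an illustrative sketch, not part of the original text):

```python
import random
from math import comb

# Illustrative check of the general rule using the Newton-series
# auxiliary quantities f_j^(M), G_j^(M), Gtilde_j^(M) defined above.
random.seed(4)
n = 8
f = [random.uniform(-1, 1) for _ in range(n + 1)]
g = [random.uniform(-1, 1) for _ in range(n + 1)]

def fM(j, M):
    return sum((-1) ** (M - k) * comb(M, k) * f[j + k] for k in range(M + 1))

def G(j, M):
    return sum(comb(k - j + M - 1, M - 1) * g[k] for k in range(j, n + 1))

def Gt(j, M):
    return sum(comb(j - k + M - 1, M - 1) * g[k] for k in range(j + 1))

target = sum(f[k] * g[k] for k in range(n + 1))
for M in (1, 2, 3, n + 1):          # M = n+1 is the particular identity
    first = (sum(fM(0, i) * G(i, i + 1) for i in range(M))
             + sum(fM(j, M) * G(j + M, M) for j in range(n - M + 1)))
    second = (sum((-1) ** i * fM(n - i, i) * Gt(n - i, i + 1) for i in range(M))
              + (-1) ** M * sum(fM(j, M) * Gt(j, M) for j in range(n - M + 1)))
    assert abs(first - target) < 1e-12
    assert abs(second - target) < 1e-12
```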

Method
For two given sequences $$(a_n) $$ and $$(b_n) $$, with $$n \in \N$$, one wants to study the sum of the following series: $$S_N = \sum_{n=0}^N a_n b_n$$

If we define $$B_n = \sum_{k=0}^n b_k,$$ then $$b_n = B_n - B_{n-1}$$ for every $$n>0,$$ and $$S_N = a_0 b_0 + \sum_{n=1}^N a_n (B_n - B_{n-1}).$$ Shifting the index in the second sum then gives $$S_N = a_0 b_0 - a_1 B_0 + a_N B_N + \sum_{n=1}^{N-1} B_n (a_n - a_{n+1}).$$

Finally, since $$a_0 b_0 = a_0 B_0,$$ this simplifies to $$S_N = a_N B_N - \sum_{n=0}^{N-1} B_n (a_{n+1} - a_n).$$

This process, called an Abel transformation, can be used to prove several criteria of convergence for $$S_N $$.
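The rearrangement above can be packaged as a short routine (a hypothetical helper for illustration, not standard library code) and checked against the direct sum:

```python
import random

def abel_transform(a, b):
    """Compute S_N = sum a_n b_n via the Abel transformation
    S_N = a_N B_N - sum_{n=0}^{N-1} B_n (a_{n+1} - a_n)."""
    N = len(a) - 1
    B, s = [], 0.0
    for x in b:
        s += x
        B.append(s)  # B_n = b_0 + ... + b_n
    return a[N] * B[N] - sum(B[n] * (a[n + 1] - a[n]) for n in range(N))

random.seed(5)
a = [random.uniform(-1, 1) for _ in range(20)]
b = [random.uniform(-1, 1) for _ in range(20)]
direct = sum(x * y for x, y in zip(a, b))
assert abs(abel_transform(a, b) - direct) < 1e-12
```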

Similarity with an integration by parts
The formula for an integration by parts is $\int_a^b f(x) g'(x)\,dx = \left[ f(x) g(x) \right]_{a}^{b} - \int_a^b f'(x) g(x)\,dx$.

Apart from the boundary terms, we notice that the first integral contains two multiplied functions: one that is integrated in the final integral ($$g'$$ becomes $$g$$) and one that is differentiated ($$f$$ becomes $$f'$$).

The process of the Abel transformation is similar, since one of the two initial sequences is summed ($$b_n $$ becomes $$B_n $$) and the other one is differenced ($$a_n $$ becomes $$a_{n+1} - a_n $$).

Applications

 * It is used to prove Kronecker's lemma, which in turn, is used to prove a version of the strong law of large numbers under variance constraints.
 * It may be used to prove Nicomachus's theorem that the sum of the first $$n$$ cubes equals the square of the sum of the first $$n$$ positive integers.
 * Summation by parts is frequently used to prove Abel's theorem and Dirichlet's test.
 * One can also use this technique to prove Abel's test: If $\sum_n b_n$  is a convergent series, and $$a_n$$ a bounded monotone sequence, then $S_N = \sum_{n=0}^N a_n b_n$  converges.

Proof of Abel's test. Summation by parts gives $$\begin{align} S_M - S_N &= a_M B_M - a_N B_N - \sum_{n=N}^{M-1} B_n (a_{n+1} - a_n)\\ &= (a_M-a) B_M - (a_N-a) B_N + a(B_M - B_N) - \sum_{n=N}^{M-1} B_n (a_{n+1} - a_n), \end{align}$$ where $$a$$ is the limit of $$a_n$$. As $$\sum_n b_n$$ is convergent, $$B_N$$ is bounded independently of $$N$$, say by $$B$$. As $$a_n - a$$ goes to zero, so do the first two terms. The third term goes to zero by the Cauchy criterion for $$\sum_n b_n$$. The remaining sum is bounded by $$\sum_{n=N}^{M-1} |B_n| |a_{n+1}-a_n| \le B \sum_{n=N}^{M-1} |a_{n+1}-a_n| = B|a_N - a_M|$$ by the monotonicity of $$a_n$$, and also goes to zero as $$N \to \infty$$.

Using the same proof as above, one can show that $$S_N = \sum_{n=0}^N a_n b_n$$ converges if
 * 1) the partial sums $$B_N$$ form a bounded sequence independently of $$N$$;
 * 2) $$\sum_{n=0}^\infty |a_{n+1} - a_n| < \infty$$ (so that the sum $$\sum_{n=N}^{M-1} |a_{n+1}-a_n|$$ goes to zero as $$N$$ goes to infinity); and
 * 3) $$\lim a_n = 0$$.

In both cases, the sum of the series satisfies $$|S| = \left|\sum_{n=0}^\infty a_n b_n \right| \le B \sum_{n=0}^\infty |a_{n+1}-a_n|.$$
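For instance (an illustrative choice, not from the original text), take $$a_n = 1/(n+1)$$ and $$b_n = (-1)^n$$: the partial sums $$B_N$$ are bounded by $$B = 1$$, the total variation $$\sum |a_{n+1}-a_n|$$ telescopes to $$1$$, and the series converges to $$\ln 2 \approx 0.693$$, comfortably within the bound:

```python
import math

# Illustration of the bound |S| <= B * sum |a_{n+1} - a_n| with the
# (arbitrary) choice a_n = 1/(n+1), b_n = (-1)^n, so S -> ln 2.
N = 100000
a = [1.0 / (n + 1) for n in range(N + 2)]
b = [(-1.0) ** n for n in range(N + 1)]

partial = 0.0   # running partial sum B_n of the b's
B = 0.0         # observed bound on |B_n|
S = 0.0         # partial sum S_N of a_n b_n
for n in range(N + 1):
    partial += b[n]
    B = max(B, abs(partial))
    S += a[n] * b[n]

var_a = sum(abs(a[n + 1] - a[n]) for n in range(N + 1))  # telescopes to ~1
assert abs(B - 1.0) < 1e-12
assert abs(S - math.log(2)) < 1e-4
assert abs(S) <= B * var_a + 1e-12
```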

Summation-by-parts operators for high order finite difference methods
A summation-by-parts (SBP) finite difference operator conventionally consists of a centered-difference interior scheme together with specific boundary stencils that mimic the behavior of the corresponding integration-by-parts formulation. Boundary conditions are usually imposed by the simultaneous approximation term (SAT) technique, and the combination SBP-SAT is a powerful framework for boundary treatment. The method is preferred for its well-proven stability in long-time simulation and its high order of accuracy.
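As a minimal sketch (assuming the classical second-order accurate SBP first-derivative operator on a hypothetical uniform grid), the defining property $$HD + (HD)^T = \operatorname{diag}(-1,0,\dots,0,1)$$, the discrete mirror of $$\int uv' + \int u'v = [uv]$$, can be checked directly:

```python
import numpy as np

# Classical second-order SBP first-derivative operator on N+1 grid points:
# central differences in the interior, one-sided stencils at the two
# boundaries, and the diagonal norm H = h * diag(1/2, 1, ..., 1, 1/2).
N, h = 10, 0.1
D = np.zeros((N + 1, N + 1))
D[0, :2] = [-1 / h, 1 / h]      # one-sided boundary stencil (left)
D[-1, -2:] = [-1 / h, 1 / h]    # one-sided boundary stencil (right)
for i in range(1, N):
    D[i, i - 1], D[i, i + 1] = -1 / (2 * h), 1 / (2 * h)

H = h * np.diag([0.5] + [1.0] * (N - 1) + [0.5])

# SBP property: H D + (H D)^T equals the boundary matrix
# diag(-1, 0, ..., 0, 1), the discrete analogue of the boundary term
# in integration by parts.
Bmat = np.zeros((N + 1, N + 1))
Bmat[0, 0], Bmat[-1, -1] = -1.0, 1.0
assert np.allclose(H @ D + (H @ D).T, Bmat)
```

The diagonal norm matrix $$H$$ plays the role of the quadrature weights, so $$u^T H v$$ approximates $$\int uv$$ and the property above guarantees an energy estimate for long-time stability.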