User:Jamesmath/sandbox

approximation of a first derivative
The definition of the derivative of a function gives the first and simplest finite difference.

$$f^\prime (x_0)\;=\;\lim_{x_1 \to x_0} \frac{f(x_1)-f(x_0)}{x_1-x_0} \quad \text{or} \quad f^\prime (x_0)\;=\;\lim_{h \to 0} \frac{f(x_0+h)-f(x_0)}{h}$$.

So the finite difference

$$\frac{d_h}{d_h x}\,f(x)\;=\;\frac{f(x+h)-f(x)}{h}\;=\;h^{-1}\,(f(x+h)-f(x))$$

can be defined. It is an approximation to $$f^\prime (x)$$ when $$h$$ is near $$0$$.

The finite difference approximation $$\frac{d_h^{\,n}}{d_h x^n}\,f(x)$$ for $$f^{(\,n)} (x)$$ is said to be of order $$k$$, if there exists  $$M\;>\;0$$  such that

$$\left\vert \frac{d_h^{\,n}}{d_h x^n}\,f(x) - f^{(\,n)} (x)\right\vert\;\le\;M\,h^k$$,

when $$h$$ is near $$0$$.

For practical reasons the order of a finite difference will be described under the assumption that $$f(x)$$  is sufficiently smooth that its Taylor expansion up to some order exists. For example, if

$$f(x+h)\;=\;f(x)+f^\prime (x)\,h+\tfrac{1}{2}\,f^{\prime\prime}(z_h)\,h^2$$

then

$$\frac{f(x+h)-f(x)}{h}\;=\;f^\prime (x)+\tfrac{1}{2}\,f^{\prime\prime}(z_h)\,h$$

so that

$$\frac{d_h}{d_h x}\,f(x)-f^\prime (x)\;=\;\tfrac{1}{2}\,f^{\prime\prime}(z_h)\,h$$,

meaning that the order of the approximation of $$f^\prime (x)$$ by  $$\frac{d_h}{d_h x}\,f(x)$$  is  $$1$$.
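The order-1 behavior is easy to observe numerically. A minimal sketch (the function $$\sin$$ and the point $$x_0=1$$ are illustrative choices, not from the text):

```python
import math

# Estimate f'(x0) for f = sin at x0 = 1 with the forward difference
# (f(x0+h) - f(x0))/h and watch the error shrink roughly linearly in h,
# as the order-1 bound |error| <= M*h predicts.
f, x0 = math.sin, 1.0
exact = math.cos(x0)
for h in [1e-1, 1e-2, 1e-3]:
    approx = (f(x0 + h) - f(x0)) / h
    err = abs(approx - exact)
    print(f"h = {h:.0e}  error = {err:.3e}  error/h = {err / h:.3f}")
```

The ratio error/h settles near $$\tfrac{1}{2}\left\vert f^{\prime\prime}(x_0)\right\vert$$, exactly as the remainder term $$\tfrac{1}{2}f^{\prime\prime}(z_h)\,h$$ predicts.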

The finite difference so far defined is a 2-point operator, since it requires 2 evaluations of $$f(x)$$. If

$$f(x+h)\;=\;f(x)+f^\prime (x)\,h+\tfrac{1}{2}\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,f^{\prime\prime\prime}(z_h)\,h^3$$

then another 2-point operator

$$\frac{d_h}{d_h x}\,f(x)\;=\;\frac{f(x+h)-f(x-h)}{2\,h}\;=\;(2\,h)^{-1}\,(f(x+h)-f(x-h))$$

can be defined. Since

$$\frac{f(x+h)-f(x-h)}{2\,h}\;=\;f^\prime (x)+\tfrac{1}{2}\,(\tfrac{1}{6}\,f^{\prime\prime\prime}(z_h)+\tfrac{1}{6}\,f^{\prime\prime\prime}(z_{-h}))\,h^2$$,

this $$\frac{d_h}{d_h x}\,f(x)$$  is of order 2, and is referred to as a centered difference operator. Centered difference operators are usually one order of accuracy higher than uncentered operators using the same number of points. More generally, for $$m$$  points $$x+\alpha_1\,h\,,\;x+\alpha_2\,h\,,\;\ldots\;,\;x+\alpha_m\,h$$ a finite difference operator

$$\frac{d_h}{d_h x}\,f(x)\;=\;h^{-1}\,(a_1\,f(x+\alpha_1\,h)+a_2\,f(x+\alpha_2\,h)+\;\ldots\;+a_m\,f(x+\alpha_m\,h))$$

is usually defined by choosing the coefficients $$a_1\,,\;a_2\,,\;\ldots,\;a_m$$ so that $$\frac{d_h}{d_h x}\,f(x)$$ has as high an order of accuracy as possible. Considering

$$\begin{align} f(x+h)\; & = \;f(x)+f^\prime (x)\,h+\tfrac{1}{2}\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,f^{\prime\prime\prime}(x)\,h^3 \\ & +\;\ldots\;+\tfrac{1}{(m-1)!}\,f^{(m-1)}(x)\,h^{m-1}+\tfrac{1}{m!}\,f^{(m)}(z_h)\,h^{m} \\ \end{align}$$.

Then

$$\begin{align} \frac{d_h}{d_h x}\,f(x)\; & = \;h^{-1}\,(c_1\,f(x)+c_2\,f^\prime (x)\,h+\tfrac{1}{2}\,c_3\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,c_4\,f^{\prime\prime\prime}(x)\,h^3 \\ & +\;\ldots\;+\tfrac{1}{(m-1)!}\,c_m\,f^{(m-1)}(x)\,h^{m-1}+\tfrac{1}{m!}\,R_m\,h^{m}) \\ \end{align}$$.

where the coefficients $$c_1\,,\;c_2\,,\;\ldots,\;c_m$$  form the right hand side of the Vandermonde-type system

$$ \begin{bmatrix} 1 & 1 & \cdots & 1 & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{m-1}^2 & \alpha_m^2 \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ \alpha_1^{m-2} & \alpha_2^{m-2} & \cdots & \alpha_{m-1}^{m-2} & \alpha_m^{m-2} \\ \alpha_1^{m-1} & \alpha_2^{m-1} & \cdots & \alpha_{m-1}^{m-1} & \alpha_m^{m-1} \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_{m-1} \\ a_{m} \\ \end{bmatrix} \quad = \quad \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_{m-1} \\ c_{m} \\ \end{bmatrix} $$

and

$$\begin{align} R_m\;= & \;a_1\,\alpha_1^m\,f^{(m)}(z_{\,\alpha_1\,h})+a_2\,\alpha_2^m\,f^{(m)}(z_{\,\alpha_2\,h})+a_3\,\alpha_3^m\,f^{(m)}(z_{\,\alpha_3\,h}) \\ & +\;\ldots\;+a_{m-1}\,\alpha_{m-1}^m\,f^{(m)}(z_{\,\alpha_{m-1}\,h})+a_m\,\alpha_m^m\,f^{(m)}(z_{\,\alpha_m\,h}) \\ \end{align}$$.

When the $$a_i\,\text{'s}$$  are chosen so that

$$c_1\;=\;0\,,\;\;c_2\;=\;1\,,\;\;c_3\;=\;\ldots\;=\;c_m\;=\;0$$

then

$$ \frac{d_h}{d_h x}\,f(x)\; = \;f^\prime (x)+\tfrac{1}{m!}\,R_m\,h^{m-1} $$.

so that the operator is of order $$m-1$$.
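This construction can be carried out numerically. A sketch (the points $$\alpha = 0, 1, 2$$ are an illustrative one-sided choice): solve the Vandermonde-type system with right hand side $$(0, 1, 0)$$, that is $$c_1=0$$, $$c_2=1$$, $$c_3=0$$, so the stencil matches $$f^\prime(x)$$ to order $$m-1 = 2$$.

```python
import numpy as np

# Build the Vandermonde-type system for the points alpha = (0, 1, 2)
# and solve for the stencil weights a.
alpha = np.array([0.0, 1.0, 2.0])
V = np.vander(alpha, increasing=True).T   # row i holds alpha**i
c = np.array([0.0, 1.0, 0.0])
a = np.linalg.solve(V, c)
print(a)                                  # the familiar weights -3/2, 2, -1/2

# Check against f = exp at x = 0 (f'(0) = 1) with a small h
h = 1e-3
approx = (a @ np.exp(alpha * h)) / h
print(abs(approx - 1.0))                  # error on the order of h^2
```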

At the end of the section a table for the first several values of $$m$$, the number of points, will be provided. The discussion will move on to the approximation of the second derivative.

approximation of a second derivative
The definition of the second derivative of a function

$$f^{\prime\prime}(x)\;=\;\lim_{h \to 0} \frac{f^{\prime}(x+h)-f^{\prime}(x)}{h}$$.

used together with the finite difference approximation for the first derivative

$$\frac{d_h}{d_h x}\,f(x)\;=\;\frac{f(x+h)-f(x)}{h}\;=\;h^{-1}\,(f(x+h)-f(x))$$

gives the finite difference

$$\begin{align} \frac{d_h^2}{d_h x^2}\,f(x)\;= & \;h^{-1}\,\big (h^{-1}\,(f(x+2\,h)-f(x+h))-h^{-1}\,(f(x+h)-f(x))\big) \\ = & \;h^{-2}\,(f(x+2\,h)-2\,f(x+h)+f(x)) \\ \end{align}$$

In view of

$$f(x+h)\;=\;f(x)+f^\prime (x)\,h+\tfrac{1}{2}\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,f^{\prime\prime\prime}(z_h)\,h^3$$

for the operator just defined

$$ \frac{d_h^2}{d_h x^2}\,f(x)\;=\;f^{\prime\prime}(x)+(\tfrac{4}{3}\,f^{\prime\prime\prime}(z_{\,2\,h})-\tfrac{1}{3}\,f^{\prime\prime\prime}(z_{h}))\,h $$.

If instead, the difference operator

$$\frac{d_h}{d_h x}\,f(x)\;=\;(2\,h)^{-1}\,(f(x+h)-f(x-h))$$

is used

$$\begin{align} \frac{d_h^2}{d_h x^2}\,f(x)\;= & \;h^{-1}\,\big ((2\,h)^{-1}\,(f(x+2\,h)-f(x))-(2\,h)^{-1}\,(f(x+h)-f(x-h))\big) \\ = & \;\tfrac{1}{2}\,h^{-2}\,(f(x+2\,h)-f(x+h)-f(x)+f(x-h)) \\ & \; \\ = & \;f^{\prime\prime}(x)+(\tfrac{2}{3}\,f^{\prime\prime\prime}(z_{\,2\,h})-\tfrac{1}{12}\,f^{\prime\prime\prime}(z_{h})-\tfrac{1}{12}\,f^{\prime\prime\prime}(z_{-h}))\,h \\ \end{align}$$

If the other obvious possibility is tried

$$\begin{align} \frac{d_h^2}{d_h x^2}\,f(x)\;= & \;h^{-1}\,\big (h^{-1}\,(f(x+h)-f(x))-h^{-1}\,(f(x)-f(x-h))\big) \\ = & \;h^{-2}\,(f(x+h)-2\,f(x)+f(x-h)) \\ \end{align}$$

In view of

$$f(x+h)\;=\;f(x)+f^\prime (x)\,h+\tfrac{1}{2}\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,f^{\prime\prime\prime}(x)\,h^3+\tfrac{1}{24}\,f^{(iv)}(z_h)\,h^4$$,

$$ \frac{d_h^2}{d_h x^2}\,f(x)\;=\;f^{\prime\prime}(x)+(\tfrac{1}{24}\,f^{(iv)}(z_{h})+\tfrac{1}{24}\,f^{(iv)}(z_{-h}))\,h^2 $$.

So $$\frac{d_h^2}{d_h x^2}\,f(x)\;=\;h^{-2}\,(f(x+h)-2\,f(x)+f(x-h))$$ is a second order centered finite difference approximation for $$f^{\prime\prime}(x)$$. The reasoning applied to the approximation of a first derivative can be used for the second derivative with only a few modifications.
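The second-order behavior of the centered difference can be checked numerically. A quick sketch (f = sin and $$x_0 = 1$$ are illustrative choices):

```python
import math

# Centered second difference (f(x+h) - 2 f(x) + f(x-h)) / h^2 for
# f = sin at x0 = 1.  Halving h should cut the error by about 4,
# consistent with an order-2 bound |error| <= M*h^2.
f, x0 = math.sin, 1.0
exact = -math.sin(x0)                      # f''(x) = -sin(x)
errs = []
for h in [1e-1, 5e-2, 2.5e-2]:
    approx = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2
    errs.append(abs(approx - exact))
for e_big, e_small in zip(errs, errs[1:]):
    print(f"error ratio after halving h: {e_big / e_small:.2f}")
```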

For $$m$$  points $$x+\alpha_1\,h\,,\;x+\alpha_2\,h\,,\;\ldots\;,\;x+\alpha_m\,h$$ a finite difference operator

$$\frac{d_h^2}{d_h x^2}\,f(x)\;=\;h^{-2}\,(a_1\,f(x+\alpha_1\,h)+a_2\,f(x+\alpha_2\,h)+\;\ldots\;+a_m\,f(x+\alpha_m\,h))$$

is usually defined by choosing the coefficients $$a_1\,,\;a_2\,,\;\ldots,\;a_m$$ so that $$\frac{d_h^2}{d_h x^2}\,f(x)$$ has as high an order of accuracy as possible. Considering

$$\begin{align} f(x+h)\; & = \;f(x)+f^\prime (x)\,h+\tfrac{1}{2}\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,f^{\prime\prime\prime}(x)\,h^3 \\ & +\;\ldots\;+\tfrac{1}{(m-1)!}\,f^{(m-1)}(x)\,h^{m-1}+\tfrac{1}{m!}\,f^{(m)}(z_h)\,h^{m} \\ \end{align}$$.

Then

$$\begin{align} \frac{d_h^2}{d_h x^2}\,f(x)\; & = \;h^{-2}\,(c_1\,f(x)+c_2\,f^\prime (x)\,h+\tfrac{1}{2}\,c_3\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,c_4\,f^{\prime\prime\prime}(x)\,h^3 \\ & +\;\ldots\;+\tfrac{1}{(m-1)!}\,c_m\,f^{(m-1)}(x)\,h^{m-1}+\tfrac{1}{m!}\,R_m\,h^{m}) \\ \end{align}$$.

where the coefficients $$c_1\,,\;c_2\,,\;\ldots,\;c_m$$  form the right hand side of the Vandermonde-type system

$$ \begin{bmatrix} 1 & 1 & \cdots & 1 & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{m-1}^2 & \alpha_m^2 \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ \alpha_1^{m-2} & \alpha_2^{m-2} & \cdots & \alpha_{m-1}^{m-2} & \alpha_m^{m-2} \\ \alpha_1^{m-1} & \alpha_2^{m-1} & \cdots & \alpha_{m-1}^{m-1} & \alpha_m^{m-1} \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_{m-1} \\ a_{m} \\ \end{bmatrix} \quad = \quad \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_{m-1} \\ c_{m} \\ \end{bmatrix} $$

and

$$\begin{align} R_m\;= & \;a_1\,\alpha_1^m\,f^{(m)}(z_{\,\alpha_1\,h})+a_2\,\alpha_2^m\,f^{(m)}(z_{\,\alpha_2\,h})+a_3\,\alpha_3^m\,f^{(m)}(z_{\,\alpha_3\,h}) \\ & +\;\ldots\;+a_{m-1}\,\alpha_{m-1}^m\,f^{(m)}(z_{\,\alpha_{m-1}\,h})+a_m\,\alpha_m^m\,f^{(m)}(z_{\,\alpha_m\,h}) \\ \end{align}$$.

When the $$a_i\,\text{'s}$$  are chosen so that

$$c_1\;=\;0\,,\;\;c_2\;=\;0\,,\;\;c_3\;=\;2\,,\;\;c_4\;=\;\ldots\;=\;c_m\;=\;0$$

then

$$ \frac{d_h^2}{d_h x^2}\,f(x)\; = \;f^{\prime\prime} (x)+\tfrac{1}{m!}\,R_m\,h^{m-2} $$.

so that the operator is of order $$m-2$$.
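A sketch of this construction for the centered points $$\alpha = (-1, 0, 1)$$ (an illustrative choice); the right hand side $$(0, 0, 2)$$ encodes $$c_1=c_2=0$$, $$c_3=2$$ and returns the classic stencil:

```python
import numpy as np

# Solve the Vandermonde-type system for the second-derivative weights
# on the centered points alpha = (-1, 0, 1).
alpha = np.array([-1.0, 0.0, 1.0])
V = np.vander(alpha, increasing=True).T   # row i holds alpha**i
c = np.array([0.0, 0.0, 2.0])             # c3 = 2 targets f''(x)
a = np.linalg.solve(V, c)
print(a)                                  # the stencil 1, -2, 1
```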

The effect of centering the points will be covered below.

approximation of higher derivatives
Although approximations to higher derivatives can be defined recursively from those for derivatives of lower order, the end result is the same finite difference operators. The Vandermonde type system will be used again for this purpose.

$$f^{(\,n)}(x)\;=\;\lim_{h \to 0} \frac{f^{\,(n-1)}(x+h)-f^{\,(n-1)}(x)}{h}$$.

The number of points needed to approximate $$f^{(\,n)}(x)$$ by finite differences is at least  $$n+1$$.

For $$m$$  points $$x+\alpha_1\,h\,,\;x+\alpha_2\,h\,,\;\ldots ,\;x+\alpha_m\,h$$ a finite difference operator

$$\frac{d_h^{\,n}}{d_h x^n}\,f(x)\;=\;h^{-n}\,(a_1\,f(x+\alpha_1\,h)+a_2\,f(x+\alpha_2\,h)+\;\ldots\;+a_m\,f(x+\alpha_m\,h))$$

is usually defined by choosing the coefficients $$a_1\,,\;a_2\,,\;\ldots,\;a_m$$ so that $$\frac{d_h^{\,n}}{d_h x^n}\,f(x)$$ approximates $$f^{(\,n)}(x)$$ to as high an order of accuracy as possible. Considering

$$\begin{align} f(x+h)\; = & \;f(x)+f^\prime (x)\,h+\tfrac{1}{2}\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,f^{\prime\prime\prime}(x)\,h^3 \\ & +\;\ldots\;+\tfrac{1}{(m-1)!}\,f^{(m-1)}(x)\,h^{m-1}+\tfrac{1}{m!}\,f^{(m)}(z_h)\,h^{m} \\ \end{align}$$.

Then

$$\begin{align} & \frac{d_h^{\,n}}{d_h x^n}\,f(x)\; = \;h^{-n}\,(c_1\,f(x)+c_2\,f^\prime (x)\,h+\tfrac{1}{2}\,c_3\,f^{\prime\prime}(x)\,h^2+\tfrac{1}{6}\,c_4\,f^{\prime\prime\prime}(x)\,h^3+ \\ & \ldots +\tfrac{1}{n!}\,c_{n+1}\,f^{(\,n)}(x)\,h^{n}+\;\ldots\;+\tfrac{1}{(m-1)!}\,c_m\,f^{(m-1)}(x)\,h^{m-1}+\tfrac{1}{m!}\,R_m\,h^{m}) \\ \end{align}$$.

where the coefficients $$c_1\,,\;c_2\,,\;\ldots,\;c_m$$  form the right hand side of the Vandermonde-type system

$$ \begin{bmatrix} 1 & 1 & \cdots & 1 & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_{m-1} & \alpha_m \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{m-1}^2 & \alpha_m^2 \\ \vdots & \vdots & \cdots & \vdots & \vdots \\ \alpha_1^{m-2} & \alpha_2^{m-2} & \cdots & \alpha_{m-1}^{m-2} & \alpha_m^{m-2} \\ \alpha_1^{m-1} & \alpha_2^{m-1} & \cdots & \alpha_{m-1}^{m-1} & \alpha_m^{m-1} \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_{m-1} \\ a_{m} \\ \end{bmatrix} \quad = \quad \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_{m-1} \\ c_{m} \\ \end{bmatrix} $$

and

$$\begin{align} R_m\;= & \;a_1\,\alpha_1^m\,f^{(m)}(z_{\,\alpha_1\,h})+a_2\,\alpha_2^m\,f^{(m)}(z_{\,\alpha_2\,h})+a_3\,\alpha_3^m\,f^{(m)}(z_{\,\alpha_3\,h}) \\ & +\;\ldots\;+a_{m-1}\,\alpha_{m-1}^m\,f^{(m)}(z_{\,\alpha_{m-1}\,h})+a_m\,\alpha_m^m\,f^{(m)}(z_{\,\alpha_m\,h}) \\ \end{align}$$.

When the $$a_i\,\text{'s}$$  are chosen so that

$$c_{n+1}\;=\;n!\,,\;\;\text{and}\;\;c_i\;=\;0\,,\quad\text{for}\;\;i\;\ne\;n+1$$

then

$$ \frac{d_h^{\,n}}{d_h x^n}\,f(x)\; = \;f^{(\,n)} (x)+\tfrac{1}{m!}\,R_m\,h^{m-n} $$.

so that the operator is of order $$m-n$$.

An alternative analysis is to require that the finite difference operator differentiates powers of $$x$$  exactly, up to the highest power possible.

effect of the placement of points
Usually the $$\alpha_i \text{'s}$$  are taken to be integer valued, since the points are intended to coincide with those of some division of an interval or of a 2- or 3-dimensional domain. If instead the points, and hence the $$\alpha_i \text{'s}$$, are chosen with only accuracy in mind, then at most one additional order of accuracy can be achieved.

Start by seeing how accurately $$f^\prime (x)$$ can be approximated with three points.

$$\frac{d_h}{d_h x}\,f(x)\;=\;h^{-1}\,(a_1\,f(x+\alpha_1\,h)+a_2\,f(x+\alpha_2\,h)+a_3\,f(x+\alpha_3\,h))$$

Then accuracy of order 4 can not be achieved, because it would require the solution of

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 \\ \alpha_1^4 & \alpha_2^4 & \alpha_3^4 \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \end{bmatrix} \quad = \quad \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix} $$

which can not be solved since the matrix

$$ \begin{bmatrix} \alpha_1^2 & \alpha_2^2 & \alpha_3^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 \\ \alpha_1^4 & \alpha_2^4 & \alpha_3^4 \\ \end{bmatrix} $$

is non-singular whenever the $$\alpha_i$$ are distinct and nonzero. The possibility of an $$\alpha_i$$ being $$0$$ can be ruled out separately, as is done for the general case below.

For accuracy of order 3

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \end{bmatrix} \quad = \quad \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ \end{bmatrix} $$

So for the system to be solvable the matrix

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 \\ \end{bmatrix} $$

is singular, and $$\alpha_1\,,\;\alpha_2\,,\;\alpha_3$$ are the roots of some polynomial $$p(\alpha )\;=\;\alpha^3+b_1\,\alpha^2+b_0$$.

Two examples are next.

$$\frac{d_h}{d_h x}\,f(x)\;=\;h^{-1}\,(-(\tfrac{\sqrt 3}{3}+\tfrac{1}{2})\,f(x+(\tfrac{\sqrt 3}{3}-1)\,h)+2\,\tfrac{\sqrt 3}{3}\,f(x+\tfrac{\sqrt 3}{3}\,h)-(\tfrac{\sqrt 3}{3}-\tfrac{1}{2})\,f(x+(\tfrac{\sqrt 3}{3}+1)\,h))$$

$$\frac{d_h}{d_h x}\,f(x)\;=\;h^{-1}\,(-\tfrac{9}{10}\,f(x-\tfrac{1}{2}\,h)+\tfrac{16}{15}\,f(x+\tfrac{3}{4}\,h)-\tfrac{1}{6}\,f(x+\tfrac{3}{2}\,h))$$
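Weights for such non-integer placements can be recomputed directly from the moment conditions. A sketch for the points $$\alpha = (-\tfrac{1}{2}, \tfrac{3}{4}, \tfrac{3}{2})$$, whose polynomial $$p(\alpha)$$ has no linear term, so the four conditions form a consistent $$4\times 3$$ system:

```python
import numpy as np

# Moment conditions c1 = 0, c2 = 1, c3 = 0, c4 = 0 for the points
# alpha = (-1/2, 3/4, 3/2); lstsq returns the exact weights since
# the overdetermined system is consistent.
alpha = np.array([-0.5, 0.75, 1.5])
V = np.vstack([alpha**0, alpha, alpha**2, alpha**3])
c = np.array([0.0, 1.0, 0.0, 0.0])
a, *_ = np.linalg.lstsq(V, c, rcond=None)
print(a)

# Order check with f = exp at x = 0 (f'(0) = 1): the error falls by
# roughly 10^3 when h falls by 10, as order 3 predicts.
for h in [1e-1, 1e-2]:
    print(abs((a @ np.exp(alpha * h)) / h - 1.0))
```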

To see how accurately $$f^{\prime\prime} (x)$$ can be approximated with three points, consider

$$\frac{d_h^2}{d_h x^2}\,f(x)\;=\;h^{-2}\,(a_1\,f(x+\alpha_1\,h)+a_2\,f(x+\alpha_2\,h)+a_3\,f(x+\alpha_3\,h))$$

Then accuracy of order 3 can not be achieved, because it would require the solution of

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 \\ \alpha_1^4 & \alpha_2^4 & \alpha_3^4 \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \end{bmatrix} \quad = \quad \begin{bmatrix} 0 \\ 0 \\ 2 \\ 0 \\ 0 \\ \end{bmatrix} $$

which can not be solved since the matrices

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 \\ \end{bmatrix} $$    and $$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^4 & \alpha_2^4 & \alpha_3^4 \\ \end{bmatrix} $$

would both need to be singular.

If the matrix

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 \\ \end{bmatrix} $$

is singular, then $$\alpha_1\,,\;\alpha_2\,,\;\alpha_3$$ are the roots of some polynomial $$p(\alpha )\;=\;\alpha^3+b_1\,\alpha+b_0$$ ,

implying

$$ \begin{bmatrix} \alpha_1^4 & \alpha_2^4 & \alpha_3^4 \\ \end{bmatrix} \;\; = \;\; - b_1\, \begin{bmatrix} \alpha_1^2 & \alpha_2^2 & \alpha_3^2 \\ \end{bmatrix} \; - \; b_0\, \begin{bmatrix} \alpha_1 & \alpha_2 & \alpha_3 \\ \end{bmatrix} $$

meaning that elementary row operations can transform

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^4 & \alpha_2^4 & \alpha_3^4 \\ \end{bmatrix} $$

to

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 \\ \end{bmatrix} $$

which is non-singular.

Conversely, if $$\alpha_1\,,\;\alpha_2\,,\;\alpha_3$$ are the roots of some polynomial $$p(\alpha )\;=\;\alpha^3+b_1\,\alpha+b_0$$, then

$$ \begin{bmatrix} 1 & 1 & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \end{bmatrix} \quad = \quad \begin{bmatrix} 0 \\ 0 \\ 2 \\ 0 \\ \end{bmatrix} $$

can be solved and $$f^{\prime\prime}(x)$$  approximated to an order of 2 accuracy.

Next, see how accurately $$f^\prime (x)$$ can be approximated with $$m$$  points.

$$\frac{d_h}{d_h x}\,f(x)\;=\;h^{-1}\,(a_1\,f(x+\alpha_1\,h)+a_2\,f(x+\alpha_2\,h)+\;\ldots\;+a_m\,f(x+\alpha_m\,h))$$

Then accuracy of order $$m+1$$  can not be achieved, because it would require the solution of

$$ \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 & \cdots & \alpha_m \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 & \cdots & \alpha_m^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 & \cdots & \alpha_m^3 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ \alpha_1^m & \alpha_2^m & \alpha_3^m & \cdots & \alpha_m^m \\ \alpha_1^{m+1} & \alpha_2^{m+1} & \alpha_3^{m+1} & \cdots & \alpha_m^{m+1} \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_m \\ \end{bmatrix} \quad = \quad \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix} $$

which can not be solved since the matrix

$$ \begin{bmatrix} \alpha_1^2 & \alpha_2^2 & \alpha_3^2 & \cdots & \alpha_m^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 & \cdots & \alpha_m^3 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ \alpha_1^m & \alpha_2^m & \alpha_3^m & \cdots & \alpha_m^m \\ \alpha_1^{m+1} & \alpha_2^{m+1} & \alpha_3^{m+1} & \cdots & \alpha_m^{m+1} \\ \end{bmatrix} $$

is non-singular. The possibility of an $$\alpha_i$$ being $$0$$ can be ruled out otherwise, because, for example, if $$\alpha_1\;=\;0$$,  then the non-singularity of the block

$$ \begin{bmatrix} \alpha_2^2 & \alpha_3^2 & \cdots & \alpha_m^2 \\ \alpha_2^3 & \alpha_3^3 & \cdots & \alpha_m^3 \\ \vdots & \vdots & \cdots & \vdots \\ \alpha_2^m & \alpha_3^m & \cdots & \alpha_m^m \\ \end{bmatrix} $$

would force $$a_2\;=\;a_3\;=\;\ldots\;=\;a_m\;=\;0$$.

For accuracy of order $$m$$

$$ \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 & \cdots & \alpha_m \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 & \cdots & \alpha_m^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 & \cdots & \alpha_m^3 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ \alpha_1^m & \alpha_2^m & \alpha_3^m & \cdots & \alpha_m^m \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_m \\ \end{bmatrix} \quad = \quad \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix} $$

So for the system to be solvable the matrix

$$ \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 & \cdots & \alpha_m^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 & \cdots & \alpha_m^3 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ \alpha_1^m & \alpha_2^m & \alpha_3^m & \cdots & \alpha_m^m \\ \end{bmatrix} $$

is singular and $$\alpha_1\,,\;\alpha_2\,,\;\alpha_3\,,\;\ldots ,\;\alpha_m$$ are the roots of some polynomial $$p(\alpha )\;=\;\alpha^m+b_{m-2}\,\alpha^{m-1}+\ldots+b_2\,\alpha^3+b_1\,\alpha^2+b_0$$.

The progression for the second, third, ... derivatives goes as follows.

If $$\alpha_1\,,\;\alpha_2\,,\;\alpha_3\,,\;\ldots ,\;\alpha_m$$ are the roots of some polynomial $$p(\alpha )\;=\;\alpha^m+b_{m-2}\,\alpha^{m-1}+\ldots+b_2\,\alpha^3+b_1\,\alpha+b_0$$ then the system

$$ \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 & \cdots & \alpha_m \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 & \cdots & \alpha_m^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 & \cdots & \alpha_m^3 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ \alpha_1^m & \alpha_2^m & \alpha_3^m & \cdots & \alpha_m^m \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ \vdots \\ a_m \\ \end{bmatrix} \quad = \quad \begin{bmatrix} 0 \\ 0 \\ 2 \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix} $$

can be solved, and

$$\frac{d_h^2}{d_h x^2}\,f(x)\;=\;h^{-2}\,(a_1\,f(x+\alpha_1\,h)+a_2\,f(x+\alpha_2\,h)+\;\ldots\;+a_m\,f(x+\alpha_m\,h))$$

approximates $$f^{\prime\prime}(x)$$  to an order of accuracy of $$m-1$$.

If $$\alpha_1\,,\;\alpha_2\,,\;\alpha_3\,,\;\ldots ,\;\alpha_m$$ are the roots of some polynomial $$p(\alpha )\;=\;\alpha^m+b_{m-2}\,\alpha^{m-1}+\ldots+b_3\,\alpha^4+b_2\,\alpha^2+b_1\,\alpha+b_0$$ then the system

$$ \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \alpha_3 & \cdots & \alpha_m \\ \alpha_1^2 & \alpha_2^2 & \alpha_3^2 & \cdots & \alpha_m^2 \\ \alpha_1^3 & \alpha_2^3 & \alpha_3^3 & \cdots & \alpha_m^3 \\ \alpha_1^4 & \alpha_2^4 & \alpha_3^4 & \cdots & \alpha_m^4 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ \alpha_1^m & \alpha_2^m & \alpha_3^m & \cdots & \alpha_m^m \\ \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \\ \vdots \\ a_m \\ \end{bmatrix} \quad = \quad \begin{bmatrix} 0 \\ 0 \\ 0 \\ 6 \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix} $$

can be solved, and

$$\frac{d_h^3}{d_h x^3}\,f(x)\;=\;h^{-3}\,(a_1\,f(x+\alpha_1\,h)+a_2\,f(x+\alpha_2\,h)+\;\ldots\;+a_m\,f(x+\alpha_m\,h))$$

approximates $$f^{\prime\prime\prime}(x)$$  to an order of accuracy of $$m-2$$.

Now, the analysis is not quite done yet. Return to the approximation of $$f^{\prime\prime}(x)$$: if, for the polynomial

$$p(\alpha )\;=\;\alpha^m+b_{m-2}\,\alpha^{m-1}+\ldots+b_2\,\alpha^3+b_1\,\alpha+b_0$$

it were that $$b_1\;=\;0$$,  then the system can be solved for one more order of accuracy. So the question arises as to whether polynomials of the form

$$p(\alpha )\;=\;\alpha^m+b_{m-2}\,\alpha^{m-1}+\ldots+b_2\,\alpha^3+b_0$$

exist that have $$m$$  distinct real roots. When $$m\;=\;3$$ there are none. So consider $$m\;=\;4$$.

$$p(\alpha )\;=\;\alpha^4+b_2\,\alpha^3+b_0$$

If $$p(\alpha )$$  has 4 distinct real roots, then

$$p^\prime (\alpha )\;=\;4\,\alpha^3+3\,b_2\,\alpha^2$$

has 3 distinct real roots, which it does not, since $$\alpha^2$$ divides $$p^\prime (\alpha )$$ and so $$p^\prime$$ has at most 2 distinct roots. So the order of approximation can not be improved. This is generally the case.

Return to the approximation of $$f^{\prime\prime\prime}(x)$$: if, for the polynomial

$$p(\alpha )\;=\;\alpha^m+b_{m-2}\,\alpha^{m-1}+\ldots+b_3\,\alpha^4+b_2\,\alpha^2+b_1\,\alpha+b_0$$

it were that $$b_2\;=\;0$$,  then the system can be solved for one more order of accuracy. So the question arises as to whether polynomials of the form

$$p(\alpha )\;=\;\alpha^m+b_{m-2}\,\alpha^{m-1}+\ldots+b_3\,\alpha^4+b_1\,\alpha+b_0$$

exist that have $$m$$  distinct real roots.

If $$p(\alpha )$$  has  $$m$$  distinct real roots, then

$$p^{\prime\prime}(\alpha )\;=\;m(m-1)\,\alpha^{m-2}+(m-1)(m-2)\,b_{m-2}\,\alpha^{m-3}+\ldots+12\,b_3\,\alpha^2$$

has $$m-2$$  distinct real roots, which it does not, since $$\alpha^2$$ divides $$p^{\prime\prime}(\alpha )$$ and so $$p^{\prime\prime}$$ has at most $$m-3$$ distinct roots. So the order of approximation can not be improved.

For functions of a complex variable, using roots of unity for example, higher orders of approximation may be obtained, since complex $$\alpha_i$$  are allowed.

Trigonometric polynomials
Let

$$p(x)\;=\;a_0+\textstyle\sum_{\,k\,=\,1}^{m}(a_k\,\cos(k\,\pi\,x)+b_k\,\sin(k\,\pi\,x))$$

be a trigonometric polynomial defined on $$-1\;\le\;x\;\le\;1$$.

Define the inner product on such trigonometric polynomials by

$$<p_1\,,\;p_2>\;=\;\int_{-1}^{1}p_1(x)\,\overline{p_2(x)}\,dx$$.

In light of the orthogonality relations

$$<\sin(k\,\pi\,x)\,,\;\sin(r\,\pi\,x)>\;=\;0 \quad \text{and} \quad <\cos(k\,\pi\,x)\,,\;\cos(r\,\pi\,x)>\;=\;0\,,\quad\text{when}\;\;k\;\ne\;r$$,

and $$<\sin(k\,\pi\,x)\,,\;\cos(r\,\pi\,x)>\;=\;0$$ for all $$k$$  and  $$r$$,

inner products can be calculated easily. For

$$p_1(x)\;=\;a_{0\,,\,1}+\textstyle\sum_{\,k\,=\,1}^{m}(a_{k\,,\,1}\,\cos(k\,\pi\,x)+b_{k\,,\,1}\,\sin(k\,\pi\,x))$$

$$p_2(x)\;=\;a_{0\,,\,2}+\textstyle\sum_{\,k\,=\,1}^{m}(a_{k\,,\,2}\,\cos(k\,\pi\,x)+b_{k\,,\,2}\,\sin(k\,\pi\,x))$$

the inner product is given by

$$<p_1\,,\;p_2>\;=\;2\,a_{0\,,\,1}\,\overline{a_{0\,,\,2}}+\textstyle\sum_{\,k\,=\,1}^{m}(a_{k\,,\,1}\,\overline{a_{k\,,\,2}}+b_{k\,,\,1}\,\overline{b_{k\,,\,2}})$$.

Definition of a shift operator
Define the shift operator $$(sft)_h$$  on  $$p(x)$$  by

$$(sft)_h\,p(x)\;=\;p(x-h)$$.

Since

$$ \sin(k\,\pi (x-h))\;=\;\cos(k\,\pi\,h)\sin(k\,\pi\,x)- \sin(k\,\pi\,h)\cos(k\,\pi\,x) $$

and

$$ \cos(k\,\pi (x-h))\;=\;\cos(k\,\pi\,h)\cos(k\,\pi\,x)+ \sin(k\,\pi\,h)\sin(k\,\pi\,x) $$,

it follows that

$$(sft)_h\,p(x)\;=\;a_0+\textstyle\sum_{\,k\,=\,1}^{m}\big((a_k\,\cos(k\,\pi\,h)-b_k\,\sin(k\,\pi\,h))\,\cos(k\,\pi\,x)+(a_k\,\sin(k\,\pi\,h)+b_k\,\cos(k\,\pi\,h))\,\sin(k\,\pi\,x)\big)$$.

Approximation by trigonometric polynomials
Let $$f(x)$$  be a function defined on and periodic with respect to the interval  $$-1\;\le\;x\;\le\;1$$. That is, $$f(x+2)\;=\;f(x)$$.

The $$m\,\text{th}$$  degree trigonometric polynomial approximation to  $$f(x)$$  is given by

$$p(x)\;=\;a_0+\textstyle\sum_{\,k\,=\,1}^{m}(a_k\,\cos(k\,\pi\,x)+b_k\,\sin(k\,\pi\,x))$$

where

$$a_0\;=\;\tfrac{1}{2}\int_{-1}^{1}f(x)\,dx\,,\quad a_k\;=\;\int_{-1}^{1}f(x)\,\cos(k\,\pi\,x)\,dx\,,\quad b_k\;=\;\int_{-1}^{1}f(x)\,\sin(k\,\pi\,x)\,dx$$.

$$p(x)$$ approximates  $$f(x)$$  in the sense that

$$\int_{-1}^{1}\left\vert f(x)-p(x) \right\vert^2\,dx$$

is minimized over all trigonometric polynomials of degree $$m$$ by $$p(x)$$.

In fact

$$\int_{-1}^{1}\left\vert f(x)-p(x) \right\vert^2\,dx\; =\;\int_{-1}^{1}(f(x)-p(x))(\overline{f(x)}-\overline{p(x)})\,dx$$ $$=\;\int_{-1}^{1}(\left\vert f(x) \right\vert^2-2\,\Re (f(x)\overline{p(x)})+\left\vert p(x) \right\vert^2)\,dx$$.

The term in the center is

$$\int_{-1}^{1} \Re (f(x)\overline{p(x)})\,dx\;=\;\overline{a_0}\int_{-1}^{1}f(x)\,dx+\textstyle\sum_{\,k\,=\,1}^{m}\big(\overline{a_k}\,\int_{-1}^{1}f(x)\,\cos(k\,\pi\,x)\,dx+\overline{b_k}\,\int_{-1}^{1}f(x)\,\sin(k\,\pi\,x)\,dx\big)$$

$$ =\;2\,\left\vert a_0 \right\vert^2+\textstyle\sum_{\,k\,=\,1}^{m}(\left\vert a_k \right\vert^2+\left\vert b_k \right\vert^2)\;=\; \int_{-1}^{1} \left\vert p(x) \right\vert^2\,dx $$,

so that

$$\int_{-1}^{1}\left\vert f(x)-p(x) \right\vert^2\,dx\;=\;\int_{-1}^{1}\left\vert f(x) \right\vert^2\,dx-\int_{-1}^{1}\left\vert p(x) \right\vert^2\,dx$$.

If $$p(x)$$  and  $$q(x)$$  are the  $$m\,\text{th}$$  degree trigonometric polynomial approximations to  $$f(x)$$  and  $$g(x)$$,  then the  $$m\,\text{th}$$  degree trigonometric polynomial approximation to  $$f(x)+\alpha\,g(x)$$  is given by  $$p(x)+\alpha\,q(x)$$.

This follows immediately from the formulas for the coefficients, since

$$\int_{-1}^1\,(f(x)+\alpha\,g(x))\,\cos(k\,\pi\,x)\,dx\;=\;\int_{-1}^1\,f(x)\,\cos(k\,\pi\,x)\,dx+\alpha \int_{-1}^1\,g(x)\,\cos(k\,\pi\,x)\,dx$$ and $$\int_{-1}^1\,(f(x)+\alpha\,g(x))\,\sin(k\,\pi\,x)\,dx\;=\;\int_{-1}^1\,f(x)\,\sin(k\,\pi\,x)\,dx+\alpha \int_{-1}^1\,g(x)\,\sin(k\,\pi\,x)\,dx$$.

Fundamental Property
Generally, if $$p(x)$$  is the  $$m\,\text{th}$$  degree trigonometric polynomial approximation to a function  $$f(x)$$,  periodic on  $$-1\;\le\;x\;\le\;1$$,  then  $$(sft)_h\,p(x)$$  is the  $$m\,\text{th}$$  degree trigonometric polynomial approximation to  $$f(x-h)$$.

To see this, calculate the coefficients of the trigonometric polynomial approximation to $$f(x-h)$$.

$$ \int_{-1}^1\,f(x-h)\,\cos(k\,\pi\,x)\,dx\; =\;\int_{-1-h}^{1-h}\,f(x)\,\cos(k\,\pi\,(x+h))\,dx $$

$$ =\;\int_{-1-h}^{1-h}\,f(x)\,(\cos(k\,\pi\,h)\cos(k\,\pi\,x)- \sin(k\,\pi\,h)\sin(k\,\pi\,x))\,dx $$

$$ =\;\cos(k\,\pi\,h)\,\int_{-1-h}^{1-h}\,f(x)\,\cos(k\,\pi\,x)\,dx- \sin(k\,\pi\,h)\int_{-1-h}^{1-h}\,f(x)\,\sin(k\,\pi\,x)\,dx $$

$$ =\;\cos(k\,\pi\,h)\,\int_{-1}^{1}\,f(x)\,\cos(k\,\pi\,x)\,dx- \sin(k\,\pi\,h)\int_{-1}^{1}\,f(x)\,\sin(k\,\pi\,x)\,dx $$

$$ =\;a_k\,\cos(k\,\pi\,h)\,-\, b_k\,\sin(k\,\pi\,h) $$,

where

$$a_k\,=\;\int_{-1}^1\,f(x)\,\cos(k\,\pi\,x)\,dx$$ and $$b_k\,=\;\int_{-1}^1\,f(x)\,\sin(k\,\pi\,x)\,dx$$.

$$ \int_{-1}^1\,f(x-h)\,\sin(k\,\pi\,x)\,dx\; =\;\int_{-1-h}^{1-h}\,f(x)\,\sin(k\,\pi\,(x+h))\,dx $$

$$ =\;\int_{-1-h}^{1-h}\,f(x)\,(\cos(k\,\pi\,h)\sin(k\,\pi\,x)+ \sin(k\,\pi\,h)\cos(k\,\pi\,x))\,dx $$

$$ =\;\cos(k\,\pi\,h)\,\int_{-1-h}^{1-h}\,f(x)\,\sin(k\,\pi\,x)\,dx+ \sin(k\,\pi\,h)\int_{-1-h}^{1-h}\,f(x)\,\cos(k\,\pi\,x)\,dx $$

$$ =\;\cos(k\,\pi\,h)\,\int_{-1}^{1}\,f(x)\,\sin(k\,\pi\,x)\,dx+ \sin(k\,\pi\,h)\int_{-1}^{1}\,f(x)\,\cos(k\,\pi\,x)\,dx $$

$$ =\;a_k\,\sin(k\,\pi\,h)\,+\, b_k\,\cos(k\,\pi\,h) $$.

Comparing these results with the coefficients of $$(sft)_h\,p(x)$$ finishes the observation.
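The coefficient rotation derived above can be sanity-checked numerically. A sketch with randomly chosen coefficients (the degree and the shift are arbitrary choices):

```python
import numpy as np

# Shifting a trigonometric polynomial is the same as rotating each
# coefficient pair (a_k, b_k) by the angle k*pi*h.
rng = np.random.default_rng(0)
m = 4
a0 = rng.standard_normal()
a = rng.standard_normal(m)       # cos coefficients a_1..a_m
b = rng.standard_normal(m)       # sin coefficients b_1..b_m

def p(x, a0, a, b):
    k = np.arange(1, m + 1)
    return a0 + np.sum(a * np.cos(k * np.pi * x) + b * np.sin(k * np.pi * x))

h = 0.3
k = np.arange(1, m + 1)
a_shift = a * np.cos(k * np.pi * h) - b * np.sin(k * np.pi * h)
b_shift = a * np.sin(k * np.pi * h) + b * np.cos(k * np.pi * h)

for x in np.linspace(-1, 1, 7):
    assert np.isclose(p(x, a0, a_shift, b_shift), p(x - h, a0, a, b))
print("shift identity verified")
```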

Another detail of use is

$$\int_{-1}^{1}\left\vert f(x-h)-(sft)_h\,p(x) \right\vert^2\,dx$$ $$=\;\int_{-1-h}^{1-h}\left\vert f(x)-p(x) \right\vert^2\,dx \;=\;\int_{-1}^{1}\left\vert f(x)-p(x) \right\vert^2\,dx$$

which says that $$(sft)_h\,p(x)$$ approximates $$f(x-h)$$ exactly as well as $$p(x)$$ approximates $$f(x)$$.

Error Estimation
A result used in error estimation follows. When $$s(x)$$  is a sine polynomial

$$s(x)\;=\;\textstyle\sum_{\,k\,=\,1}^{\,m}\,b_k\,\sin(k\,\pi\,x)$$

then

$$\lVert s(x) \rVert^2\;=\;\textstyle\sum_{\,k\,=\,1}^{\,m}\,\left\vert b_k \right\vert^2$$

and

$$\lVert s(x)-(sft)_h\,s(x) \rVert^2\;=\;\textstyle\sum_{\,k\,=\,1}^{\,m}\,2\,\left\vert b_k \right\vert^2\,(1-\cos(k\,\pi\,h))$$.

Simple Functions
Let $$0\;=\;x_0\;<\;x_1\;<\;x_2\;<\;\ldots\;\;<\;x_{n-1}\;<\;x_n\;=\;1$$

so that

$$ \{\; [x_0\,,\;x_1 ]\,,\;(x_1\,,\;x_2 ]\,,\;\ldots ,\;(x_{n-2}\,,\;x_{n-1} ]\,,\;(x_{n-1}\,,\;x_n ]\;\} $$

is a partition of $$0\;\le\;x\;\le\;1$$.

The function $$f(x)$$  is said to be simple on $$0\;\le\;x\;\le\;1$$  if it is constant on each subinterval of the partition, say $$f(x)\;=\;f_{i-1}$$  on the  $$i\,\text{th}$$  subinterval.

Of particular interest is when the points have equal spacing $$x_i-x_{i-1}\;=\;\tfrac{1}{n}$$.

The intent is to make estimates of $$\int_0^1\,\left\vert f(x)-f(x-h) \right\vert^2\,dx$$.

Begin by making an odd extension of $$f(x)$$  to $$-1\;\le\;x\;\le\;1$$ by setting  $$f(-x)\;=\;-f(x)$$  and continue the definition by extending  $$f(x)$$  periodically.

Then approximate $$f(x)$$  with a sine polynomial

$$s(x)\;=\;\textstyle\sum_{\,k\,=\,1}^{\,m}\,b_k\,\sin(k\,\pi\,x)$$

where

$$b_k\,=\;2\,\int_0^1\,f(x)\,\sin(k\,\pi\,x)\,dx$$.

When $$m$$  is large enough that some of the indices  $$k$$  are divisible by  $$2\,n$$,  then, for $$k\;=\;2\,n\,r$$,

$$\begin{align} b_k\,= & \;2\,\textstyle\sum_{i\,=\,1}^{n}\int_{x_{i-1}}^{x_i}\,f(x)\,\sin(k\,\pi\,x)\,dx \\ = & \;2\,\textstyle\sum_{i\,=\,1}^{n}\,f_{i-1}\int_{(i-1)\tfrac{1}{n}}^{i\,\tfrac{1}{n}}\,\sin(k\,\pi\,x)\,dx \\ \end{align}$$,

and letting $$x\;=\;\tfrac{1}{n\,r}\,u$$,

$$\int_{(i-1)\tfrac{1}{n}}^{\,i\,\tfrac{1}{n}}\,\sin(k\,\pi\,x)\,dx \;=\;\tfrac{1}{n\,r}\int_{(i-1)\,r}^{\,i\,r}\,\sin(2\,\pi\,u)\,du\;=\;0$$,

so that

$$b_k\;=\;b_{\,2\,n\,r}\;=\;0$$.
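A numerical illustration of the vanishing coefficients (the step values and $$n=3$$ are arbitrary choices); the sine coefficients are computed from the exact piecewise integrals, and those with index divisible by $$2\,n$$ come out to zero:

```python
import numpy as np

# Step function on n = 3 equal subintervals of [0, 1] with values vals.
n = 3
vals = np.array([1.0, -2.0, 0.5])          # f_0, f_1, f_2

def b(k):
    # exact integral of 2 * f(x) * sin(k*pi*x) over [0, 1], piecewise
    edges = np.arange(n + 1) / n
    anti = -np.cos(k * np.pi * edges) / (k * np.pi)   # antiderivative of sin
    return 2.0 * np.sum(vals * (anti[1:] - anti[:-1]))

for k in range(1, 13):
    tag = "  <- k divisible by 2n" if k % (2 * n) == 0 else ""
    print(f"b_{k} = {b(k):+.6f}{tag}")
```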

Now, return to the sum

$$ \lVert s(x)-(sft)_hs(x)\rVert^2\;=\; \textstyle\sum_{\,k\,=\,1}^{\,m}\,2\,\left\vert b_k \right\vert^2\,(1-\cos(k\,\pi\,h)) $$

with $$b_{\,2\,n\,r}\;=\;0$$  for  $$r\;=\;1\,,\;2\,,\;3\,,\ldots$$.

If $$\gcd(\,j,2\,n)\,=\,1$$  and  $$(2\,n)\not\!\backslash\;k$$,  then  $$(2\,n)\not\!\backslash\;k\,j$$,  and in this case

$$\cos(k\,\pi\,j\,\tfrac{1}{n})\;\ne\;1$$ and  $$(1-\cos(k\,\pi\,j\,\tfrac{1}{n}))\;\ge\;(1-\cos(\pi\,\tfrac{1}{n}))$$.

So for $$h\;=\;j\,\tfrac{1}{n}$$   with  $$\gcd(\,j,2\,n)\,=\,1$$ ,

$$ \lVert s(x)-(sft)_hs(x)\rVert^2\;\ge\; \textstyle\sum_{\,k\,=\,1}^{\,m}\,2\,\left\vert b_k \right\vert^2\,(1-\cos(\pi\,\tfrac{1}{n})) $$

Next observe that if $$s(x)$$  is the  $$m\,\text{th}$$  degree sine polynomial approximation to  $$f(x)$$,  then  $$(sft)_h\,s(x)$$  is the  $$m\,\text{th}$$  degree trigonometric polynomial approximation to  $$f(x-h)$$. The assumption that $$f(x)$$  is odd and periodic is still in effect.

Finally the intended results follow.

$$\lVert s(x)-(sft)_h\,s(x) \rVert$$ $$=\;\lVert (f(x)-f(x-h))+(s(x)-f(x))-((sft)_h\,s(x)-f(x-h))\rVert$$ $$\le\;\lVert f(x)-f(x-h) \rVert+\lVert f(x)-s(x) \rVert+\lVert f(x-h)-(sft)_h\,s(x) \rVert$$ $$=\;\lVert f(x)-f(x-h) \rVert+2\,\lVert f(x)-s(x) \rVert$$

so that

$$\lVert f(x)-f(x-h) \rVert\;\ge\;\lVert s(x)-(sft)_h\,s(x) \rVert-2\,\lVert f(x)-s(x) \rVert$$.

Making use of the estimate above,

$$\lVert f(x)-f(x-h) \rVert\;\ge\;\sqrt{2\,(1-\cos(\pi\,\tfrac{1}{n}))}\,\lVert s(x) \rVert-2\,\lVert f(x)-s(x) \rVert$$.

Since simple functions can be approximated by trigonometric polynomials to arbitrary accuracy, letting $$\lVert f(x)-s(x) \rVert\;\to\;0$$  gives

$$\lVert f(x)-f(x-h) \rVert\;\ge\;\sqrt{2\,(1-\cos(\pi\,\tfrac{1}{n}))}\,\lVert f(x) \rVert$$.

Now, for $$h\;=\;\tfrac{1}{n}$$,  and using the definition of the simple function  $$f(x)$$

$$\lVert f(x) \rVert^2\;=\;\frac{2}{n}\textstyle\sum_{\,i\,=\,0}^{\,n-1}\left\vert f_i \right\vert^2$$

To find the sum for $$\lVert f(x)-f(x-h) \rVert^2$$,  list the values of  $$f(x)$$  over the values of  $$f(x-h)$$  on the whole interval  $$-1\;\le\;x\;\le\;1$$.

$$\begin{align} -f_{n-1} & -f_{n-2} & \ldots & -f_1 & -f_0 & \; & f_0 & \; & f_1 & \; & f_2 & \ldots & f_{n-2} & \; & f_{n-1} \\ f_{n-1} & -f_{n-1} & \ldots & -f_2 & -f_1 & \; & -f_0 & \; & f_0 & \; & f_1 & \ldots & f_{n-3} & \; & f_{n-2} \end{align}$$

This gives

$$\lVert f(x)-f(x-h) \rVert^2\;=\;\frac{2}{n}\big (2\,\left\vert f_0 \right\vert^2+\textstyle\sum_{\,i\,=\,1}^{\,n-1}\left\vert f_i-f_{i-1} \right\vert^2+2\,\left\vert f_{n-1} \right\vert^2\big )$$.

So the inequality follows:

$$2\,\left\vert f_0 \right\vert^2+\textstyle\sum_{\,i\,=\,1}^{\,n-1}\left\vert f_i-f_{i-1} \right\vert^2+2\,\left\vert f_{n-1} \right\vert^2\;\ge\;2\,(1-\cos(\pi\,\tfrac{1}{n}))\,\textstyle\sum_{\,i\,=\,0}^{\,n-1}\left\vert f_i \right\vert^2$$.

Definition of a Vector Norm
The most ordinary kind of vectors are those consisting of ordered n-tuples of real or complex numbers. They may be written in row $$ < \; x \quad y \quad z \; >, $$ $$\begin{bmatrix}x & y & z \\ \end{bmatrix},$$ $$( \; x, \quad y, \quad z \; ), $$ or column $$\begin{bmatrix}x \\ y \\ z \\ \end{bmatrix}$$ forms. Commas or other separators of components or coordinates may or may not be used. When a vector has many elements, notations like $$\begin{bmatrix}x_1 & x_2 & \cdots & x_n \\ \end{bmatrix}$$  or  $$\begin{bmatrix}x_1 & x_2 & \cdots & x_n \\ \end{bmatrix}^T$$   are often used. A most popular notation to indicate a vector is $$\vec{v}\;=\;<v_1 \;\; v_2 \; \cdots \; v_n>$$. Vectors are usually added component-wise: for $$\vec{v}\;=\;<v_1 \;\; v_2 \; \cdots \; v_n>$$ and $$\vec{w}\;=\;<w_1 \;\; w_2 \; \cdots \; w_n>$$, $$\vec{v}+\vec{w}\;=\;<(v_1+w_1) \;\; (v_2+w_2) \; \cdots \; (v_n+w_n)>$$. Scalar multiplication is defined by $$\alpha\,\vec{v}\;=\;<\alpha\,v_1 \;\, \alpha\,v_2 \; \cdots \; \alpha\,v_n>$$.

A vector norm is a generalization of ordinary absolute value $$\left\vert x \right\vert$$  of a real or complex number.

For $$\vec{u}$$  and  $$\vec{v}$$, vectors, and $$\alpha$$,  a scalar, a vector norm is a real value $$\lVert \cdot \rVert$$  associated with a vector for which the following properties hold.

$$\begin{align} (i) \;\;\; \quad & \lVert \, \vec{v} \, \rVert \; \ge \; 0 \\ (ii) \;\; \quad & \lVert \, \vec{v} \, \rVert \; = \; 0 \iff \; \vec{v} \; = \; 0 \\ (iii) \; \quad & \lVert \, \alpha \, \vec{v} \, \rVert \; = \; \left\vert \alpha \right\vert \, \lVert \, \vec{v} \, \rVert \\ (iv) \;\; \quad & \lVert \, \vec{v}+\vec{w} \, \rVert \;\; \le \;\; \lVert \, \vec{v} \,\rVert \; + \; \lVert \, \vec{w} \, \rVert \; \end{align}$$.

Common Vector Norms
The most commonly used norms are:

$$\begin{align} \quad & \lVert \, \vec{v} \, \rVert_2 \; = \; \sqrt{{\left\vert v_1 \right\vert}^2 +{\left\vert v_2 \right\vert}^2 +\ldots+{\left\vert v_n \right\vert}^2 } \\ \quad & \lVert \, \vec{v} \, \rVert_1 \; = \; \left\vert v_1 \right\vert + \left\vert v_2 \right\vert + \ldots + \left\vert v_n \right\vert \\ \quad & \lVert \, \vec{v} \, \rVert_{\infty} \; = \; \underset{1\,\le\,i\,\le\,n}{\max \left\vert v_i \right\vert} \\ \quad & \lVert \, \vec{v} \, \rVert_p \; = \; (\left\vert v_1 \right\vert^p + \left\vert v_2 \right\vert^p + \ldots + \left\vert v_n \right\vert^p)^{\frac{1}{p}} \\ \end{align}$$.
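These formulas translate directly into code. The following Python sketch (the sample vector is a made-up example) computes each of the norms above:

```python
import math

# A hypothetical sample vector used only for illustration.
v = [3.0, -4.0, 12.0]

def norm_2(v):
    # Euclidean norm: square root of the sum of squared moduli.
    return math.sqrt(sum(abs(x) ** 2 for x in v))

def norm_1(v):
    # Sum of absolute values.
    return sum(abs(x) for x in v)

def norm_inf(v):
    # Largest absolute component.
    return max(abs(x) for x in v)

def norm_p(v, p):
    # General p-norm; reduces to the cases above for p = 1 and p = 2.
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

print(norm_2(v))    # 13.0, since 3^2 + 4^2 + 12^2 = 169
print(norm_1(v))    # 19.0
print(norm_inf(v))  # 12.0
```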

Any two norms on $$n$$ dimensional vectors of complex numbers are topologically equivalent in the sense that, if $$\lVert \cdot \rVert_a$$ and  $$\lVert \cdot \rVert_b$$  are two different norms, then there exist positive constants  $$c_1$$  and  $$c_2$$  such that  $$c_1\,\lVert \cdot \rVert_a \; \le \; \lVert \cdot \rVert_b \; \le \; c_2\,\lVert \cdot \rVert_a$$.
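For instance, with $$\lVert \cdot \rVert_a\;=\;\lVert \cdot \rVert_2$$ and $$\lVert \cdot \rVert_b\;=\;\lVert \cdot \rVert_1$$ one may take $$c_1\;=\;1$$ and $$c_2\;=\;\sqrt n$$. A Python sketch checking these particular constants on random vectors (an illustration, not a proof):

```python
import math, random

def equivalence_holds(v):
    """Check c1*||v||_2 <= ||v||_1 <= c2*||v||_2 with c1 = 1, c2 = sqrt(n)."""
    n1 = sum(abs(x) for x in v)
    n2 = math.sqrt(sum(x * x for x in v))
    n = len(v)
    return n2 <= n1 + 1e-12 and n1 <= math.sqrt(n) * n2 + 1e-9

random.seed(0)
trials = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(1000)]
print(all(equivalence_holds(v) for v in trials))  # True
```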

Inner Product of Vectors
The inner product (or dot product) of two vectors $$\vec{v}\;=\;<v_1 \;\, v_2 \; \cdots \; v_n>$$ and $$\vec{w}\;=\;<w_1 \;\, w_2 \; \cdots \; w_n>$$ is defined by  $$\vec{v}\,\cdot\,\vec{w}\;=\;v_1\,w_1+v_2\,w_2+ \; \cdots \; +v_n\,w_n$$, or, when $$\vec{v}$$  and  $$\vec{w}$$ are complex valued, by

$$\vec{v}\,\cdot\,\vec{w}\;=\;v_1\,\overline{w_1}+v_2\,\overline{w_2}+ \; \cdots \; +v_n\,\overline{w_n}$$.
It is often indicated by any one of several notations: $$\vec{v}\,\cdot\,\vec{w}\,,\quad\vec{v}^{\,T}\vec{w}\,,\quad <\vec{v}\,,\;\vec{w}>,$$   or    $$(\vec{v}\,,\;\vec{w})$$.

Besides the dot product, other inner products are defined to be a rule that assigns to each pair of vectors $$\vec{v}\,,\;\vec{w}$$,  a complex number with the following properties.

$$<\alpha\,\vec{v}\,,\;\vec{w}>\;=\;\alpha\,<\vec{v}\,,\;\vec{w}>$$ $$<\vec{v}\,,\;\vec{w}>\;=\;\overline{<\vec{w}\,,\;\vec{v}>}$$ $$<\vec{v_1}+\vec{v_2}\,,\;\vec{w}>\;=\;<\vec{v_1}\,,\;\vec{w}>+<\vec{v_2}\,,\;\vec{w}>$$

for $$\vec v\;\ne\;0$$,  $$<\vec{v}\,,\;\vec{v}>$$  is real valued and positive

$$<\vec{v}\,,\;\vec{v}>\;>\;0$$

and

$$\left\vert <\vec{v}\,,\;\vec{w}> \right\vert^2\;\le\;<\vec{v}\,,\;\vec{v}><\vec{w}\,,\;\vec{w}>$$

An inner product defines a norm by $$\lVert \, \vec{v} \, \rVert\;=\;\sqrt{<\vec{v}\,,\;\vec{v}>}$$.

Inequalities Involving Norms
The Cauchy-Schwarz and Hölder inequalities are commonly employed.

$$\left\vert\,\vec{v}\,\cdot\,\vec{w}\,\right\vert\;\le\;\lVert \, \vec{v} \, \rVert_2\,\lVert \, \vec{w} \, \rVert_2$$

$$\left\vert\,\vec{v}\,\cdot\,\vec{w}\,\right\vert\;\le\;\lVert \, \vec{v} \, \rVert_p\,\lVert \, \vec{w} \, \rVert_q$$   for  $$\tfrac{1}{p}+\tfrac{1}{q}\;=\;1$$
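Both inequalities are easy to test numerically. The Python sketch below (with made-up random vectors; `holder_holds` is an illustrative helper) checks Hölder's inequality for several exponent pairs, including the Cauchy-Schwarz case $$p\;=\;q\;=\;2$$:

```python
import random

def holder_holds(v, w, p):
    """|v.w| <= ||v||_p ||w||_q with 1/p + 1/q = 1 (p = q = 2 is Cauchy-Schwarz)."""
    q = p / (p - 1.0)
    dot = abs(sum(a * b for a, b in zip(v, w)))
    norm_p = sum(abs(a) ** p for a in v) ** (1.0 / p)
    norm_q = sum(abs(b) ** q for b in w) ** (1.0 / q)
    return dot <= norm_p * norm_q + 1e-9

random.seed(1)
v = [random.uniform(-1, 1) for _ in range(6)]
w = [random.uniform(-1, 1) for _ in range(6)]
print(all(holder_holds(v, w, p) for p in (2.0, 3.0, 1.5)))  # True
```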

Specialized Norms
There are a number of less well known, but important norms. These norms are important in the analysis of many physical problems and are used in error estimation for finite difference and finite element methods. Examples are the energy and heat norms.

These norms are usually expressed in an integral form.

$$ \lVert y \rVert_E \;=\; \sqrt{\int_{a}^{b}\big (\tfrac{1}{2}\,k\,y^2(t)+\tfrac{1}{2}\,m\,(y^\prime)^2(t)\big )\,dt} $$

$$ \lVert y \rVert_H \;=\; \sqrt{\int_{a}^{b}\big (\,y^2(t)+\,(y^\prime)^2(t)+\,(y^{\prime\prime})^2(t)\,+\ldots+\,(y^{(k)})^2(t)\big )\,dt} $$

When  $$y(a)\;=\;y(b)\;=\;0$$   the inequality below holds.

$$ \sqrt{\int_{a}^{b}\,(y^\prime)^2(t)\,dt}\;\ge\;\frac{\sqrt[3]{\,4\,}}{(b-a)}\, \sqrt{\int_{a}^{b}\,y^2(t)\,dt} $$

This follows from a completely elementary, but lengthy calculation, as shown in appendix a). When additional assumptions on $$y$$  are made this inequality can be improved somewhat.

$$ \sqrt{\int_{a}^{b}\,(y^\prime)^2(t)\,dt}\;\ge\;\frac{\pi}{(b-a)}\, \sqrt{\int_{a}^{b}\,y^2(t)\,dt} $$

See appendix b) for an explanation.
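As a numerical illustration (a Python sketch under the assumption $$[a,\,b]\;=\;[0,\,1]$$ and the test function $$y(t)\;=\;\sin(\pi\,t)$$, for which the inequality holds with equality), a midpoint rule reproduces both sides:

```python
import math

# Midpoint-rule quadrature on [a, b] = [0, 1] with y(t) = sin(pi*t),
# the extremal function for which the inequality is an equality.
a, b, N = 0.0, 1.0, 10000
h = (b - a) / N
mids = [a + (i + 0.5) * h for i in range(N)]
int_y2 = sum(math.sin(math.pi * t) ** 2 for t in mids) * h
int_dy2 = sum((math.pi * math.cos(math.pi * t)) ** 2 for t in mids) * h
lhs = math.sqrt(int_dy2)                       # sqrt of integral of (y')^2
rhs = (math.pi / (b - a)) * math.sqrt(int_y2)  # pi/(b-a) times sqrt of integral of y^2
print(abs(lhs - rhs) < 1e-6)  # True: equality for the sine mode
```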

In the analysis of finite difference methods for partial differential equations it is useful to have discrete analogs of norms like the energy and heat norms.

In an attempt not to have notation too cumbersome, some indices are suppressed when it is clear from the discussion what they refer to. When $$\vec{v}\;=\;<v_1 \;\, v_2 \; \cdots \; v_n>$$ is an $$n$$  dimensional vector of complex numbers, finite difference operators are defined as discrete approximations or analogs of derivatives.

The most appropriate definition of a discrete energy or heat norm may vary due to differences in the handling of initial or boundary conditions. For this reason the reader should make appropriate adjustments, when needed, to apply the same reasoning to another problem.

Before the discrete versions of energy or heat norms can be defined, finite difference operators need to be defined and explained. This was done in the section on finite difference operators.

The next discrete version of the preceding inequality has important applications to the estimation of the error when approximating a second derivative with a finite difference operator.

If $$v_0\,=\,v_{n+1}\,=\,0$$  then

$$\sqrt{\textstyle\sum_{\,i\,=\,1}^{\,n+1}(v_i-v_{i-1})^2}\ge\,\frac{\beta_n}{n+1}\,\sqrt{\textstyle\sum_{\,i\,=\,1}^{\,n}\,v_i^2} $$

with

$$ \beta_1\,=\,2\,\sqrt 2\,,\,\,\,\beta_2\,=\,3,\; $$

and, in general, the following lower bound holds.

$$ \beta_n\;\ge\;\sqrt 2\,\sqrt{\tfrac{n+1}{n+3}} $$

See appendix c) for a proof.

The inequality can be improved for general $$n$$ by using the inequality below.

$$2\,\left\vert f_0 \right\vert^2+\textstyle\sum_{\,i\,=\,1}^{\,n-1}\left\vert f_i-f_{i-1} \right\vert^2+2\,\left\vert f_{n-1} \right\vert^2\;\ge\;2\,(1-\cos(\pi\,\tfrac{1}{n})) \textstyle\sum_{\,i\,=\,0}^{\,n-1}\left\vert f_i \right\vert^2 $$

If $$v_0\,=\,v_{n+1}\,=\,0$$  then

$$\sqrt{\textstyle\sum_{\,i\,=\,1}^{\,n+1}(v_i-v_{i-1})^2}\ge\,\sqrt{2\,(1-\cos(\pi\,\tfrac{1}{n+2}))}\,\sqrt{\textstyle\sum_{\,i\,=\,1}^{\,n}\,v_i^2} $$

and

$$\sqrt{2\,(1-\cos(\pi\,\tfrac{1}{n+2}))}\;=\;2\,\sin{(\tfrac{\pi}{2\,(n+2)})}\;\approx\;\frac{\pi}{n+2} \quad \text{as} \quad n\;\to\;\infty$$.
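The discrete inequality above can be spot-checked numerically. The sketch below (illustrative Python; the random test vectors are made up) pads a vector with the zero boundary values $$v_0\;=\;v_{n+1}\;=\;0$$ and verifies the stated bound:

```python
import math, random

def discrete_wirtinger_holds(v):
    """For v = (v_1, ..., v_n) with v_0 = v_{n+1} = 0, test the stated bound."""
    n = len(v)
    w = [0.0] + list(v) + [0.0]           # supply the zero boundary values
    lhs = math.sqrt(sum((w[i] - w[i - 1]) ** 2 for i in range(1, n + 2)))
    c = math.sqrt(2.0 * (1.0 - math.cos(math.pi / (n + 2))))
    rhs = c * math.sqrt(sum(x * x for x in v))
    return lhs >= rhs - 1e-12

random.seed(2)
ok = all(discrete_wirtinger_holds([random.uniform(-1, 1) for _ in range(10)])
         for _ in range(1000))
print(ok)  # True
```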

appendix a)
When  $$y(a)\;=\;y(b)\;=\;0$$   the inequality below holds.

$$ \sqrt{\int_{a}^{b}\,(y^\prime)^2(t)\,dt}\;\ge\;\frac{\sqrt[3]{\,4\,}}{(b-a)}\, \sqrt{\int_{a}^{b}\,y^2(t)\,dt} $$

First apply the Cauchy Schwarz inequality.

$$\begin{align} \tfrac{1}{2}\,y^2(x)\;= & \;\int_{a}^{x}\,y(t)\,y^\prime (t)\,dt\;\le\; \sqrt{\int_{a}^{x}\,y^2(t)\,dt}\;\sqrt{\int_{a}^{x}\,(y^\prime)^2(t)\,dt} \\ \end{align}$$,

Next observe the integrals on the right are increasing with $$x$$.

$$\begin{align} \tfrac{1}{2}\,y^2(u)\; & \;\le\; \sqrt{\int_{a}^{x}\,y^2(t)\,dt}\;\sqrt{\int_{a}^{x}\,(y^\prime)^2(t)\,dt} \,, \quad \text{for} \;\; a \; \le \; u \; \le \; x \\ \end{align}$$

Integrate, make cancellations, and reapply the first inequality.

$$ \frac{1}{2}\,\int_{a}^{x}\,y^2(t)\,dt\;\le\;(x-a) \;\sqrt{\int_{a}^{x}\,y^2(t)\,dt}\;\sqrt{\int_{a}^{x}\,(y^\prime)^2(t)\,dt} $$,

$$ \sqrt{\int_{a}^{x}\,y^2(t)\,dt}\;\le\;2\,(x-a) \;\sqrt{\int_{a}^{x}\,(y^\prime)^2(t)\,dt} $$,

$$ y^2(x)\;\le\;4\,(x-a)\,\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt $$.

After integrating again the inequality is improved.

$$\begin{align} \int_{a}^{x}\,y^2(t)\,dt\;\le & \;4\,\int_{a}^{x}(t-a)\,dt \,\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt \\ = & \;2\,(x-a)^2\,\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt \\ \end{align}$$.

$$ \sqrt{\int_{a}^{x}\,y^2(t)\,dt}\; \le \;\sqrt{\,2}\,(x-a)\,\sqrt{\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt} $$

Now, assume the inequality immediately below holds for some  $$\alpha$$.

$$ \sqrt{\int_{a}^{x}\,y^2(t)\,dt}\; \le \;\alpha\,(x-a)\,\sqrt{\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt} $$

After substituting the inequality above into the next

$$ \frac{1}{2}\,\int_{a}^{x}\,y^2(t)\,dt\;\le\;(x-a) \;\sqrt{\int_{a}^{x}\,y^2(t)\,dt}\;\sqrt{\int_{a}^{x}\,(y^\prime)^2(t)\,dt} $$.

the following observations are made.

$$ \int_{a}^{x}\,y^2(t)\,dt\;\le\;2\,\alpha\,(x-a)^2 \;\int_{a}^{x}\,(y^\prime)^2(t)\,dt $$.

$$ \sqrt{\int_{a}^{x}\,y^2(t)\,dt}\;\le\;\sqrt{2\,\alpha}\,(x-a) \;\sqrt{\int_{a}^{x}\,(y^\prime)^2(t)\,dt} $$.

$$ y^2(x)\;\le\;2\,\sqrt{2\,\alpha}\,(x-a)\,\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt $$.

$$\begin{align} \int_{a}^{x}\,y^2(t)\,dt\;\le & \;2\,\sqrt{2\,\alpha}\,\int_{a}^{x}(t-a)\,dt\,\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt \\ = & \;\sqrt{2\,\alpha}\,(x-a)^2\,\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt \\ \end{align}$$

$$ \sqrt{\int_{a}^{x}\,y^2(t)\,dt}\;\le\;\sqrt[4]{2\,\alpha}\,(x-a) \;\sqrt{\int_{a}^{x}\,(y^\prime)^2(t)\,dt} $$.

Repeating this iteration leads to a sequence $$\alpha_{n+1}\;=\;\sqrt[4]{2\,\alpha_n}$$ which converges to $$\alpha\;=\;\sqrt[3]{\,2}$$,  the solution of  $$\alpha\;=\;\sqrt[4]{2\,\alpha}$$. So:

$$ \sqrt{\int_{a}^{x}\,y^2(t)\,dt}\; \le \;\sqrt[3]{\,2}\,(x-a)\,\sqrt{\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt} $$ and $$ y^2(x)\;\le\;2\,\sqrt{2\,\sqrt[3]{\,2}}\,(x-a)\,\int_{a}^{x}\,(y^{\,\prime})^2(t)\,dt $$.

For $$c = (a+b)\,/\,2$$ ,

$$\begin{align} \int_{a}^{c}\,y^2(t)\,dt\;\le & \;2\,\sqrt{2\,\sqrt[3]{\,2}}\,\int_{a}^{c}(t-a)\,dt \,\int_{a}^{c}\,(y^{\,\prime})^2(t)\,dt \\ = & \;\sqrt{2\,\sqrt[3]{\,2}}\,(c-a)^2\,\int_{a}^{c}\,(y^{\,\prime})^2(t)\,dt \\ = & \;\frac{\sqrt{2\,\sqrt[3]{\,2}}}{4}\,(b-a)^2\,\int_{a}^{c}\,(y^{\,\prime})^2(t)\,dt \\ \end{align}$$.

$$ \int_{a}^{c}\,y^2(t)\,dt\;\le\;\frac{\sqrt[3]{\,4}}{4}\, (b-a)^2\,\int_{a}^{c}\,(y^{\,\prime})^2(t)\,dt $$.

To handle the part $$\int_{c}^{b}\,y^2(t)\,dt$$

$$\begin{align} \tfrac{1}{2}\,y^2(x)\;= & \;\int_{x}^{b}\,-y(t)\,y^\prime (t)\,dt\;\le\; \sqrt{\int_{x}^{b}\,y^2(t)\,dt}\;\sqrt{\int_{x}^{b}\,(y^\prime)^2(t)\,dt} \\ \end{align}$$,

$$\begin{align} \tfrac{1}{2}\,y^2(u)\; & \;\le\; \sqrt{\int_{x}^{b}\,y^2(t)\,dt}\;\sqrt{\int_{x}^{b}\,(y^\prime)^2(t)\,dt} \,, \quad \text{for} \;\; x \; \le \; u \; \le \; b \\ \end{align}$$

$$ \frac{1}{2}\,\int_{x}^{b}\,y^2(t)\,dt\;\le\;(b-x) \;\sqrt{\int_{x}^{b}\,y^2(t)\,dt}\;\sqrt{\int_{x}^{b}\,(y^\prime)^2(t)\,dt} $$,

$$ \sqrt{\int_{x}^{b}\,y^2(t)\,dt}\;\le\;2\,(b-x) \;\sqrt{\int_{x}^{b}\,(y^\prime)^2(t)\,dt} $$,

$$ y^2(x)\;\le\;4\,(b-x)\,\int_{x}^{b}\,(y^{\,\prime})^2(t)\,dt $$.

Reasoning as before, this inequality can be strengthened to

$$ y^2(x)\;\le\;2\,\sqrt{2\,\sqrt[3]{\,2}}\,(b-x)\,\int_{x}^{b}\,(y^{\,\prime})^2(t)\,dt $$.

$$\begin{align} \int_{c}^{b}\,y^2(t)\,dt\;\le & \;2\,\sqrt{2\,\sqrt[3]{\,2}}\,\int_{c}^{b}(b-t)\,dt \,\int_{c}^{b}\,(y^{\,\prime})^2(t)\,dt \\ = & \;\sqrt{2\,\sqrt[3]{\,2}}\,(b-c)^2\,\int_{c}^{b}\,(y^{\,\prime})^2(t)\,dt \\ = & \;\frac{\sqrt{2\,\sqrt[3]{\,2}}}{4}\,(b-a)^2\,\int_{c}^{b}\,(y^{\,\prime})^2(t)\,dt \\ \end{align}$$

$$ \int_{c}^{b}\,y^2(t)\,dt\;\le\;\frac{\sqrt[3]{\,4}}{4}\, (b-a)^2\,\int_{c}^{b}\,(y^{\,\prime})^2(t)\,dt $$.

Adding the results of the two main calculations

$$\begin{align} & \int_{a}^{b}\,y^2(t)\,dt\;= \;\int_{a}^{c}\,y^2(t)\,dt\;+\; \int_{c}^{b}\,y^2(t)\,dt \\ \le & \;\frac{\sqrt[3]{\,4}}{4}\,(b-a)^2\,\int_{a}^{c}\,(y^{\,\prime})^2(t)\,dt + \;\frac{\sqrt[3]{\,4}}{4}\,(b-a)^2\,\int_{c}^{b}\,(y^{\,\prime})^2(t)\,dt \\ = & \;\frac{\sqrt[3]{\,4}}{4}\,(b-a)^2\,\int_{a}^{b}\,(y^{\,\prime})^2(t)\,dt \\ \end{align}$$.

appendix b)
When  $$y(a)\;=\;y(b)\;=\;0$$   the inequality below holds.

$$ \sqrt{\int_{a}^{b}\,(y^\prime)^2(t)\,dt}\;\ge\;\frac{\pi}{(b-a)}\, \sqrt{\int_{a}^{b}\,y^2(t)\,dt} $$

Assume that $$y(x)$$ can be approximated by a trigonometric polynomial

$$\begin{align} s(x)\;= & \;a_1\,\sin{(\frac{\pi}{(b-a)}\,x)}+a_2\,\sin{(\frac{\pi}{(b-a)}\,2\,x)}+ a_3\,\sin{(\frac{\pi}{(b-a)}\,3\,x)} \\ & +\;\ldots\;+a_n\,\sin{(\frac{\pi}{(b-a)}\,n\,x)} \\ \end{align}$$.

such that $$y^\prime (x)$$  is also approximated by the polynomial's derivative

$$\begin{align} & s^\prime (x)\;= \;a_1\,\frac{\pi}{(b-a)}\,\cos{(\frac{\pi}{(b-a)}\,x)}+2\,a_2\,\frac{\pi}{(b-a)}\,\cos{(\frac{\pi}{(b-a)}\,2\,x)} \\ & +3\,a_3\,\frac{\pi}{(b-a)}\,\cos{(\frac{\pi}{(b-a)}\,3\,x)}+\;\ldots\;+n\,a_n\,\frac{\pi}{(b-a)}\,\cos{(\frac{\pi}{(b-a)}\,n\,x)} \\ \end{align}$$

That is to say, given $$\epsilon\;>\;0$$,  there exists a trigonometric sine polynomial $$s(x)$$,  such that

$$ \int_{a}^{b} {\left\vert y(x)-s(x) \right\vert}^2\,dx\;<\;\epsilon $$ and $$ \int_{a}^{b} {\left\vert y^\prime (x)-s^\prime (x) \right\vert}^2\,dx\;<\;\epsilon $$.

Now,

$$ \frac{2}{b-a}\,\int_{a}^{b}{\left\vert s(x) \right\vert}^2\,dx\;=\;{\left\vert a_1 \right\vert}^2 +{\left\vert a_2 \right\vert}^2+{\left\vert a_3 \right\vert}^2+\;\ldots\; +{\left\vert a_n \right\vert}^2 $$   and $$\begin{align} \frac{2}{b-a}\,\int_{a}^{b}{\left\vert s^\prime (x) \right\vert}^2\,dx\; & =\;{\frac{\pi^2}{(b-a)^2}}\,{\left\vert a_1 \right\vert}^2 +2^2\,{\frac{\pi^2}{(b-a)^2}}\,{\left\vert a_2 \right\vert}^2+3^2\,{\frac{\pi^2}{(b-a)^2}}\,{\left\vert a_3 \right\vert}^2 \\ & +\;\ldots\;+n^2\,{\frac{\pi^2}{(b-a)^2}}\,{\left\vert a_n \right\vert}^2 \\ \end{align}$$.

So it is easy to see that

$$ \int_{a}^{b}{\left\vert s^\prime (x) \right\vert}^2\,dx\;\ge\; {\frac{\pi^2}{(b-a)^2}}\,\int_{a}^{b}{\left\vert s(x) \right\vert}^2\,dx $$.

In this case the inequality is sharp, since it holds with equality for $$y(x)\;=\;\sin{(\frac{\pi}{(b-a)}\,x)}$$.

Since

$$ \sqrt{\int_{a}^{b}{\left\vert s(x) \right\vert}^2\,dx} - \sqrt \epsilon \;<\; \sqrt{\int_{a}^{b}{\left\vert y(x) \right\vert}^2\,dx} \;<\; \sqrt{\int_{a}^{b}{\left\vert s(x) \right\vert}^2\,dx} + \sqrt \epsilon $$

and

$$ \sqrt{\int_{a}^{b}{\left\vert s^\prime (x) \right\vert}^2\,dx} - \sqrt \epsilon \;<\; \sqrt{\int_{a}^{b}{\left\vert y^\prime (x) \right\vert}^2\,dx} \;<\; \sqrt{\int_{a}^{b}{\left\vert s^\prime (x) \right\vert}^2\,dx} \; + \sqrt \epsilon $$,

the inequality holds for $$y(x)$$  and  $$y^\prime (x)$$.

appendix c)
If $$v_0\,=\,v_{n+1}\,=\,0$$  then

$$\sqrt{\textstyle\sum_{\,i\,=\,1}^{\,n+1}(v_i-v_{i-1})^2}\ge\,\frac{\beta_n}{n+1}\,\sqrt{\textstyle\sum_{\,i\,=\,1}^{\,n}\,v_i^2} $$

with

$$ \beta_1\,=\,2\,\sqrt 2\,,\,\,\,\beta_2\,=\,3,\; $$

and, in general, the following lower bound holds.

$$ \beta_n\;\ge\;\sqrt 2\,\sqrt{\tfrac{n+1}{n+3}} $$

The cases $$n\;=\;1$$ and $$n\;=\;2$$ are simple.

$$(v_1-v_0)^2+(v_2-v_1)^2\;=\;2\,v_1^2$$

and

$$\begin{align} (v_1-v_0)^2+(v_2-v_1)^2+(v_3-v_2)^2\;= & \;v_1^2+(v_2-v_1)^2+v_2^2 \\ \;\ge & \;v_1^2+v_2^2 \\ \end{align}$$

with equality, when $$v_1\;=\;v_2$$.

To prove the general under-estimate do as follows.

Use $$v_0\;=\;0$$  and apply the Cauchy-Schwarz inequality together with the inequality $$(v_i+v_{i-1})^2\;\le\;2\,(v_i^2+v_{i-1}^2)$$.

$$v_k^2\,=\,\textstyle\sum_{i\,=\,1}^k(v_i^2-v_{i-1}^2) \,=\,\textstyle\sum_{i\,=\,1}^k(v_i+v_{i-1})(v_i-v_{i-1}) $$

$$\le\,\sqrt{\textstyle\sum_{i\,=\,1}^k(v_i+v_{i-1})^2} \,\sqrt{\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2} $$

So for $$i\;\le\;k$$,

$$v_i^2\le\,2\,\sqrt{\textstyle\sum_{j\,=\,1}^{\,i}\,v_j^2\;} \,\sqrt{\textstyle\sum_{j\,=\,1}^{\,i}(v_j-v_{j-1})^2} $$

and

$$\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,v_i^2}\le\,2\,k \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}(v_i-v_{i-1})^2} $$.

After inserting the inequality above into the earlier bound for $$v_k^2$$,

$$v_k^2\;\le\,4\,k\,\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2 $$.

So for $$i\;\le\;k$$,

$$v_i^2\;\le\,4\,i\,\textstyle\sum_{j\,=\,1}^i(v_j-v_{j-1})^2 $$

$$\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\;\le\,4\,(\textstyle\sum_{i\,=\,1}^{k}\,i)\,\textstyle\sum_{j\,=\,1}^k(v_j-v_{j-1})^2 $$

and after using the formula $$\textstyle\sum_{i\,=\,1}^{k}\,i\;=\;\tfrac{1}{2}\,k\,(k+1)$$,

$$\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\;\le\,2\,\,k(k+1)\,\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2 $$.
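The left-end bound just obtained can be sanity-checked numerically (a Python sketch with hypothetical random test vectors; $$v_0\;=\;0$$ is supplied by padding):

```python
import random

def left_bound_holds(v):
    """With v_0 = 0, check sum_{i<=k} v_i^2 <= 2k(k+1) sum_{i<=k} (v_i - v_{i-1})^2."""
    w = [0.0] + list(v)          # supply v_0 = 0
    k = len(v)
    lhs = sum(x * x for x in v)
    rhs = 2.0 * k * (k + 1) * sum((w[i] - w[i - 1]) ** 2 for i in range(1, k + 1))
    return lhs <= rhs + 1e-12

random.seed(3)
ok = all(left_bound_holds([random.uniform(-1, 1) for _ in range(7)])
         for _ in range(1000))
print(ok)  # True
```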

Using $$v_{n+1}\;=\;0$$  the right end of the sum is estimated using nearly the same procedure.

$$v_k^2\,=\,\textstyle\sum_{i\,=\,k}^n(v_i^2-v_{i+1}^2) \,=\,\textstyle\sum_{i\,=\,k}^n(v_i+v_{i+1})(v_i-v_{i+1}) $$

$$\le\,\sqrt{\textstyle\sum_{\,i\,=\,k+1}^{n+1}(v_i+v_{i-1})^2} \,\sqrt{\textstyle\sum_{\,i\,=\,k+1}^{n+1}(v_i-v_{i-1})^2} $$

$$\le\,2\,\sqrt{\textstyle\sum_{\,i\,=\,k}^n\,v_i^2\;} \,\sqrt{\textstyle\sum_{\,i\,=\,k+1}^{n+1}(v_i-v_{i-1})^2} $$

So for $$i\;\ge\;k$$,

$$v_i^2\le\,2\,\sqrt{\textstyle\sum_{j\,=\,i}^{\,n}\,v_j^2\;} \,\sqrt{\textstyle\sum_{j\,=\,i+1}^{\,n+1}(v_j-v_{j-1})^2} $$,

$$\sqrt{\textstyle\sum_{i\,=\,k}^{n}\,v_i^2}\le\,2\,(n-k+1) \,\sqrt{\textstyle\sum_{i\,=\,k+1}^{n+1}(v_i-v_{i-1})^2} $$,

and

$$v_k^2\;\le\,4\,(n-k+1)\,\textstyle\sum_{i\,=\,k+1}^{n+1}(v_i-v_{i-1})^2 $$.

So for $$i\;\ge\;k$$,

$$v_i^2\;\le\,4\,(n-i+1)\,\textstyle\sum_{j\,=\,i+1}^{n+1}(v_j-v_{j-1})^2 $$,

$$\textstyle\sum_{i\,=\,k}^{n}\,v_i^2\;\le\,4\,((n+1)(n-k+1)-(\textstyle\sum_{i\,=\,k}^{n}\,i))\,\textstyle\sum_{i\,=\,k+1}^{n+1}(v_i-v_{i-1})^2 $$

$$=\,2\,\,(2\,(n+1)(n-k+1)-(n(n+1)-(k-1)k))\,\textstyle\sum_{i\,=\,k+1}^{n+1}(v_i-v_{i-1})^2 $$

$$=\,2\,\,((n-2\,k+3)n+(k-1)(k-2))\,\textstyle\sum_{i\,=\,k+1}^{n+1}(v_i-v_{i-1})^2 $$.

Combining the two results:

$$\textstyle\sum_{i\,=\,1}^{n}\,v_i^2\;=\; \textstyle\sum_{i\,=\,1}^{k}\,v_i^2\;+\;\textstyle\sum_{i\,=\,k+1}^{n}\,v_i^2 $$

$$\le\,2\,\,k(k+1)\,\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2 $$

$$+\,2\,\,((n-2\,k+1)n+k(k-1))\,\textstyle\sum_{i\,=\,k+2}^{n+1}(v_i-v_{i-1})^2 $$.

When $$n$$  is odd, use $$k\;=\;(n+1)\,/\,2$$ so that the inequality becomes

$$\le\,2\,\,\frac{n+1}{2}\frac{n+3}{2}\,\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2 $$

$$+\,2\,\,(\frac{n+1}{2}\frac{n-1}{2})\,\textstyle\sum_{i\,=\,k+2}^{n+1}(v_i-v_{i-1})^2 $$

$$\le\,\frac{(n+1)(n+3)}{2}\,\textstyle\sum_{i\,=\,1}^{n+1}(v_i-v_{i-1})^2 $$.

When $$n$$  is even, use $$k\;=\;n\,/\,2$$ so that the inequality becomes

$$\le\,2\,\,\frac{n}{2}\frac{n+2}{2}\,\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2 $$

$$+\,2\,\,(\frac{n}{2}\frac{n+2}{2})\,\textstyle\sum_{i\,=\,k+2}^{n+1}(v_i-v_{i-1})^2 $$

$$\le\,\frac{(n)(n+2)}{2}\,\textstyle\sum_{i\,=\,1}^{n+1}(v_i-v_{i-1})^2 $$.

Definition of a Matrix Norm
The most ordinary kind of matrices are those consisting of rectangular arrays of real or complex numbers. They may be written in element form $$A\;=\;(a_{i\,j}),\;1\;\le\;i\;\le\;m \,,\;\;1\;\le\;j\;\le\;n$$ and be considered as collections of column or  row  vectors.

Matrices are usually added element-wise: for $$A\;=\;(a_{i\,j})$$ and $$B\;=\;(b_{i\,j})$$, $$A+B\;=\;(a_{i\,j}+b_{i\,j})$$. Scalar multiplication is defined by $$\alpha\,A\;=\;(\alpha\,a_{i\,j})$$.

The notation $$A\;=\;0$$  means that all elements of $$A$$  are identically zero.

A matrix norm is a generalization of ordinary absolute value $$\left\vert x \right\vert$$  of a real or complex number, and can be considered a type of a vector norm.

For $$A$$  and  $$B$$, matrices, and $$\alpha$$,  a scalar, a matrix norm is a real value $$\lVert \cdot \rVert$$  associated with a matrix for which the following properties hold.

$$\begin{align} (i) \;\;\; \quad & \lVert \, A \, \rVert \; \ge \; 0 \\ (ii) \;\; \quad & \lVert \, A \, \rVert \; = \; 0 \iff \; A \; = \; 0 \\ (iii) \; \quad & \lVert \, \alpha \, A \, \rVert \; = \; \left\vert \alpha \right\vert \, \lVert \, A \, \rVert \\ (iv) \;\; \quad & \lVert \, A+B \, \rVert \;\; \le \;\; \lVert \, A \,\rVert \; + \; \lVert \, B \, \rVert \; \end{align}$$.

Common Matrix Norms
The most commonly used norms are:

$$\begin{align} \quad & \lVert \, A \, \rVert_F \; = \; \sqrt{\textstyle\sum_{i\,=\,1}^m\textstyle\sum_{j\,=\,1}^n{\left\vert a_{i\,j} \right\vert}^2} \\ \quad & \lVert \, A \, \rVert_{\infty} \; = \; \underset{1\;\le\;i\;\le\;m}{\max \;}(\left\vert a_{i\,1} \right\vert + \left\vert a_{i\,2} \right\vert + \ldots + \left\vert a_{i\,n} \right\vert) \\ \quad & \lVert \, A \, \rVert_1 \; = \; \underset{1\;\le\;j\;\le\;n}{\max \;}(\left\vert a_{1\,j} \right\vert + \left\vert a_{2\,j} \right\vert + \ldots + \left\vert a_{m\,j} \right\vert) \\ \quad & \lVert \, A \, \rVert_{\max} \; = \; \underset{1\,\le\,i\,\le\,m,\;1\,\le\,j\,\le\,n}{\max {\{\left\vert a_{i\,j} \right\vert\}}} \\ \quad & \lVert \, A \, \rVert_p \; = \; (\textstyle\sum_{i\,=\,1}^m\textstyle\sum_{j\,=\,1}^n\left\vert a_{i\,j} \right\vert^p)^{\frac{1}{p}} \\ \quad & \lVert \, A \, \rVert_2 \; = \; \underset{\lVert \, x \, \rVert_2\;=\;1}{\max} \;\lVert \, A\,x \, \rVert_2 \\ \end{align}$$.
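As with vector norms, most of these formulas can be computed directly. The following Python sketch (the matrix `A` is a made-up example) implements the Frobenius, maximum-row-sum, maximum-column-sum, and max norms; the induced 2-norm is omitted since it requires an eigenvalue computation:

```python
import math

# A hypothetical 2x3 example matrix.
A = [[1.0, -2.0, 3.0],
     [-4.0, 5.0, -6.0]]

def frobenius(A):
    # Square root of the sum of squared moduli of all entries.
    return math.sqrt(sum(abs(x) ** 2 for row in A for x in row))

def norm_inf(A):
    # Maximum absolute row sum.
    return max(sum(abs(x) for x in row) for row in A)

def norm_1(A):
    # Maximum absolute column sum.
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norm_max(A):
    # Largest absolute entry.
    return max(abs(x) for row in A for x in row)

print(frobenius(A))  # sqrt(91)
print(norm_inf(A))   # 15.0
print(norm_1(A))     # 9.0
print(norm_max(A))   # 6.0
```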

Like vector norms any two matrix norms on $$m\times n$$ matrices of complex numbers are topologically equivalent in the sense that, if $$\lVert \cdot \rVert_a$$ and  $$\lVert \cdot \rVert_b$$  are two different norms, then there exist positive constants  $$c_1$$  and  $$c_2$$  such that  $$c_1\,\lVert \cdot \rVert_a \; \le \; \lVert \cdot \rVert_b \; \le \; c_2\,\lVert \cdot \rVert_a$$.

The norm $$\lVert \, A \, \rVert_2$$  on matrices  $$A$$  is an example of an induced norm. An induced norm is for a vector norm $$\lVert \, x \, \rVert_b$$ defined by

$$\lVert \, A \, \rVert_b \; = \; \underset{\lVert \, x \, \rVert_b\;=\;1}{\max} \;\lVert \, A\,x \, \rVert_b$$.

As a result, the same subscript notation can sometimes denote two different norms.

Positive definite matrices
An $$n\times n$$  matrix  $$A$$  is said to be  positive definite if for any vector  $$x$$, $$x^TA\,x\;\ge\;\alpha\,x^Tx$$ for some positive constant $$\alpha$$, not depending on $$x$$.

The property of being positive definite ensures the numerical stability of a variety of common numerical techniques used to solve the equation $$A\,x\;=\;y$$.

Taking $$x\;=\;A^{-1}y$$  gives

$$(A^{-1}y)^TA\,(A^{-1}y)\;\ge\;\alpha\,(A^{-1}y)^TA^{-1}y$$.

so that

$$(A^{-1}y)^Ty\;\ge\;\alpha\,\lVert A^{-1}y \rVert_2^2$$ and, since $$(A^{-1}y)^Ty\;\le\;\lVert A^{-1}y \rVert_2\,\lVert y \rVert_2$$ by the Cauchy-Schwarz inequality, $$\alpha\,\lVert A^{-1}y \rVert_2^2\;\le\;\lVert A^{-1}y \rVert_2\,\lVert y \rVert_2$$.

This gives

$$\alpha\,\lVert A^{-1}y \rVert_2\;\le\;\lVert y \rVert_2$$

and

$$\lVert A^{-1} \rVert_2\;\le\;\alpha^{-1}$$.
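A small numerical illustration of this bound, under the assumed example $$A\;=\;\begin{bmatrix}2 & 1 \\ 1 & 2 \\ \end{bmatrix}$$ (a hypothetical matrix whose smallest eigenvalue is $$\alpha\;=\;1$$, so $$x^TA\,x\;\ge\;x^Tx$$):

```python
import math, random

# Inverse of the assumed example A = [[2, 1], [1, 2]]; its eigenvalues are 1 and 3,
# so alpha = 1 and the bound predicts ||A^{-1}||_2 <= 1.
A_inv = [[2.0 / 3.0, -1.0 / 3.0],
         [-1.0 / 3.0, 2.0 / 3.0]]
alpha = 1.0

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def nrm(x):
    return math.sqrt(sum(t * t for t in x))

# Sample the ratio ||A^{-1} y|| / ||y|| over random right-hand sides.
random.seed(4)
ratios = []
for _ in range(1000):
    y = [random.gauss(0, 1), random.gauss(0, 1)]
    ratios.append(nrm(matvec(A_inv, y)) / nrm(y))
bound_holds = max(ratios) <= 1.0 / alpha + 1e-12
print(bound_holds)  # True
```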

Consistency of Norms
A matrix norm $$\lVert \,\cdot\, \rVert_M$$  and a vector norm  $$\lVert \,\cdot\, \rVert_V$$  are said to be consistent when

$$\lVert \,A\,x\, \rVert_V\;\le\;\lVert \,A\, \rVert_M\,\lVert \,x\, \rVert_V$$.

When $$\lVert \,\cdot\, \rVert_M$$  is the matrix norm induced by the vector norm  $$\lVert \,\cdot\, \rVert_V$$  then the two norms will be consistent.

When the two norms are not consistent, there is still a positive constant $$\alpha$$  such that

$$\lVert \,A\,x\, \rVert_V\;\le\;\alpha\,\lVert \,A\, \rVert_M\,\lVert \,x\, \rVert_V$$.

Statement of the Problem
Let D be the rectangle

$$D = \{(x, y): 0 < x < a,\ 0 < y < b\}$$

and let C be the boundary of D.

The operator

$$\Delta u(x, y) = {\partial^2\over\partial x^2}u(x, y) + {\partial^2\over\partial y^2}u(x, y)$$

is the usual Laplacian. The problem of determining a function u(x, y) such that

$$ \begin{cases} \begin{align} -\Delta u(x, y) & = f(x, y),\ \text{for}\ (x, y) \in D \\ \\ u(x, y) & = g(x, y),\ \text{for}\ (x, y) \in C, \\ \end{align} \end{cases} \quad _{(1.0)}$$

is called a Poisson problem.

discrete approximation
To approximate u(x, y) numerically, use the grid

$$(x_i, y_j),\ 0 \le i \le m+1,\ 0 \le j \le n+1$$, with $$x_i = i\,h,\ y_j = j\,k$$ and $$h = a\,/\,(m+1),\ k = b\,/\,(n+1)$$.

The second partial derivative $${\partial^2\over\partial x^2}\ u(x, y)$$ can be approximated on the grid by difference quotients $$({\partial_h^2\over\partial_h x^2})\ u(x_i, y_j)$$. These difference quotients are given by

$$({\partial_h^2\over\partial_h x^2})\ u(x_1, y_j)\ = {-\frac{1}{12}\,u(x_1+3\,h, y_j)+\frac{1}{3}\,u(x_1+2\,h, y_j) +\frac{1}{2}\,u(x_1+h, y_j)-\frac{5}{3}\,u(x_1, y_j) +\frac{11}{12}\,u(x_1-h, y_j)\over\ h^2}$$

$$({\partial_h^2\over\partial_h x^2})\ u(x_i, y_j)\ = {-\frac{1}{12}\,u(x_i+2\,h, y_j)+\frac{4}{3}\,u(x_i+h, y_j) -\frac{5}{2}\,u(x_i, y_j)+\frac{4}{3}\,u(x_i-h, y_j) -\frac{1}{12}\,u(x_i-2\,h, y_j)\over\ h^2}$$

$$\text{for}\ i\,=\,2,\ 3,\ \ldots\, m-1 \quad \text{and}$$

$$({\partial_h^2\over\partial_h x^2})\ u(x_m, y_j)\ = {\frac{11}{12}\,u(x_m+h, y_j)-\frac{5}{3}\,u(x_m, y_j) +\frac{1}{2}\,u(x_m-h, y_j)+\frac{1}{3}\,u(x_m-2\,h, y_j) -\frac{1}{12}\,u(x_m-3\,h, y_j)\over\ h^2}$$
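The quotients above can be verified numerically on a smooth test function. The Python sketch below (using $$u(x)\;=\;\sin{x}$$ as an assumed test case) applies the interior and left-boundary quotients and checks the error against the exact second derivative $$-\sin{x}$$:

```python
import math

def d2x_interior(u, x, h):
    """Interior five-point quotient for the second derivative."""
    return (-u(x + 2 * h) / 12 + 4 * u(x + h) / 3 - 5 * u(x) / 2
            + 4 * u(x - h) / 3 - u(x - 2 * h) / 12) / h ** 2

def d2x_left(u, x1, h):
    """One-sided quotient used at the first interior point x_1."""
    return (-u(x1 + 3 * h) / 12 + u(x1 + 2 * h) / 3 + u(x1 + h) / 2
            - 5 * u(x1) / 3 + 11 * u(x1 - h) / 12) / h ** 2

u = math.sin            # exact second derivative is -sin
x, h = 1.0, 0.01
err_interior = abs(d2x_interior(u, x, h) + math.sin(x))
err_left = abs(d2x_left(u, x, h) + math.sin(x))
print(err_interior < 1e-8)   # fourth-order accurate here
print(err_left < 1e-5)       # third-order accurate
```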

The second partial derivative $${\partial^2\over\partial y^2}\ u(x, y)$$ can be approximated on the grid by difference quotients $$({\partial_k^2\over\partial_k y^2})\ u(x_i, y_j)$$. These difference quotients are given by

$$({\partial_k^2\over\partial_k y^2})\ u(x_i, y_1)\ = {-\frac{1}{12}\,u(x_i, y_1+3\,k)+\frac{1}{3}\,u(x_i, y_1+2\,k) +\frac{1}{2}\,u(x_i, y_1+k)-\frac{5}{3}\,u(x_i, y_1) +\frac{11}{12}\,u(x_i, y_1-k)\over\ k^2}$$

$$({\partial_k^2\over\partial_k y^2})\ u(x_i, y_j)\ = {-\frac{1}{12}\,u(x_i, y_j+2\,k)+\frac{4}{3}\,u(x_i, y_j+k) -\frac{5}{2}\,u(x_i, y_j)+\frac{4}{3}\,u(x_i, y_j-k) -\frac{1}{12}\,u(x_i, y_j-2\,k)\over\ k^2}$$

$$\text{for}\ j\,=\,2,\ 3,\ \ldots\, n-1 \quad \text{and}$$

$$({\partial_k^2\over\partial_k y^2})\ u(x_i, y_n)\ = {\frac{11}{12}\,u(x_i, y_n+k)-\frac{5}{3}\,u(x_i, y_n) +\frac{1}{2}\,u(x_i, y_n-k)+\frac{1}{3}\,u(x_i, y_n-2\,k) -\frac{1}{12}\,u(x_i, y_n-3\,k)\over\ k^2}$$

truncation errors
The difference quotients $$({\partial_h^2\over\partial_h x^2})\ u(x_i, y_j)$$ are at least third order accurate, with truncation errors:

$$\begin{align} \tau_x(x_i, y_j) & = ({\partial_h^2\over\partial_h x^2})\ u(x_i, y_j) - {\partial^2\over\partial x^2}\ u(x_i, y_j) \\ & = {h^3\over\ 120}\, M{_x^{(5)}}(x_i, y_j) \end{align}$$

with

$$M{_x^{(5)}}(x_1, y_j) = \frac{67}{6}\,{\partial^5\over\partial x^5} \,u(x_1+\phi{_1^{(1,\,j)}}\,h,\ y_j)-\frac{254}{12}\,{\partial^5\over\partial x^5} \,u(x_1+\phi{_2^{(1,\,j)}}\,h,\ y_j)$$ ,

for some   $$0 < \phi{_1^{(1,\,j)}} < 2\,, \quad -1 < \phi{_2^{(1,\,j)}} < 3$$ ,

$$M{_x^{(5)}}(x_i, y_j) = 4\,{\partial^5\over\partial x^5} \,u(x_i+\phi{_1^{(i,\,j)}}\,h,\ y_j)-4\,{\partial^5\over\partial x^5} \,u(x_i+\phi{_2^{(i,\,j)}}\,h,\ y_j)$$ ,

for some   $$-2 < \phi{_1^{(i,\,j)}} < 1\,, \quad -1 < \phi{_2^{(i,\,j)}} < 2\,,$$ $$\text{for}\ \ i\ =\ 2,\ 3,\ \ldots\,,\ m-1.$$

When $${\partial^6\over\partial x^6}\,u(x,\,y)$$   is continuous, these estimates also hold:

$$\tau_x(x_i,\,y_j)\;=\;{h^4\over\ 720}\,M{_x^{(6)}}(x_i,\,y_j),$$

with

$$ M{_x^{(6)}}(x_i,\,y_j)\;=\;\frac{8}{3}{\partial^6\over\partial x^6} u(x_i+\phi{_1^{(i,\,j)}}\,h,\;y_j) - \frac{32}{3}{\partial^6\over\partial x^6} u(x_i+\phi{_2^{(i,\,j)}}\,h,\;y_j) $$

for some    $$-1\;<\;\phi{_1^{(i,\,j)}}\;<\;1\,,\quad -2\;<\;\phi{_2^{(i,\,j)}}\;<\;2,$$ $$\text{for}\ \ i\ =\ 2,\ 3,\ \ldots\,,\ m-1.$$ The case for $$i\;=\;m$$  is

$$M{_x^{(5)}}(x_m, y_j) = \frac{254}{12}\,{\partial^5\over\partial x^5} \,u(x_m+\phi{_1^{(m,\,j)}}\,h,\ y_j)-\frac{67}{6}\,{\partial^5\over\partial x^5} \,u(x_m+\phi{_2^{(m,\,j)}}\,h,\ y_j)$$ ,

for some   $$-3 < \phi{_1^{(m,\,j)}} < 1\,, \quad -2 < \phi{_2^{(m,\,j)}} < 0$$.

The difference quotients $$({\partial_k^2\over\partial_k y^2})\ u(x_i, y_j)$$ are at least third order accurate, with truncation errors:

$$\begin{align} \tau_y(x_i, y_j) & = ({\partial_k^2\over\partial_k y^2})\ u(x_i, y_j) - {\partial^2\over\partial y^2}\ u(x_i, y_j) \\ & = {k^3\over\ 120}\, M{_y^{(5)}}(x_i, y_j) \end{align}$$

with

$$M{_y^{(5)}}(x_i, y_1) = \frac{67}{6}\,{\partial^5\over\partial y^5} \,u(x_i,\ y_1+\psi{_1^{(i,\,1)}}\,k)-\frac{254}{12}\,{\partial^5\over\partial y^5}\,u(x_i,\ y_1+\psi{_2^{(i,\,1)}}\,k)$$ ,

for some   $$0 < \psi{_1^{(i,\,1)}} < 2\,, \quad -1 < \psi{_2^{(i,\,1)}} < 3$$ ,

$$M{_y^{(5)}}(x_i, y_j) = 4\,{\partial^5\over\partial y^5} \,u(x_i,\ y_j+\psi{_1^{(i,\,j)}}\,k)-4\,{\partial^5\over\partial y^5} \,u(x_i,\ y_j+\psi{_2^{(i,\,j)}}\,k)$$ ,

for some   $$-2 < \psi{_1^{(i,\,j)}} < 1\,, \quad -1 < \psi{_2^{(i,\,j)}} < 2\,,$$ $$\text{for}\ \ j\ =\ 2,\ 3,\ \ldots\,,\ n-1.$$

When $${\partial^6\over\partial y^6}\,u(x,\,y)$$   is continuous, these estimates also hold:

$$\tau_y(x_i,\,y_j)\;=\;{k^4\over\ 720}\,M{_y^{(6)}}(x_i,\,y_j),$$

with

$$ M{_y^{(6)}}(x_i,\,y_j)\;=\;\frac{8}{3}{\partial^6\over\partial y^6} u(x_i,\;y_j+\psi{_1^{(i,\,j)}}\,k) - \frac{32}{3}{\partial^6\over\partial y^6} u(x_i,\;y_j+\psi{_2^{(i,\,j)}}\,k) $$

for some    $$-1\;<\;\psi{_1^{(i,\,j)}}\;<\;1\,,\quad -2\;<\;\psi{_2^{(i,\,j)}}\;<\;2,$$ $$\text{for}\ \ j\ =\ 2,\ 3,\ \ldots\,,\ n-1.$$ The case for $$j\;=\;n$$  is

$$M{_y^{(5)}}(x_i, y_n) = \frac{254}{12}\,{\partial^5\over\partial y^5} \,u(x_i,\ y_n+\psi{_1^{(i,\,n)}}\,k)-\frac{67}{6}\,{\partial^5\over\partial y^5}\,u(x_i,\ y_n+\psi{_2^{(i,\,n)}}\,k)$$ ,

for some   $$-3 < \psi{_1^{(i,\,n)}} < 1\,, \quad -2 < \psi{_2^{(i,\,n)}} < 0$$.

The Laplacian $$\Delta u(x, y)$$  then can be approximated on the interior of the grid by

$$\Delta_{h,\,k} u(x_i, y_j) = {\partial_h^2\over\partial_h x^2}u(x_i, y_j) + {\partial_k^2\over\partial_k y^2}u(x_i, y_j)$$

The truncation error $$\tau (x_i, y_j) = \Delta_{h,\,k} u(x_i, y_j)-\Delta u(x_i, y_j)$$ is given by $$\tau (x_i, y_j) = \tau_x (x_i, y_j)+\tau_y (x_i, y_j)$$.

finite difference operators defined
For the grid vector $$U = \{U_{i,\,j}\}, \quad 0 \le i \le m+1, \ 0 \le j \le n+1$$, define the finite difference operators $$\Delta_{i,\,j}\,U = ({\partial_h^2\over\partial_h x^2})_{i,\,j}\,U + ({\partial_k^2\over\partial_k y^2})_{i,\,j}\,U$$ by the following.

$$({\partial_h^2\over\partial_h x^2})_{1,\,j}\ U\ = {-\frac{1}{12}\,U_{4,\,j}+\frac{1}{3}\,U_{3,\,j} +\frac{1}{2}\,U_{2,\,j}-\frac{5}{3}\,U_{1,\,j} +\frac{11}{12}\,U_{0,\,j}\over\ h^2}$$

$$({\partial_h^2\over\partial_h x^2})_{i,\,j}\ U\ = {-\frac{1}{12}\,U_{i+2,\,j}+\frac{4}{3}\,U_{i+1,\,j} -\frac{5}{2}\,U_{i,\,j}+\frac{4}{3}\,U_{i-1,\,j} -\frac{1}{12}\,U_{i-2,\,j}\over\ h^2}$$

$$\text{for}\ i\,=\,2,\ 3,\ \ldots\, m-1 \quad \text{and}$$

$$({\partial_h^2\over\partial_h x^2})_{m,\,j}\ U\ = {\frac{11}{12}\,U_{m+1,\,j}-\frac{5}{3}\,U_{m,\,j} +\frac{1}{2}\,U_{m-1,\,j}+\frac{1}{3}\,U_{m-2,\,j} -\frac{1}{12}\,U_{m-3,\,j}\over\ h^2}$$

$$({\partial_k^2\over\partial_k y^2})_{i,\,1}\ U\ = {-\frac{1}{12}\,U_{i,\,4}+\frac{1}{3}\,U_{i,\,3} +\frac{1}{2}\,U_{i,\,2}-\frac{5}{3}\,U_{i,\,1} +\frac{11}{12}\,U_{i,\,0}\over\ k^2}$$

$$({\partial_k^2\over\partial_k y^2})_{i,\,j}\ U\ = {-\frac{1}{12}\,U_{i,\,j+2}+\frac{4}{3}\,U_{i,\,j+1} -\frac{5}{2}\,U_{i,\,j}+\frac{4}{3}\,U_{i,\,j-1} -\frac{1}{12}\,U_{i,\,j-2}\over\ k^2}$$

$$\text{for}\ j\,=\,2,\ 3,\ \ldots\, n-1 \quad \text{and}$$

$$({\partial_k^2\over\partial_k y^2})_{i,\,n}\ U\ = {\frac{11}{12}\,U_{i,\,n+1}-\frac{5}{3}\,U_{i,\,n} +\frac{1}{2}\,U_{i,\,n-1}+\frac{1}{3}\,U_{i,\,n-2} -\frac{1}{12}\,U_{i,\,n-3}\over\ k^2}$$

simulation of the problem
To simulate the problem (1.0)   let

$$\begin{align} U_{0,\,j}\ \ & = & g(x_0,\ y_j)\,, & \quad 0 \le j \le n+1 \\ U_{m+1,\,j}\ \ & = & g(x_{m+1},\ y_j)\,, & \quad 0 \le j \le n+1 \\ U_{i,\,0}\ \ & = & g(x_i,\ y_0)\,, & \quad 0 \le i \le m+1 \\ U_{i,\,n+1}\ \ & = & g(x_i,\ y_{n+1})\,, & \quad 0 \le i \le m+1 \\ \end{align}$$

Then solve the non-singular linear system $$-\Delta_{i,\,j}\,U = f(x_i,\,y_j), \quad 1 \le i \le m, \ 1 \le j \le n$$, for the remaining $$U_{i,\,j}$$.

error estimation
The error $$U - u = \{U_{i,\,j} - u(x_i,\,y_j)\}$$  satisfies $${\lVert U - u \rVert}_2 \le {\rho\, a^2 b^2\over\ \pi^2(a^2+b^2)} {\lVert \tau \rVert}_2 (1 + O(h^2+k^2))$$, for $$\tau = \{\tau (x_i,\,y_j)\}, \quad 1 \le i \le m,\ 1 \le j \le n$$, and $$\rho \ \le \ 1\, /\, (1 - (1 + \sqrt{10} )\,/\,12)$$.

proof of truncation error estimates
The truncation error estimates for $$\big ({\partial_h^2\over\partial_h x^2}\,u(x_i,\,y_j)\big )$$ are done under the assumption that $$u(x,\;y)$$  is sufficiently smooth so that $${\partial^5\over\partial x^5}\,u(x,\;y)$$ is continuous. For notational convenience let $$g(x)\;=\;u(x,\;y_j).$$ Expand $$g(x)$$  in it's Taylor expansion about $$x_i$$, $$\begin{align} g(x_i+\Delta x)\; & =\;g(x_i)+g^{(1)}(x_i)\Delta x +\frac{1}{2}\,g^{(2)}(x_i)({\Delta x})^2 +\frac{1}{6}\,g^{(3)}(x_i)({\Delta x})^3 \\ & +\frac{1}{24}\,g^{(4)}(x_i)({\Delta x})^4 +\frac{1}{120}\,g^{(5)}(z)({\Delta x})^5 \\ \end{align}$$. where $$z$$  is some number between  $$x_i$$ and $$x_i+\Delta x$$. Then $$\begin{align} & {\partial_h^2\over\partial_h x^2}\,u(x_1,\,y_j)\;= \\ & {\big ( -\frac{1}{12}g(x_1+3\,h)+\frac{1}{3}g(x_1+2\,h) +\frac{1}{2}g(x_1+h)-\frac{5}{3}g(x_1)+\frac{11}{12}g(x_1-h) \big ) \over\ h^2} \end{align}$$ $$\begin{align} & =\;g(x_1){(-\frac{1}{12}+\frac{1}{3}+\frac{1}{2}-\frac{5}{3}+\frac{11}{12})\over\ h^2} \\ & +\;g^{(1)}(x_1){(-\frac{1}{12}(3\,h)+\frac{1}{3}(2\,h)+\frac{1}{2}(h)-\frac{5}{3}(0)+\frac{11}{12}(-h))\over\ h^2} \\ & +\;\frac{1}{2}g^{(2)}(x_1){(-\frac{1}{12}(3\,h)^2+\frac{1}{3}(2\,h)^2+\frac{1}{2}(h)^2-\frac{5}{3}(0)^2+\frac{11}{12}(-h)^2)\over\ h^2} \\ & +\;\frac{1}{6}g^{(3)}(x_1){(-\frac{1}{12}(3\,h)^3+\frac{1}{3}(2\,h)^3+\frac{1}{2}(h)^3-\frac{5}{3}(0)^3+\frac{11}{12}(-h)^3)\over\ h^2} \\ & +\;\frac{1}{24}g^{(4)}(x_1){(-\frac{1}{12}(3\,h)^4+\frac{1}{3}(2\,h)^4+\frac{1}{2}(h)^4-\frac{5}{3}(0)^4+\frac{11}{12}(-h)^4)\over\ h^2} \\ & +\;\frac{1}{120}{(-\frac{1}{12}g^{(5)}(z_1)(3\,h)^5+\frac{1}{3}g^{(5)}(z_2)(2\,h)^5+\frac{1}{2}g^{(5)}(z_3)(h)^5-\frac{5}{3}(0)^5\over\ h^2} \\ & \quad \quad \quad \quad {+\frac{11}{12}g^{(5)}(z_4)(-h)^5)\over\ h^2} \\ \end{align}$$ $$=\;g^{(2)}(x_1) + {h^3\over\ 120}\big (-\frac{81}{4}g^{(5)}(z_1) +\frac{32}{3}g^{(5)}(z_2)+\frac{1}{2}g^{(5)}(z_3)-\frac{11}{12}g^{(5)}(z_4)\big ) $$ where $$\begin{align} & 
x_1\;<\;z_1\;<\;x_1+3\,h\,,\;x_1\;<\;z_2\;<\;x_1+2\,h\,,\; \\ & x_1\;<\;z_3\;<\;x_1+h\,,\;\;\text{and}\;\;x_1-h\;<\;z_4\;<\;x_1. \\ \end{align}$$ Since $$\frac{67}{6}\underset{0\; \le\; \phi \; \le\; 2}{\min(g^{(5)}(x_1\;+\;\phi\,h))}\; \le \;\frac{32}{3}g^{(5)}(z_2)+\frac{1}{2}g^{(5)}(z_3)\; \le \;\frac{67}{6}\underset{0\; \le\; \phi \; \le\; 2 }{\max(g^{(5)}(x_1\;+\;\phi\,h))},$$ $$\frac{254}{12}\underset{-1\; \le\; \phi \; \le\; 3}{\min(g^{(5)}(x_1\;+\;\phi\,h))}\; \le \;\frac{81}{4}g^{(5)}(z_1)+\frac{11}{12}g^{(5)}(z_4)\; \le \;\frac{254}{12}\underset{-1\; \le\; \phi \; \le\; 3 }{\max(g^{(5)}(x_1\;+\;\phi\,h))},$$ it follows from the intermediate value property that $$\frac{32}{3}g^{(5)}(z_2)+\frac{1}{2}g^{(5)}(z_3)\; = \;\frac{67}{6}\, g^{(5)}(x_1\;+\;\phi_1\,h),\quad \text{for} \; 0\;<\;\phi_1\;<\;2\,,$$ $$\frac{81}{4}g^{(5)}(z_1)+\frac{11}{12}g^{(5)}(z_4)\; = \;\frac{254}{12}\, g^{(5)}(x_1\;+\;\phi_2\,h),\quad \text{for} \; -1\;<\;\phi_2\;<\;3\,.$$  This gives  $$\begin{align} & {\partial_h^2\over\partial_h x^2}\,u(x_1,\,y_j)\;=\; g^{(2)}(x_1) + {h^3\over\ 120}\big ( \frac{67}{6}\,g^{(5)}(x_1+\phi_1\,h) - \frac{254}{12}\,g^{(5)}(x_1+\phi_2\,h)\big ) \\ & \;=\; {\partial^2\over\partial x^2}\,u(x_1,\,y_j) + {h^3\over\ 120}\big ( \frac{67}{6}{\partial^5\over\partial x^5} u(x_1+\phi_1\,h,\;y_j) - \frac{254}{12}{\partial^5\over\partial x^5} u(x_1+\phi_2\,h,\;y_j)\big ) \\ \end{align}$$  which is  $$\tau_x(x_1,\,y_j)\;=\;{h^3\over\ 120}M_x^5(x_1,\,y_j).$$
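The cancellations above amount to moment conditions on the stencil weights, which can be checked in exact rational arithmetic. A minimal sketch (the helper name `moment` is ours, not from the text):

```python
from fractions import Fraction as F

# One-sided stencil from the expansion above: weights c_j applied at offsets
# s_j, so that (1/h^2) * sum_j c_j g(x_1 + s_j h) approximates g''(x_1).
offsets = [3, 2, 1, 0, -1]
weights = [F(-1, 12), F(1, 3), F(1, 2), F(-5, 3), F(11, 12)]

def moment(p):
    """Coefficient of h^(p-2) g^(p)(x_1) in the expansion: (1/p!) sum_j c_j s_j^p."""
    fact = 1
    for n in range(2, p + 1):
        fact *= n
    return sum(c * s**p for c, s in zip(weights, offsets)) / fact

# Moments 0..4 are 0, 0, 1, 0, 0: every Taylor term through h^2 g'''' cancels
# except g'' itself; the fifth moment is the h^3 error coefficient.
assert [moment(p) for p in range(5)] == [0, 0, 1, 0, 0]
assert moment(5) == F(-10, 120)   # = -1/12, matching -81/4 + 32/3 + 1/2 - 11/12 = -10 over 120
```

The nonzero fifth moment is exactly why this boundary stencil is limited to order $$h^3$$.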

For $$i\;=\;2,\;3,\;\ldots\,,\;m-1$$ $$\begin{align} & {\partial_h^2\over\partial_h x^2}\,u(x_i,\,y_j)\;= \\ & {\big ( -\frac{1}{12}g(x_i+2\,h)+\frac{4}{3}g(x_i+h) -\frac{5}{2}g(x_i)+\frac{4}{3}g(x_i-h)-\frac{1}{12}g(x_i-2\,h) \big ) \over\ h^2} \end{align}$$ $$\begin{align} & =\;g(x_i){(-\frac{1}{12}+\frac{4}{3}-\frac{5}{2}+\frac{4}{3}-\frac{1}{12})\over\ h^2} \\ & +\;g^{(1)}(x_i){(-\frac{1}{12}(2\,h)+\frac{4}{3}(h)-\frac{5}{2}(0)+\frac{4}{3}(-h)-\frac{1}{12}(-2\,h))\over\ h^2} \\ & +\;\frac{1}{2}g^{(2)}(x_i){(-\frac{1}{12}(2\,h)^2+\frac{4}{3}(h)^2-\frac{5}{2}(0)^2+\frac{4}{3}(-h)^2-\frac{1}{12}(-2\,h)^2)\over\ h^2} \\ & +\;\frac{1}{6}g^{(3)}(x_i){(-\frac{1}{12}(2\,h)^3+\frac{4}{3}(h)^3-\frac{5}{2}(0)^3+\frac{4}{3}(-h)^3-\frac{1}{12}(-2\,h)^3)\over\ h^2} \\ & +\;\frac{1}{24}g^{(4)}(x_i){(-\frac{1}{12}(2\,h)^4+\frac{4}{3}(h)^4-\frac{5}{2}(0)^4+\frac{4}{3}(-h)^4-\frac{1}{12}(-2\,h)^4)\over\ h^2} \\ & +\;\frac{1}{120}{(-\frac{1}{12}g^{(5)}(z_1)(2\,h)^5+\frac{4}{3}g^{(5)}(z_2)(h)^5-\frac{5}{2}(0)^5+\frac{4}{3}g^{(5)}(z_3)(-h)^5\over\ h^2} \\ & \quad \quad \quad \quad {-\frac{1}{12}g^{(5)}(z_4)(-2\,h)^5)\over\ h^2} \\ \end{align}$$ $$=\;g^{(2)}(x_i) + {h^3\over\ 120}\big (-\frac{8}{3}g^{(5)}(z_1) +\frac{4}{3}g^{(5)}(z_2)-\frac{4}{3}g^{(5)}(z_3)+\frac{8}{3}g^{(5)}(z_4)\big ) $$ where $$\begin{align} & x_i\;<\;z_1\;<\;x_i+2\,h\,,\;x_i\;<\;z_2\;<\;x_i+h\,,\; \\ & x_i-h\;<\;z_3\;<\;x_i\,,\;\;\text{and}\;\;x_i-2\,h\;<\;z_4\;<\;x_i. \\ \end{align}$$ Reasoning as before, combining terms with like signs and using the intermediate value property, $$\frac{4}{3}g^{(5)}(z_2)+\frac{8}{3}g^{(5)}(z_4)\; = \;4\, g^{(5)}(x_i\;+\;\phi_1\,h),\quad \text{for} \; -2\;<\;\phi_1\;<\;1\,,$$ $$\frac{8}{3}g^{(5)}(z_1)+\frac{4}{3}g^{(5)}(z_3)\; = \;4\, g^{(5)}(x_i\;+\;\phi_2\,h),\quad \text{for} \; -1\;<\;\phi_2\;<\;2\,.$$
This gives  $$\begin{align} & {\partial_h^2\over\partial_h x^2}\,u(x_i,\,y_j)\;=\; g^{(2)}(x_i) + {h^3\over\ 120}\big ( 4\,g^{(5)}(x_i+\phi_1\,h) - 4\,g^{(5)}(x_i+\phi_2\,h)\big ) \\ & \;=\; {\partial^2\over\partial x^2}\,u(x_i,\,y_j) + {h^3\over\ 120}\big ( 4{\partial^5\over\partial x^5} u(x_i+\phi_1\,h,\;y_j) - 4{\partial^5\over\partial x^5} u(x_i+\phi_2\,h,\;y_j)\big ) \\ \end{align}$$  which is  $$\tau_x(x_i,\,y_j)\;=\;{h^3\over\ 120}M_x^5(x_i,\,y_j).$$
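The same exact-arithmetic check for the centered interior stencil shows why the two $$g^{(5)}$$ groups carry equal and opposite weights: all odd moments vanish by symmetry. A small sketch in the same spirit (names are ours):

```python
from fractions import Fraction as F

# Centered interior stencil: weights c_j at offsets s_j, so that
# (1/h^2) * sum_j c_j g(x_i + s_j h) approximates g''(x_i).
offsets = [2, 1, 0, -1, -2]
weights = [F(-1, 12), F(4, 3), F(-5, 2), F(4, 3), F(-1, 12)]

def moment(p):
    """(1/p!) sum_j c_j s_j^p, the coefficient of h^(p-2) g^(p)(x_i)."""
    fact = 1
    for n in range(2, p + 1):
        fact *= n
    return sum(c * s**p for c, s in zip(weights, offsets)) / fact

# Moments 0..5 are 0, 0, 1, 0, 0, 0 -- the fifth moment vanishes by symmetry,
# which is why the h^3 terms pair off with coefficients 4 and -4.
assert [moment(p) for p in range(6)] == [0, 0, 1, 0, 0, 0]
assert moment(6) == F(-1, 90)   # leading h^4 coefficient under extra smoothness
```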

If, in addition, $${\partial^6\over\partial x^6}\,u(x,\;y)$$ is continuous, then in the preceding argument the expression $$\begin{align} & +\;\frac{1}{120}{(-\frac{1}{12}g^{(5)}(z_1)(2\,h)^5+\frac{4}{3}g^{(5)}(z_2)(h)^5-\frac{5}{2}(0)^5+\frac{4}{3}g^{(5)}(z_3)(-h)^5\over\ h^2} \\ & \quad \quad \quad \quad {-\frac{1}{12}g^{(5)}(z_4)(-2\,h)^5)\over\ h^2} \\ \end{align}$$ can be replaced by $$\begin{align} & +\;\frac{1}{120}g^{(5)}(x_i){(-\frac{1}{12}(2\,h)^5+\frac{4}{3}(h)^5-\frac{5}{2}(0)^5+\frac{4}{3}(-h)^5\over\ h^2} \\ & \quad \quad \quad \quad {-\frac{1}{12}(-2\,h)^5)\over\ h^2} \\ & \; \\ & +\;\frac{1}{720}{(-\frac{1}{12}g^{(6)}(z_1)(2\,h)^6+\frac{4}{3}g^{(6)}(z_2)(h)^6-\frac{5}{2}(0)^6+\frac{4}{3}g^{(6)}(z_3)(-h)^6\over\ h^2} \\ & \quad \quad \quad \quad {-\frac{1}{12}g^{(6)}(z_4)(-2\,h)^6)\over\ h^2} \\ \end{align}$$ in which the $$g^{(5)}(x_i)$$ term vanishes, since its coefficient sums to $$0$$ by symmetry.

This gives $$\begin{align} & {\partial_h^2\over\partial_h x^2}\,u(x_i,\,y_j) \\ & \;=\; {\partial^2\over\partial x^2}\,u(x_i,\,y_j) + {h^4\over\ 720}\big ( \frac{8}{3}{\partial^6\over\partial x^6} u(x_i+\phi_1\,h,\;y_j) - \frac{32}{3}{\partial^6\over\partial x^6} u(x_i+\phi_2\,h,\;y_j)\big ) \\ \end{align}$$ with $$-1\;<\;\phi_1\;<\;1\,,\quad -2\;<\;\phi_2\;<\;2$$ which is  $$\tau_x(x_i,\,y_j)\;=\;{h^4\over\ 720}M_x^6(x_i,\,y_j).$$ The remaining truncation error estimates are done in the same way.
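As a floating-point cross-check of the two orders just derived, one can halve $$h$$ and watch how the errors shrink; a brief sketch (function names are ours, not from the text):

```python
import math

def one_sided(f, x, h):
    """One-sided approximation to f''(x) from the i = 1 stencil (order h^3)."""
    return (-f(x + 3*h)/12 + f(x + 2*h)/3 + f(x + h)/2
            - 5*f(x)/3 + 11*f(x - h)/12) / h**2

def centered(f, x, h):
    """Centered approximation to f''(x) from the interior stencil (order h^4)."""
    return (-f(x + 2*h)/12 + 4*f(x + h)/3 - 5*f(x)/2
            + 4*f(x - h)/3 - f(x - 2*h)/12) / h**2

x, exact = 1.0, -math.sin(1.0)          # f = sin, so f'' = -sin
rates = {}
for name, d2 in (("one-sided", one_sided), ("centered", centered)):
    e1 = abs(d2(math.sin, x, 0.10) - exact)
    e2 = abs(d2(math.sin, x, 0.05) - exact)
    rates[name] = math.log2(e1 / e2)    # observed order of accuracy

assert 2.5 < rates["one-sided"] < 3.5   # consistent with tau_x = O(h^3)
assert 3.5 < rates["centered"] < 4.5    # consistent with tau_x = O(h^4)
```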

Let the error $$e = \{e_{i,\,j}\}, \quad 1 \le i \le m, \ 1 \le j \le n,$$ be defined by $$e_{i,\,j} = U_{i,\,j} - u(x_i, y_j)$$, where $$U$$ is the solution of the finite difference scheme (xx) and $$u(x_i,\,y_j)$$ is the solution to (1.0). Since $$\begin{align} -\Delta_{i,\,j}\,e & = -\Delta_{i,\,j}\,U + \Delta_{h,\,k}\,u(x_i,\,y_j) \\ & = f(x_i,\,y_j) + (\Delta\,u(x_i,\,y_j) + \tau (x_i,\,y_j)) \\ & = \tau (x_i,\,y_j) \\ \end{align}$$ we get that $$ \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-\Delta_{i,\,j}\,e) \ = \ \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,\tau (x_i,\,y_j) \ \le \ {\lVert e \rVert}_2\, {\lVert \tau \rVert}_2 $$. Next it will be shown that the operator $$-\Delta_{i,\,j}$$ is positive definite; in particular, that $$ \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-\Delta_{i,\,j}\,e) $$  $$ \ge \; \lambda (4 a^{-2}\,{(m+1)}^2\,(something)+4 b^{-2}\,{(n+1)}^2\,)\, {\lVert e \rVert}_2^2 $$,

with $$\lambda \ \ge \ 1 - (1+\sqrt{10})\,/\,12$$.

Begin with $$ \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-\Delta_{i,\,j}\,e) \; = $$ $$ \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e - ({\partial_k^2\over\partial_k y^2})_{i,\,j}\,e) \; = $$ $$ \sum_{j = 1}^n \sum_{i = 1}^m e_{i,\,j}\,(-({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e) \; + \; \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-({\partial_k^2\over\partial_k y^2})_{i,\,j}\,e) $$ The sum $$ \sum_{i = 1}^m e_{i,\,j}\,(-({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e)$$ will be estimated first.

$$-h^2({\partial_h^2\over\partial_h x^2})_{1,\,j}\ e\ =\;\frac{1}{12}\,e_{4,\,j}-\frac{1}{3}\,e_{3,\,j} -\frac{1}{2}\,e_{2,\,j}+\frac{5}{3}\,e_{1,\,j} -\frac{11}{12}\,e_{0,\,j}$$

$$\begin{align} =\; & (\frac{1}{12}\,e_{3,\,j} -\frac{5}{4}\,e_{2,\,j} +\frac{5}{4}\,e_{1,\,j} -\frac{1}{12}\,e_{0,\,j}) \\ -\; & (-\frac{1}{12}\,e_{4,\,j} +\frac{5}{12}\,e_{3,\,j} -\frac{3}{4}\,e_{2,\,j} -\frac{5}{12}\,e_{1,\,j} +\frac{5}{6}\,e_{0,\,j}) \\ \end{align}$$

$$-h^2({\partial_h^2\over\partial_h x^2})_{i,\,j}\ e\ =\;\frac{1}{12}\,e_{i+2,\,j}-\frac{4}{3}\,e_{i+1,\,j} +\frac{5}{2}\,e_{i,\,j}-\frac{4}{3}\,e_{i-1,\,j} +\frac{1}{12}\,e_{i-2,\,j}$$ $$\begin{align} =\; & (\frac{1}{12}\,e_{i+2,\,j} & -\frac{5}{4}\,e_{i+1,\,j} & +\frac{5}{4}\,e_{i,\,j} & -\frac{1}{12}\,e_{i-1,\,j}) \\ -\; & (\frac{1}{12}\,e_{i+1,\,j} & -\frac{5}{4}\,e_{i,\,j} & +\frac{5}{4}\,e_{i-1,\,j} & -\frac{1}{12}\,e_{i-2,\,j}) \\ \end{align}$$

$$\text{for}\ i\,=\,2,\ 3,\ \ldots\,,\ m-1 \quad \text{and}$$

$$-h^2({\partial_h^2\over\partial_h x^2})_{m,\,j}\ e\ =\;-\frac{11}{12}\,e_{m+1,\,j}+\frac{5}{3}\,e_{m,\,j} -\frac{1}{2}\,e_{m-1,\,j}-\frac{1}{3}\,e_{m-2,\,j} +\frac{1}{12}\,e_{m-3,\,j}$$

$$\begin{align} =\; & (-\frac{5}{6}\,e_{m+1,\,j} +\frac{5}{12}\,e_{m,\,j} +\frac{3}{4}\,e_{m-1,\,j} -\frac{5}{12}\,e_{m-2,\,j} +\frac{1}{12}\,e_{m-3,\,j}) \\ -\; & (\frac{1}{12}\,e_{m+1,\,j} -\frac{5}{4}\,e_{m,\,j} +\frac{5}{4}\,e_{m-1,\,j} -\frac{1}{12}\,e_{m-2,\,j}) \\ \end{align}$$

The summation by parts formula is now stated so that it can be used.

$$ \sum_{i = 1}^m w_i\,(v_i - v_{i-1}) = w_m\,v_m - w_0\,v_0 - \sum_{i = 0}^{m-1} v_i\,(w_{i+1} - w_i) $$
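The formula is an identity for any two sequences, so it can be spot-checked directly on random data; a quick sketch:

```python
import random

random.seed(0)
m = 10
w = [random.random() for _ in range(m + 1)]   # w_0, ..., w_m
v = [random.random() for _ in range(m + 1)]   # v_0, ..., v_m

lhs = sum(w[i] * (v[i] - v[i - 1]) for i in range(1, m + 1))
rhs = (w[m] * v[m] - w[0] * v[0]
       - sum(v[i] * (w[i + 1] - w[i]) for i in range(m)))
assert abs(lhs - rhs) < 1e-12   # summation by parts holds up to roundoff
```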

Now, $$-h^2({\partial_h^2\over\partial_h x^2})_{i,\,j}\ e\ = v_{i,\,j} - v_{i-1,\,j}, \quad \text{with}$$ $$v_{0,\,j}\; =\; (-\frac{1}{12}\,e_{4,\,j} +\frac{5}{12}\,e_{3,\,j} -\frac{3}{4}\,e_{2,\,j} -\frac{5}{12}\,e_{1,\,j} +\frac{5}{6}\,e_{0,\,j}) $$ $$ =\; -\frac{1}{12}\,(e_{4,\,j} - e_{3,\,j}) +\frac{1}{3}\,(e_{3,\,j} - e_{2,\,j}) -\frac{5}{12}\,(e_{2,\,j} - e_{1,\,j}) -\frac{5}{6}\,(e_{1,\,j} - e_{0,\,j}) $$ $$\begin{align} v_{i,\,j}\; & =\; (\frac{1}{12}\,e_{i+2,\,j} -\frac{5}{4}\,e_{i+1,\,j} +\frac{5}{4}\,e_{i,\,j} -\frac{1}{12}\,e_{i-1,\,j}) \\ \; & =\; \frac{1}{12}\,(e_{i+2,\,j} - e_{i+1,\,j}) -\frac{7}{6}\,(e_{i+1,\,j} - e_{i,\,j}) +\frac{1}{12}\,(e_{i,\,j} - e_{i-1,\,j}) \\ \end{align}$$ $$\text{for}\ i\,=\,1,\ 2,\ \ldots\,,\ m-1 \quad \text{and}$$ $$v_{m,\,j}\; =\; (-\frac{5}{6}\,e_{m+1,\,j} +\frac{5}{12}\,e_{m,\,j} +\frac{3}{4}\,e_{m-1,\,j} -\frac{5}{12}\,e_{m-2,\,j} +\frac{1}{12}\,e_{m-3,\,j})$$ $$ =\; -\frac{5}{6}\,(e_{m+1,\,j} - e_{m,\,j}) -\frac{5}{12}\,(e_{m,\,j} - e_{m-1,\,j}) +\frac{1}{3}\,(e_{m-1,\,j} - e_{m-2,\,j}) -\frac{1}{12}\,(e_{m-2,\,j} - e_{m-3,\,j}) $$

$$\text{So the sum} \; \sum_{i = 1}^m e_{i,\,j}\,(-h^2({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e)=\; \sum_{i = 1}^m e_{i,\,j}\,(v_{i,\,j} - v_{i-1,\,j})$$ $$ = e_{m,\,j}\,v_{m,\,j} - e_{0,\,j}\,v_{0,\,j} - \sum_{i = 0}^{m-1} v_{i,\,j}\,(e_{i+1,\,j} - e_{i,\,j}) $$ Taking into account that  $$e_{m+1,\,j}\, = \, e_{0,\,j}\, = \, 0$$ it follows that $$ \sum_{i = 1}^m e_{i,\,j}\,(-h^2({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e)=\; - \sum_{i = 0}^m v_{i,\,j}\,(e_{i+1,\,j} - e_{i,\,j}) $$ $$ =\; - v_{0,\,j}\,(e_{1,\,j} - e_{0,\,j}) - \sum_{i = 1}^{m-1} v_{i,\,j}\,(e_{i+1,\,j} - e_{i,\,j}) - v_{m,\,j}\,(e_{m+1,\,j} - e_{m,\,j}) $$

$$\begin{align} =\; & \frac{1}{12}\,(e_{4,\,j} - e_{3,\,j})\,(e_{1,\,j} - e_{0,\,j}) -\frac{1}{3}\,(e_{3,\,j} - e_{2,\,j})\,(e_{1,\,j} - e_{0,\,j}) \\ & +\frac{5}{12}\,(e_{2,\,j} - e_{1,\,j})\,(e_{1,\,j} - e_{0,\,j}) +\frac{5}{6}\,(e_{1,\,j} - e_{0,\,j})^2 \\ \; & -\frac{1}{12}\sum_{i = 1}^{m-1}(e_{i+2,\,j} - e_{i+1,\,j}) (e_{i+1,\,j} - e_{i,\,j}) +\frac{7}{6}\sum_{i = 1}^{m-1}(e_{i+1,\,j} - e_{i,\,j})^2 \\ \; & -\frac{1}{12}\sum_{i = 1}^{m-1}(e_{i,\,j} - e_{i-1,\,j}) (e_{i+1,\,j} - e_{i,\,j}) +\frac{5}{6}\,(e_{m+1,\,j} - e_{m,\,j})^2 \\ \; & +\frac{5}{12}\,(e_{m,\,j} - e_{m-1,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \; -\frac{1}{3}\,(e_{m-1,\,j} - e_{m-2,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \\ \; & +\frac{1}{12}\,(e_{m-2,\,j} - e_{m-3,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \\ \end{align}$$
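Both the row-by-row telescoping $$-h^2({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e = v_{i,\,j} - v_{i-1,\,j}$$ and the collapsed sum can be checked numerically from the difference forms of $$v$$ given above; a sketch (the $$j$$ index is dropped, and the helper names are ours):

```python
import random

random.seed(1)
m = 12
e = [0.0] + [random.uniform(-1, 1) for _ in range(m)] + [0.0]  # e_0 .. e_{m+1}

def stencil_row(e, i, m):
    """-h^2 (d_h^2/d_h x^2) e at row i (the h^2 factor scaled out)."""
    if i == 1:
        return e[4]/12 - e[3]/3 - e[2]/2 + 5*e[1]/3 - 11*e[0]/12
    if i == m:
        return -11*e[m+1]/12 + 5*e[m]/3 - e[m-1]/2 - e[m-2]/3 + e[m-3]/12
    return e[i+2]/12 - 4*e[i+1]/3 + 5*e[i]/2 - 4*e[i-1]/3 + e[i-2]/12

def v(e, i, m):
    """The telescoping sequence v_i, written in difference form."""
    d = lambda j: e[j + 1] - e[j]
    if i == 0:
        return -d(3)/12 + d(2)/3 - 5*d(1)/12 - 5*d(0)/6
    if i == m:
        return -5*d(m)/6 - 5*d(m-1)/12 + d(m-2)/3 - d(m-3)/12
    return d(i + 1)/12 - 7*d(i)/6 + d(i - 1)/12

# each stencil row telescopes: row_i = v_i - v_{i-1}
for i in range(1, m + 1):
    assert abs(stencil_row(e, i, m) - (v(e, i, m) - v(e, i - 1, m))) < 1e-12

# with e_0 = e_{m+1} = 0, summation by parts collapses to a single sum
lhs = sum(e[i] * stencil_row(e, i, m) for i in range(1, m + 1))
rhs = -sum(v(e, i, m) * (e[i + 1] - e[i]) for i in range(m + 1))
assert abs(lhs - rhs) < 1e-12
```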

Collect like terms in the expression immediately above as follows. $$\frac{5}{6}\,(e_{1,\,j} - e_{0,\,j})^2+\frac{7}{6}\sum_{i = 1}^{m-1}(e_{i+1,\,j} - e_{i,\,j})^2+\frac{5}{6}\,(e_{m+1,\,j} - e_{m,\,j})^2$$ $$=\, \frac{7}{6}\sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2-\frac{1}{3}\,(e_{1,\,j} - e_{0,\,j})^2-\frac{1}{3}\,(e_{m+1,\,j} - e_{m,\,j})^2$$ $$-\frac{1}{12}\sum_{i = 1}^{m-1}(e_{i+2,\,j} - e_{i+1,\,j}) (e_{i+1,\,j} - e_{i,\,j})-\frac{1}{12}\sum_{i = 1}^{m-1}(e_{i,\,j} - e_{i-1,\,j})(e_{i+1,\,j} - e_{i,\,j})$$ $$\begin{align} =\, & -\frac{1}{6}\sum_{i = 2}^{m-1}(e_{i+1,\,j} - e_{i,\,j}) (e_{i,\,j} - e_{i-1,\,j})-\frac{1}{12}(e_{2,\,j} - e_{1,\,j}) (e_{1,\,j} - e_{0,\,j}) \\ \, & -\frac{1}{12}(e_{m+1,\,j} - e_{m,\,j})(e_{m,\,j} - e_{m-1,\,j}) \\ \end{align}$$

Now, rewrite the expression after making cancellations. $$\begin{align} & \frac{7}{6}\sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2-\frac{1}{6}\sum_{i = 2}^{m-1}(e_{i+1,\,j} - e_{i,\,j})(e_{i,\,j} - e_{i-1,\,j})-\frac{1}{3}\,(e_{1,\,j} - e_{0,\,j})^2 \\ & -\frac{1}{3}\,(e_{m+1,\,j} - e_{m,\,j})^2+\frac{1}{12}\,(e_{4,\,j} - e_{3,\,j})\,(e_{1,\,j} - e_{0,\,j}) \\ & -\frac{1}{3}\,(e_{3,\,j} - e_{2,\,j})\,(e_{1,\,j} - e_{0,\,j})+\frac{1}{3}\,(e_{2,\,j} - e_{1,\,j})\,(e_{1,\,j} - e_{0,\,j}) \\ & +\frac{1}{3}\,(e_{m,\,j} - e_{m-1,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \; -\frac{1}{3}\,(e_{m-1,\,j} - e_{m-2,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \\ \; & +\frac{1}{12}\,(e_{m-2,\,j} - e_{m-3,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \\ \end{align}$$ The following simple inequality will be used to bound terms. $$\begin{align} (a-b)^2 & = a^2 - 2\,a\,b + b^2\, \ge \, 0 \\ 2\,a\,b \, & \le \, a^2 + b^2 \\ a\,b \, & \le \, \frac{1}{2}\,(a^2 + b^2) \\ \end{align}$$ and also $$ a\,b \, \le \, \frac{1}{2}\,(\alpha^2\,a^2 + b^2\,/\,\alpha^2) $$. $$\begin{align} & \sum_{i = 2}^{m-1}\left\vert (e_{i+1,\,j} - e_{i,\,j})(e_{i,\,j} - e_{i-1,\,j}) \right\vert\;=\;\sum_{i = 3}^{m-2}\left\vert (e_{i+1,\,j} - e_{i,\,j})(e_{i,\,j} - e_{i-1,\,j}) \right\vert \\ & +\;\left\vert (e_{3,\,j} - e_{2,\,j})(e_{2,\,j} - e_{1,\,j}) \right\vert\;+\;\left\vert (e_{m,\,j} - e_{m-1,\,j})(e_{m-1,\,j} - e_{m-2,\,j}) \right\vert \\ & \;\le\;\frac{1}{2}(\sum_{i = 3}^{m-2}(e_{i+1,\,j} - e_{i,\,j})^2\,+\,\sum_{i = 3}^{m-2}(e_{i,\,j} - e_{i-1,\,j})^2) \\ & +\;\left\vert (e_{3,\,j} - e_{2,\,j})(e_{2,\,j} - e_{1,\,j}) \right\vert\;+\;\left\vert (e_{m,\,j} - e_{m-1,\,j})(e_{m-1,\,j} - e_{m-2,\,j}) \right\vert \\ \end{align}$$

$$\begin{align} & =\;\sum_{i = 3}^{m-3}(e_{i+1,\,j} - e_{i,\,j})^2\;+\;\frac{1}{2}\,(e_{3,\,j} - e_{2,\,j})^2\;+\;\frac{1}{2}\,(e_{m-1,\,j} - e_{m-2,\,j})^2 \\ & +\;\left\vert (e_{3,\,j} - e_{2,\,j})(e_{2,\,j} - e_{1,\,j}) \right\vert\;+\;\left\vert (e_{m,\,j} - e_{m-1,\,j})(e_{m-1,\,j} - e_{m-2,\,j}) \right\vert \\ \end{align}$$ $$\begin{align} & =\;\sum_{i = 2}^{m-2}(e_{i+1,\,j} - e_{i,\,j})^2\;-\;\frac{1}{2}\,(e_{3,\,j} - e_{2,\,j})^2\;-\;\frac{1}{2}\,(e_{m-1,\,j} - e_{m-2,\,j})^2 \\ & +\;\left\vert (e_{3,\,j} - e_{2,\,j})(e_{2,\,j} - e_{1,\,j}) \right\vert\;+\;\left\vert (e_{m,\,j} - e_{m-1,\,j})(e_{m-1,\,j} - e_{m-2,\,j}) \right\vert \\ \end{align}$$ $$\begin{align} & \le\;\sum_{i = 2}^{m-2}(e_{i+1,\,j} - e_{i,\,j})^2\;-\;\frac{1}{2}\,(e_{3,\,j} - e_{2,\,j})^2\;-\;\frac{1}{2}\,(e_{m-1,\,j} - e_{m-2,\,j})^2 \\ & +\;\frac{1}{2}\,(\alpha_1^2\,(e_{3,\,j} - e_{2,\,j})^2+(e_{2,\,j} - e_{1,\,j})^2\,/\,\alpha_1^2) \\ & +\;\frac{1}{2}\,(\beta_1^2\,(e_{m-1,\,j} - e_{m-2,\,j})^2+(e_{m,\,j} - e_{m-1,\,j})^2\,/\,\beta_1^2) \\ \end{align}$$ $$\begin{align} & \;\left\vert (e_{4,\,j} - e_{3,\,j})(e_{1,\,j} - e_{0,\,j}) \right\vert \;\le \;\frac{1}{2}\,(\alpha_2^2\,(e_{4,\,j} - e_{3,\,j})^2+(e_{1,\,j} - e_{0,\,j})^2\,/\,\alpha_2^2) \\ & \;\left\vert (e_{3,\,j} - e_{2,\,j})(e_{1,\,j} - e_{0,\,j}) \right\vert \;\le \;\frac{1}{2}\,(\alpha_3^2\,(e_{3,\,j} - e_{2,\,j})^2+(e_{1,\,j} - e_{0,\,j})^2\,/\,\alpha_3^2) \\ & \;\left\vert (e_{2,\,j} - e_{1,\,j})(e_{1,\,j} - e_{0,\,j}) \right\vert \;\le \;\frac{1}{2}\,(\alpha_4^2\,(e_{2,\,j} - e_{1,\,j})^2+(e_{1,\,j} - e_{0,\,j})^2\,/\,\alpha_4^2) \\ \end{align}$$ $$\begin{align} & \;\left\vert (e_{m,\,j} - e_{m-1,\,j})(e_{m+1,\,j} - e_{m,\,j}) \right\vert \\ & \quad \quad \quad \le \;\frac{1}{2}\,(\beta_2^2\,(e_{m,\,j} - e_{m-1,\,j})^2+(e_{m+1,\,j} - e_{m,\,j})^2\,/\,\beta_2^2) \\ & \;\left\vert (e_{m-1,\,j} - e_{m-2,\,j})(e_{m+1,\,j} - e_{m,\,j}) \right\vert \\ & \quad \quad \quad \le \;\frac{1}{2}\,(\beta_3^2\,(e_{m-1,\,j} - 
e_{m-2,\,j})^2+(e_{m+1,\,j} - e_{m,\,j})^2\,/\,\beta_3^2) \\ & \;\left\vert (e_{m-2,\,j} - e_{m-3,\,j})(e_{m+1,\,j} - e_{m,\,j}) \right\vert \\ & \quad \quad \quad \le \;\frac{1}{2}\,(\beta_4^2\,(e_{m-2,\,j} - e_{m-3,\,j})^2+(e_{m+1,\,j} - e_{m,\,j})^2\,/\,\beta_4^2) \\ \end{align}$$

Now, substitute all the inequalities into the expression.

$$\begin{align} & \frac{7}{6}\sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2-\frac{1}{6}\sum_{i = 2}^{m-1}(e_{i+1,\,j} - e_{i,\,j})(e_{i,\,j} - e_{i-1,\,j})-\frac{1}{3}\,(e_{1,\,j} - e_{0,\,j})^2 \\ & -\frac{1}{3}\,(e_{m+1,\,j} - e_{m,\,j})^2+\frac{1}{12}\,(e_{4,\,j} - e_{3,\,j})\,(e_{1,\,j} - e_{0,\,j}) \\ & -\frac{1}{3}\,(e_{3,\,j} - e_{2,\,j})\,(e_{1,\,j} - e_{0,\,j})+\frac{1}{3}\,(e_{2,\,j} - e_{1,\,j})\,(e_{1,\,j} - e_{0,\,j}) \\ & +\frac{1}{3}\,(e_{m,\,j} - e_{m-1,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \; -\frac{1}{3}\,(e_{m-1,\,j} - e_{m-2,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \\ \; & +\frac{1}{12}\,(e_{m-2,\,j} - e_{m-3,\,j})\,(e_{m+1,\,j} - e_{m,\,j}) \\ \end{align}$$

$$\begin{align} \ge\; & \frac{7}{6}\sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2-\frac{1}{6}\,\big ( \;\sum_{i = 2}^{m-2}(e_{i+1,\,j} - e_{i,\,j})^2\;-\;\frac{1}{2}\,(e_{3,\,j} - e_{2,\,j})^2\; \\ & -\;\frac{1}{2}\,(e_{m-1,\,j} - e_{m-2,\,j})^2 +\;\frac{1}{2}\,(\alpha_1^2\,(e_{3,\,j} - e_{2,\,j})^2+(e_{2,\,j} - e_{1,\,j})^2\,/\,\alpha_1^2) \\ & +\;\frac{1}{2}\,(\beta_1^2\,(e_{m-1,\,j} - e_{m-2,\,j})^2+(e_{m,\,j} - e_{m-1,\,j})^2\,/\,\beta_1^2)\; \big ) \\ & -\frac{1}{3}\,(e_{1,\,j} - e_{0,\,j})^2 -\frac{1}{3}\,(e_{m+1,\,j} - e_{m,\,j})^2 \\ & - \frac{1}{12}\,\big ( \frac{1}{2}\,(\alpha_2^2\,(e_{4,\,j} - e_{3,\,j})^2+(e_{1,\,j} - e_{0,\,j})^2\,/\,\alpha_2^2)\big ) \\ & - \frac{1}{3}\,\big ( \frac{1}{2}\,(\alpha_3^2\,(e_{3,\,j} - e_{2,\,j})^2+(e_{1,\,j} - e_{0,\,j})^2\,/\,\alpha_3^2)\big ) \\ & - \frac{1}{3}\,\big ( \frac{1}{2}\,(\alpha_4^2\,(e_{2,\,j} - e_{1,\,j})^2+(e_{1,\,j} - e_{0,\,j})^2\,/\,\alpha_4^2)\big ) \\ & - \frac{1}{3}\,\big ( \frac{1}{2}\,(\beta_2^2\,(e_{m,\,j} - e_{m-1,\,j})^2+(e_{m+1,\,j} - e_{m,\,j})^2\,/\,\beta_2^2)\big ) \\ & - \frac{1}{3}\,\big ( \frac{1}{2}\,(\beta_3^2\,(e_{m-1,\,j} - e_{m-2,\,j})^2+(e_{m+1,\,j} - e_{m,\,j})^2\,/\,\beta_3^2)\big ) \\ & - \frac{1}{12}\,\big ( \frac{1}{2}\,(\beta_4^2\,(e_{m-2,\,j} - e_{m-3,\,j})^2+(e_{m+1,\,j} - e_{m,\,j})^2\,/\,\beta_4^2)\big ) \\ \end{align}$$

$$\begin{align} & =\;\frac{7}{6}\sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2-\frac{1}{6} \;\sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2 \\ & \;-\;\big (\,\frac{1}{6}\,+\, (\frac{1}{24})\,/\,\alpha_2^2\,+\, (\frac{1}{6})\,/\,\alpha_3^2\,+\, (\frac{1}{6})\,/\,\alpha_4^2\,\big )\,(e_{1,\,j} - e_{0,\,j})^2 \\ & \;-\;\big (\,-\frac{1}{6}\,+\,(\frac{1}{12})\,/\,\alpha_1^2\,+\, \frac{1}{6}\,\alpha_4^2\,\big )\,(e_{2,\,j} - e_{1,\,j})^2 \\ & \;-\;\big (\,-\frac{1}{12}\,+\, \frac{1}{12}\,\alpha_1^2\,+\, \frac{1}{6}\,\alpha_3^2\,\big )\,(e_{3,\,j} - e_{2,\,j})^2 \\ & \;-\;\big (\,\frac{1}{24}\,\alpha_2^2\,\big )\,(e_{4,\,j} - e_{3,\,j})^2 \\ & \;-\;\big (\,\frac{1}{6}\,+\, (\frac{1}{6})\,/\,\beta_2^2\,+\, (\frac{1}{6})\,/\,\beta_3^2\,+\, (\frac{1}{24})\,/\,\beta_4^2\,\big )\,(e_{m+1,\,j} - e_{m,\,j})^2 \\ & \;-\;\big (\,-\frac{1}{6}\,+\,(\frac{1}{12})\,/\,\beta_1^2\,+\,\frac{1}{6}\,\beta_2^2\,\big )\,(e_{m,\,j} - e_{m-1,\,j})^2 \\ & \;-\;\big (-\frac{1}{12}\,+\,\frac{1}{12}\,\beta_1^2\,+\,\frac{1}{6}\,\beta_3^2\,\big )\,(e_{m-1,\,j} - e_{m-2,\,j})^2 \\ & \;-\;\big (\,\frac{1}{24}\,\beta_4^2\,\big )\,(e_{m-2,\,j} - e_{m-3,\,j})^2 \\ \end{align}$$ The choice $$\begin{align} & \alpha_1^2 = \beta_1^2 = (\sqrt{5}-1)\,/\,2\,, \alpha_2^2 = \beta_4^2 = 8\,, \\ & \alpha_3^2 = \alpha_4^2 = \beta_2^2 = \beta_3^2 = (11-\sqrt{5})\,/\,4\,, \\ \end{align}$$ bounds each of the coefficients involving the $$\alpha_i \text{'s}$$ and $$\beta_i \text{'s}$$  by  $$\frac{1}{3}$$   and yields the long-sought inequality $$\sum_{i = 1}^m e_{i,\,j}\,(-h^2\,({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e)\;\ge\;\frac{2}{3}\sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2$$ and $$\sum_{j = 1}^n \sum_{i = 1}^m e_{i,\,j}\,(-\,({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e)\;\ge\;\frac{2}{3}\,h^{-2}\,\sum_{j = 1}^n \sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2$$. 
Reasoning in exactly the same manner in the $$y$$ dimension, $$\sum_{j = 1}^n e_{i,\,j}\,(-k^2\,({\partial_k^2\over\partial_k y^2})_{i,\,j}\,e)\;\ge\;\frac{2}{3}\sum_{j = 0}^n(e_{i,\,j+1} - e_{i,\,j})^2$$ and $$\sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-\,({\partial_k^2\over\partial_k y^2})_{i,\,j}\,e)\;\ge\;\frac{2}{3}\,k^{-2}\,\sum_{i = 1}^m \sum_{j = 0}^n(e_{i,\,j+1} - e_{i,\,j})^2$$. Applying $$ \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-\Delta_{i,\,j}\,e) \; = $$ $$ \sum_{j = 1}^n \sum_{i = 1}^m e_{i,\,j}\,(-({\partial_h^2\over\partial_h x^2})_{i,\,j}\,e) \; + \; \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-({\partial_k^2\over\partial_k y^2})_{i,\,j}\,e) $$ leads to the inequality $$ \sum_{i = 1}^m \sum_{j = 1}^n e_{i,\,j}\,(-\Delta_{i,\,j}\,e) $$ $$\ge\;\frac{2}{3}\,h^{-2}\,\sum_{j = 1}^n \sum_{i = 0}^m(e_{i+1,\,j} - e_{i,\,j})^2\;+\;\frac{2}{3}\,k^{-2}\,\sum_{i = 1}^m \sum_{j = 0}^n(e_{i,\,j+1} - e_{i,\,j})^2$$.
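A randomized spot-check of the one-dimensional inequality just obtained (the $$j$$ index is dropped, the stencil rows are those stated earlier, and the helper name is ours):

```python
import random

random.seed(2)

def stencil_row(e, i, m):
    """-h^2 times the fourth-order difference at row i (h^2 scaled out)."""
    if i == 1:
        return e[4]/12 - e[3]/3 - e[2]/2 + 5*e[1]/3 - 11*e[0]/12
    if i == m:
        return -11*e[m+1]/12 + 5*e[m]/3 - e[m-1]/2 - e[m-2]/3 + e[m-3]/12
    return e[i+2]/12 - 4*e[i+1]/3 + 5*e[i]/2 - 4*e[i-1]/3 + e[i-2]/12

m = 20
for trial in range(200):
    e = [0.0] + [random.uniform(-1, 1) for _ in range(m)] + [0.0]  # zero boundary
    quad = sum(e[i] * stencil_row(e, i, m) for i in range(1, m + 1))
    diff = sum((e[i + 1] - e[i])**2 for i in range(m + 1))
    assert quad >= (2/3) * diff - 1e-10   # the positive definiteness bound
```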

scratch work
$$\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\le\,2\,\textstyle\sum_{i\,=\,1}^{k}\,\sqrt{\textstyle\sum_{j\,=\,1}^{\,i}\,v_j^2\;} \,\sqrt{\textstyle\sum_{j\,=\,1}^{\,i}(v_j-v_{j-1})^2} $$

$$\le\,2\,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,\textstyle\sum_{j\,=\,1}^{\,i}\,v_j^2\;} \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,\textstyle\sum_{j\,=\,1}^{\,i}(v_j-v_{j-1})^2} $$

$$=\,2\,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)\,v_i^2\;} \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)(v_i-v_{i-1})^2} $$

$$\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\le\,2\,k\,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\;} \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}(v_i-v_{i-1})^2} $$ Now, suppose for some $$\alpha$$

$$\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,v_i^2}\le\,\alpha\,k \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}(v_i-v_{i-1})^2} $$

$$v_k^2\;\le\,2\,\alpha\,k\,\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2 $$

So for $$i\;\le\;k$$

$$v_i^2\;\le\,2\,\alpha\,i\,\textstyle\sum_{j\,=\,1}^i(v_j-v_{j-1})^2 $$

$$\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\;\le\,2\,\alpha\,(\textstyle\sum_{i\,=\,1}^{k}\,i)\,\textstyle\sum_{j\,=\,1}^k(v_j-v_{j-1})^2 $$

$$\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\;\le\,\alpha\,\,k(k+1)\,\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2 $$

$$\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\,}\;\le\,\sqrt \alpha\,\sqrt{(1+\tfrac{1}{k})}\;k\,\sqrt{\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2} $$

The weighted inequality $$\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\;}\le\,2\,\sqrt k \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)(v_i-v_{i-1})^2} $$ is derived next. So for $$i\;\le\;k$$

$$\textstyle\sum_{j\,=\,1}^{\,i}\,v_j^2\le\,2\,\sqrt{\textstyle\sum_{j\,=\,1}^{\,i}\,(i-j+1)\,v_j^2\;} \,\sqrt{\textstyle\sum_{j\,=\,1}^{\,i}\,(i-j+1)(v_j-v_{j-1})^2} $$

$$\textstyle\sum_{i\,=\,1}^{k}\,\textstyle\sum_{j\,=\,1}^{\,i}\,v_j^2\le\,2\,\textstyle\sum_{i\,=\,1}^{k}\,\sqrt{\textstyle\sum_{j\,=\,1}^{\,i}\,(i-j+1)\,v_j^2\;} \,\sqrt{\textstyle\sum_{j\,=\,1}^{\,i}\,(i-j+1)(v_j-v_{j-1})^2} $$

$$\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)v_i^2\le\,2\,k\,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)\,v_i^2\;} \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)(v_i-v_{i-1})^2} $$

$$\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)v_i^2\;}\le\,2\,k \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)(v_i-v_{i-1})^2} $$

$$\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\le\,4\,k \,\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)(v_i-v_{i-1})^2 $$

$$\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,v_i^2\;}\le\,2\,\sqrt k \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)(v_i-v_{i-1})^2} $$
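This weighted inequality can be spot-checked on random sequences with $$v_0\,=\,0$$; a brief sketch:

```python
import math, random

random.seed(3)
for k in (5, 20, 100):
    for _ in range(100):
        v = [0.0] + [random.uniform(-1, 1) for _ in range(k)]   # v_0 = 0
        lhs = math.sqrt(sum(v[i]**2 for i in range(1, k + 1)))
        rhs = 2 * math.sqrt(k) * math.sqrt(
            sum((k - i + 1) * (v[i] - v[i - 1])**2 for i in range(1, k + 1)))
        assert lhs <= rhs   # holds with a wide margin on random data
```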

$$\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)v_i^2\le\,2\,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,\textstyle\sum_{j\,=\,1}^{\,i}\,(i-j+1)\,v_j^2\;} \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,\textstyle\sum_{j\,=\,1}^{\,i}\,(i-j+1)(v_j-v_{j-1})^2} $$


$$\le\,2\,\sqrt k\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(k-i+1)\,v_i^2\;} \,\sqrt{\textstyle\sum_{i\,=\,1}^{k}\,(v_i-v_{i-1})^2} $$

Restating the earlier bound, $$\textstyle\sum_{i\,=\,1}^k\,v_i^2\le\,2\,k\,\sqrt{\textstyle\sum_{i\,=\,1}^k\,v_i^2\;} \,\sqrt{\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2} $$

$$\sqrt{\textstyle\sum_{i\,=\,1}^k\,v_i^2\;}\le\,2\,k\,\sqrt{\textstyle\sum_{i\,=\,1}^k(v_i-v_{i-1})^2} $$ which provides the starting value $$\alpha\,=\,2$$ for the iteration above.