User:SigmaJargon/Math5610Assignment4

1.) Spline vs. Cubic Hermite
a.) Data:

$$s(0)=1$$

$$s'(0)=1$$

$$s(1)=2$$

$$s'(1)=y$$

$$s(2)=2$$

$$s'(2)=0$$

The coefficients of the piecewise cubic function on the first interval (0-1) are found by solving:

$$ \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\  1 & 1 & 1 & 1 \\  3 & 2 & 1 & 0 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ \end{bmatrix}=\begin{bmatrix} 1 \\ 1 \\  2 \\  y \\ \end{bmatrix}$$

And on the second interval (1-2):

$$ \begin{bmatrix} 1 & 1 & 1 & 1 \\ 3 & 2 & 1 & 0 \\  8 & 4 & 2 & 1 \\  12 & 4 & 1 & 0 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ \end{bmatrix}=\begin{bmatrix} 2 \\ y \\ 2 \\ 0 \\ \end{bmatrix} $$

b.) This becomes a cubic spline if the second derivative is continuous. Since we only have two intervals, this holds exactly when the second derivatives of the two pieces agree at $$x=1$$. Solving the first system in terms of $$y$$ gives $$a=y-1$$ and $$b=1-y$$; the second gives $$a=y$$ and $$b=-5y$$. Equating $$s''(1)=6a+2b$$ for the two pieces:

$$6(y-1)+2(1-y)=6y-10y$$

$$4y-4=-4y$$

$$8y=4$$

$$y=1/2$$
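As a sanity check (a quick numpy sketch of my own, not part of the assignment), solving both 4x4 systems above with $$y=1/2$$ confirms that the second derivatives really do agree at $$x=1$$:

```python
import numpy as np

y = 0.5
# first piece on [0,1]: rows are s(0)=1, s'(0)=1, s(1)=2, s'(1)=y
A1 = np.array([[0, 0, 0, 1],
               [0, 0, 1, 0],
               [1, 1, 1, 1],
               [3, 2, 1, 0]], dtype=float)
a1, b1, c1, d1 = np.linalg.solve(A1, np.array([1, 1, 2, y]))

# second piece on [1,2]: rows are s(1)=2, s'(1)=y, s(2)=2, s'(2)=0
A2 = np.array([[1, 1, 1, 1],
               [3, 2, 1, 0],
               [8, 4, 2, 1],
               [12, 4, 1, 0]], dtype=float)
a2, b2, c2, d2 = np.linalg.solve(A2, np.array([2, y, 2, 0]))

# s''(1) = 6a + 2b agrees for both pieces, so the pieces form a cubic spline
assert np.isclose(6 * a1 + 2 * b1, 6 * a2 + 2 * b2)
```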

2.) More on Cubic Splines
Data:

$$(x_1,y_1)=(1,1)$$

$$(x_2,y_2)=(2,8)$$

$$(x_3,y_3)=(3,27)$$

$$(x_4,y_4)=(4,64)$$

a.) I'm not sure I should even bother solving the system of equations to find the coefficients - the data is exactly $$y=x^3$$ at each node, and since the interpolating polynomial is unique, it's pretty obvious that $$x^3$$ is what we're looking for.



b.)

$$\begin{bmatrix} 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 8 & 4 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 8 & 4 & 2 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 27 & 9 & 3 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 27 & 9 & 3 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 64 & 16 & 4 & 1 \\ 12 & 4 & 1 & 0 & -12 & -4 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 27 & 6 & 1 & 0 & -27 & -6 & -1 & 0 \\ 12 & 2 & 0 & 0 & -12 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 18 & 2 & 0 & 0 & -18 & -2 & 0 & 0 \\ 6 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 24 & 2 & 0 & 0 \\ \end{bmatrix}\begin{bmatrix} a_1 \\ b_1 \\ c_1 \\ d_1 \\ a_2 \\ b_2 \\ c_2 \\ d_2 \\ a_3 \\ b_3 \\ c_3 \\ d_3 \\ \end{bmatrix}=\begin{bmatrix} 1 \\ 8 \\ 8 \\ 27 \\ 27 \\ 64 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}$$

which yields the piecewise polynomial (note that the first two pieces happen to be the same cubic):

$$ P(x) = \begin{cases} 2x^3-6x^2+11x-6, & \text{if } 1\le x\le2 \\ 2x^3-6x^2+11x-6, & \text{if } 2<x\le3 \\ -4x^3+48x^2-151x+156, & \text{if } 3<x\le4 \end{cases} $$
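The 12x12 system is tedious by hand, so here is a quick numpy sketch (my own check, not required by the problem) that solves it and recovers exactly the coefficients above:

```python
import numpy as np

# the 12x12 system for the natural cubic spline coefficients (a_i, b_i, c_i, d_i)
A = np.array([
    [1, 1, 1, 1,   0, 0, 0, 0,    0, 0, 0, 0],
    [8, 4, 2, 1,   0, 0, 0, 0,    0, 0, 0, 0],
    [0, 0, 0, 0,   8, 4, 2, 1,    0, 0, 0, 0],
    [0, 0, 0, 0,   27, 9, 3, 1,   0, 0, 0, 0],
    [0, 0, 0, 0,   0, 0, 0, 0,    27, 9, 3, 1],
    [0, 0, 0, 0,   0, 0, 0, 0,    64, 16, 4, 1],
    [12, 4, 1, 0,  -12, -4, -1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0,   27, 6, 1, 0,   -27, -6, -1, 0],
    [12, 2, 0, 0,  -12, -2, 0, 0,  0, 0, 0, 0],
    [0, 0, 0, 0,   18, 2, 0, 0,   -18, -2, 0, 0],
    [6, 2, 0, 0,   0, 0, 0, 0,    0, 0, 0, 0],
    [0, 0, 0, 0,   0, 0, 0, 0,    24, 2, 0, 0],
], dtype=float)
b = np.array([1, 8, 8, 27, 27, 64, 0, 0, 0, 0, 0, 0], dtype=float)

coeffs = np.linalg.solve(A, b)
assert np.allclose(coeffs, [2, -6, 11, -6,  2, -6, 11, -6,  -4, 48, -151, 156])
```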



c.) The two are pretty similar near the region containing the interpolating points (which is what the graphs depict). If we go a bit further out, however, the cubic spline quickly turns around and starts heading down, whereas the polynomial interpolation continues to increase.

3.) Linear Programming
First off, on the transformation of an inequality constraint to an equality constraint: if we require that $$Ax\le b$$, then it follows that $$Ax+y=b$$ for some "slack" vector $$y\ge0$$. A sign constraint is really just an inequality constraint, and can be treated in the same way.
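A tiny numpy sketch of the slack-variable trick (the matrix and vectors here are made-up illustration data): the inequality system $$Ax\le b$$ becomes the equality system $$[A\ \ I][x;y]=b$$ with $$y\ge0$$:

```python
import numpy as np

# made-up illustration data
A = np.array([[1., 2.],
              [3., 1.]])
b = np.array([4., 5.])

# augment with an identity block: [A | I] [x; y] = b, with the slack y >= 0
A_eq = np.hstack([A, np.eye(2)])

x = np.array([1., 1.])   # some point satisfying Ax <= b
y = b - A @ x            # its slack vector
assert np.all(y >= 0)
assert np.allclose(A_eq @ np.concatenate([x, y]), b)
```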

As for Meg, despite your claim that you'll provide us with a program that will solve an LP problem in time independent of the size of the problem, I simply don't buy it. After all, consider this: say you have a problem with n variables subject to the equality constraint Ax=b. Even just accessing the values in A will take a multiple of n operations. And certainly a program must know the constraints before it is able to find a solution subject to them, so it must consume some number of operations that depends on the number of variables in the problem. With some research on Wikipedia and among scholarly papers I was not able to find an algorithm that makes such a claim. There were many with very efficient polynomial-time worst-case bounds, but none that was completely independent of the problem size.

Of course, I'll look forward to the code that you will be providing us. I'll be pleasantly surprised if it truly can handle, say, a problem subject to a 5x10 equality constraint as easily as a problem subject to a 5,000x15,000,000 equality constraint.

4.) Polynomial Interpolation
It appears that, sadly, I once again run out of time. However, I'll give an outline for how I was thinking of tackling the problem, so as to garner some beautiful partial credit.

It is probably possible to prove this inductively. First you'd have to show that $$\sum_{i=0}^n L_i(x)=1$$. Then you do an inductive step of assuming $$\sum_{i=0}^n x_i^j\,L_i(x)=x^j$$ and showing that $$x\cdot\sum_{i=0}^n x_i^j\,L_i(x)=\sum_{i=0}^n x_i^{j+1}\,L_i(x)$$.
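The base case $$\sum_{i=0}^n L_i(x)=1$$ (and in fact the stronger statement for every power $$x^j$$ with $$j\le n$$) is easy to spot-check numerically. A small Python sketch, with nodes of my own choosing:

```python
import numpy as np

def lagrange_basis(nodes, i, x):
    """Evaluate the i-th Lagrange basis polynomial at x."""
    terms = [(x - xj) / (nodes[i] - xj) for j, xj in enumerate(nodes) if j != i]
    return np.prod(terms, axis=0)

nodes = np.array([0., 1., 2., 3.])     # arbitrary distinct nodes
xs = np.linspace(-1., 4., 7)           # sample points, inside and outside [0,3]

# partition of unity: the basis functions sum to 1 everywhere
total = sum(lagrange_basis(nodes, i, xs) for i in range(len(nodes)))
assert np.allclose(total, 1.0)

# and sum_i x_i^j L_i(x) reproduces x^j for every j <= n
for j in range(len(nodes)):
    recon = sum(nodes[i]**j * lagrange_basis(nodes, i, xs) for i in range(len(nodes)))
    assert np.allclose(recon, xs**j)
```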

5.) Runge Interpolation and 6.) Judicious Interpolation
Uploading the 40 images for these two questions would be a headache and a chore. Therefore I provide 4 images - the interpolations at n=10 and n=20. If you'd like to generate more, then you can use these Maple worksheets I created:

Runge Interpolation

Judicious Interpolation

[Image: 10th degree Runge]

[Image: 20th degree Runge]

[Image: 10th degree Judicious]

[Image: 20th degree Judicious]
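For anyone without Maple, here is a rough Python equivalent of what those worksheets do. It uses numpy's polyfit for the interpolant, and Chebyshev points for the "judicious" nodes (my assumption about what the judicious choice is):

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n = 10
eq_nodes = np.linspace(-1.0, 1.0, n + 1)   # equally spaced nodes (Runge)
# Chebyshev nodes -- assumed to be the "judicious" choice
cheb_nodes = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))

xs = np.linspace(-1.0, 1.0, 401)
p_eq = np.polyval(np.polyfit(eq_nodes, runge(eq_nodes), n), xs)
p_ch = np.polyval(np.polyfit(cheb_nodes, runge(cheb_nodes), n), xs)

err_eq = np.max(np.abs(p_eq - runge(xs)))   # blows up near the endpoints
err_ch = np.max(np.abs(p_ch - runge(xs)))   # stays modest
```

The maximum errors make the pictures quantitative: the equally spaced interpolant oscillates wildly near the endpoints, while the Chebyshev one does not.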

7.) Interpolation of Symmetric Data is Symmetric
We're interpolating over 2n+1 points, so the interpolating polynomial has degree at most 2n, and it is unique.

For the sake of contradiction, let's say that the interpolant isn't odd. Then there is some point $$x_s$$ along the curve so that $$P(x_s)\ne-P(-x_s)$$. $$x_s$$ cannot be any of the interpolating points, since $$P(x_i)=y_i=-y_{-i}=-P(x_{-i})=-P(-x_i)$$. We can define $$Q(x)$$ by rotating $$P(x)$$ 180 degrees about the origin - that is, $$Q(x)=-P(-x)$$. All of the interpolating points lie along this polynomial, since $$Q(x_i)=-P(-x_i)=-P(x_{-i})=-y_{-i}=y_i$$ for all i. However, since $$P(x_s)\ne-P(-x_s)$$, the point $$Q(x_s)=-P(-x_s)\ne P(x_s)$$, so $$P(x)$$ and $$Q(x)$$ are different polynomials interpolating the same 2n+1 points. However, the interpolating polynomial is unique! Thus we have a contradiction, and it is proved that $$P(x)$$ is odd.
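A quick numerical illustration of the theorem, using random odd-symmetric data of my own choosing (not from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
xs = np.arange(1, n + 1).astype(float)
ys = rng.normal(size=n)

# symmetric data: (x_{-i}, -y_i), (0, 0), (x_i, y_i) -- 2n+1 points in all
X = np.concatenate([-xs[::-1], [0.0], xs])
Y = np.concatenate([-ys[::-1], [0.0], ys])

coeffs = np.polyfit(X, Y, 2 * n)   # the unique interpolant of degree <= 2n

# the interpolant is an odd function: P(-t) = -P(t)
t = np.linspace(-3.0, 3.0, 50)
assert np.allclose(np.polyval(coeffs, -t), -np.polyval(coeffs, t), atol=1e-7)
```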

8.) Linear Independence of Bernstein-Bezier Basis Functions
The first and most important thing to note about the Bernstein-Bezier basis functions is that if you expand the $$(1-x)^{n-i}$$, you always end up with a 1, plus a bunch of higher-order terms. If you then multiply by $$x^i$$, the lowest-order term in that basis function has order i. Furthermore, it is the last basis function to contain a term of order i, because the (i+1)th basis function's lowest term is $$x^{i+1}$$, and so on.

Then consider $$c_0$$. The 0th basis function contains a non-zero term of order 0 (that is, a constant). It is the last, and thus the only, basis function to contain such a term. Since the linear combination of the basis functions is identically zero, its constant term must vanish, and we conclude $$c_0=0$$.

We then assume that $$c_0=c_1=\dots=c_{k-1}=0$$, and seek a proof that $$c_k=0$$. Consider the kth basis function: it is the last basis function to contain a term of order k. In the linear combination, every basis function before the kth contributes nothing, since its coefficient is zero. So the kth basis function is the only one left contributing a term of order k. But the combination is identically zero, so the coefficient of $$x^k$$ must vanish, and we conclude $$c_k=0$$.

Thus, by induction, $$c_i=0$$ for all i.
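The triangular structure this argument relies on can be made explicit: writing each $$B_i(x)=\binom{n}{i}x^i(1-x)^{n-i}$$ in the power basis gives an upper-triangular coefficient matrix with non-zero diagonal. A Python sketch for an arbitrary n of my choosing:

```python
import math
import numpy as np

n = 5  # arbitrary degree for the check
# M[i, j] = coefficient of x^j in B_i(x) = C(n,i) x^i (1-x)^{n-i}
M = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    for k in range(n - i + 1):  # expand (1-x)^{n-i} binomially
        M[i, i + k] = math.comb(n, i) * math.comb(n - i, k) * (-1) ** k

# upper triangular with non-zero diagonal => the B_i are linearly independent
assert np.allclose(M, np.triu(M))
assert np.all(np.diag(M) != 0)
```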

9.) Uniqueness of Interpolating Polynomial
a.) Power Form

$$ \begin{bmatrix} 1  & 1  & 1 & 1 \\  8   & 4  & 2 & 1 \\  64  & 16 & 4 & 1 \\  512 & 64 & 8 & 1 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ \end{bmatrix}=\begin{bmatrix} 1 \\ 2 \\  3 \\  4 \\ \end{bmatrix}$$

This yields $$a=1/56$$, $$b=-7/24$$, $$c=7/4$$, and $$d=-10/21$$. So $$P(x)=x^3/56-7x^2/24+7x/4-10/21$$.

b.) Lagrange Form

$$ L_1=\frac{(x-2)(x-4)(x-8)}{(1-2)(1-4)(1-8)} $$

$$ L_2=\frac{(x-1)(x-4)(x-8)}{(2-1)(2-4)(2-8)} $$

$$ L_3=\frac{(x-1)(x-2)(x-8)}{(4-1)(4-2)(4-8)} $$

$$ L_4=\frac{(x-1)(x-2)(x-4)}{(8-1)(8-2)(8-4)} $$

So $$ P(x)=1\cdot L_1+2\cdot L_2+3\cdot L_3+4\cdot L_4 \,\! $$. Substituting and simplifying everything down leaves us with: $$ P(x)=\frac{x^3}{56}-\frac{7x^2}{24}+\frac{7x}{4}-\frac{10}{21} $$

c.) Newton Form

$$ P(x)=b_0+b_1(x-x_0)+b_2(x-x_0)(x-x_1)+b_3(x-x_0)(x-x_1)(x-x_2) $$

$$ P(x_0)=b_0 $$

$$ P(x_1)=b_0+b_1(x_1-x_0) $$

$$ P(x_2)=b_0+b_1(x_2-x_0)+b_2(x_2-x_0)(x_2-x_1) $$

$$ P(x_3)=b_0+b_1(x_3-x_0)+b_2(x_3-x_0)(x_3-x_1)+b_3(x_3-x_0)(x_3-x_1)(x_3-x_2) $$

$$ 1=b_0 $$

$$ 2=b_0+b_1(2-1)=1+b_1 $$

$$ 1=b_1 $$

$$ 3=b_0+b_1(4-1)+b_2(4-1)(4-2)=1+1(4-1)+b_2(4-1)(4-2)=4+6b_2 $$

$$ -1/6=b_2 $$

$$ 4=b_0+b_1(8-1)+b_2(8-1)(8-2)+b_3(8-1)(8-2)(8-4)=1+7-7+168b_3=1+168b_3 $$

$$ 1/56=b_3 $$

So, $$ P(x)=1+(x-1)-(x-1)(x-2)/6+(x-1)(x-2)(x-4)/56 $$. If we expand all the terms and then simplify, we arrive at: $$P(x)=\frac{x^3}{56}-\frac{7x^2}{24}+\frac{7x}{4}-\frac{10}{21}$$

And, as expected, these are all the same. Huzzah.
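And a one-line numpy check (my own, beyond what the problem asks) agrees with all three hand computations:

```python
import numpy as np

xs = np.array([1.0, 2.0, 4.0, 8.0])
ys = np.array([1.0, 2.0, 3.0, 4.0])

V = np.vander(xs, 4)  # columns x^3, x^2, x, 1 -- the power-form matrix above
a, b, c, d = np.linalg.solve(V, ys)
assert np.allclose([a, b, c, d], [1/56, -7/24, 7/4, -10/21])
```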

10.) The Method of Undetermined Coefficients
We expect that since we have four points, we'll at least have accuracy up to cubic functions. We then want to check whether we can find values for $$a_0,\ a_1,\ a_2,\ a_3$$ such that we have exactness for quartic polynomials.

I claim that there are no $$a_0,\ a_1,\ a_2,\ a_3$$ that fulfill the conditions. To show this, consider this counter-example:

Let $$a=0,\ h=1$$. In order to find $$a_0,\ a_1,\ a_2,\ a_3$$, let's throw four specific functions at the equality, and see what $$a_0,\ a_1,\ a_2,\ a_3$$ turn up.

$$f_0(x)=x^4\,\!$$

$$f_1(x)=x^4+1\,\!$$

$$f_2(x)=x^4+2\,\!$$

$$f_3(x)=x^4+3\,\!$$

$$\int_0^1 f(x)\,dx=a_0f(0)+a_1f(1/3)+a_2f(1/2)+a_3f(1)$$. Plugging in the four functions above yields:

$$1/5=0a_0+a_1/81+a_2/16+1a_3\,\!$$

$$6/5=1a_0+82a_1/81+17a_2/16+2a_3\,\!$$

$$11/5=2a_0+163a_1/81+33a_2/16+3a_3\,\!$$

$$16/5=3a_0+244a_1/81+49a_2/16+4a_3\,\!$$

Or, in matrix form:

$$ \begin{bmatrix} 0 & 1/81  & 1/16  & 1 \\  1 & 82/81  & 17/16 & 2 \\  2 & 163/81 & 33/16 & 3 \\  3 & 244/81 & 49/16 & 4 \\ \end{bmatrix}\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ \end{bmatrix}=\begin{bmatrix} 1/5 \\  6/5  \\  11/5 \\  16/5 \\ \end{bmatrix} $$

$$ \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ \end{bmatrix}=\begin{bmatrix} 0.0750651 \\ -0.5958984 \\   1.4010417 \\   0.1197917 \\ \end{bmatrix} $$

However, if we consider a slightly different set of functions

$$f_0(x)=x^4+1\,\!$$

$$f_1(x)=x^4+2\,\!$$

$$f_2(x)=x^4+3\,\!$$

$$f_3(x)=x^4+5\,\!$$

And try to find $$a_0,\ a_1,\ a_2,\ a_3$$, we get:

$$ \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ \end{bmatrix}=\begin{bmatrix} 0.7980469 \\   1.1707031  \\  -1.2312500 \\   0.2625000 \\ \end{bmatrix} $$

Which is different! So we can't find a solution for $$a_0,\ a_1,\ a_2,\ a_3$$ that works for all quartic functions. And if we can't find working weights for quartic functions, we aren't going to find them for higher-order polynomials.

So! We now want to find the $$a_0,\ a_1,\ a_2,\ a_3$$ for cubic functions!

To do so, we use the procedure described in the 10/25 lecture.

$$L_0=\frac{(x-(t+h/3))(x-(t+h/2))(x-(t+h))}{(t-(t+h/3))(t-(t+h/2))(t-(t+h))}$$

$$L_1=\frac{(x-t)(x-(t+h/2))(x-(t+h))}{((t+h/3)-t)((t+h/3)-(t+h/2))((t+h/3)-(t+h))}$$

$$L_2=\frac{(x-t)(x-(t+h/3))(x-(t+h))}{((t+h/2)-t)((t+h/2)-(t+h/3))((t+h/2)-(t+h))}$$

$$L_3=\frac{(x-t)(x-(t+h/2))(x-(t+h/3))}{((t+h)-t)((t+h)-(t+h/2))((t+h)-(t+h/3))}$$

$$\int_t^{t+h} f(x)\,dx\approx\int_t^{t+h} L_0f(t)+L_1f(t+h/3)+L_2f(t+h/2)+L_3f(t+h)\,dx$$

$$=f(t)\int_t^{t+h} L_0\,dx+f(t+h/3)\int_t^{t+h} L_1\,dx+f(t+h/2)\int_t^{t+h} L_2\,dx+f(t+h)\int_t^{t+h} L_3\,dx$$

$$=h\left(\frac{f(t)}{6}+0\cdot f(t+h/3)+\frac{2f(t+h/2)}{3}+\frac{f(t+h)}{6}\right)\,\!$$

(the weight on the $$t+h/3$$ node integrates out to exactly zero).

But wait a moment - this is Simpson's Rule! How wonderfully convenient for the next problem...
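The vanishing weight on the $$t+h/3$$ node is easy to verify exactly with rational arithmetic. A Python sketch that integrates each Lagrange basis polynomial over [0,1] (i.e. taking t=0, h=1):

```python
from fractions import Fraction as F

nodes = [F(0), F(1, 3), F(1, 2), F(1)]

def weight(i):
    """Exact integral over [0,1] of the i-th Lagrange basis polynomial."""
    coeffs = [F(1)]   # polynomial coefficients, lowest degree first
    denom = F(1)
    for j, xj in enumerate(nodes):
        if j == i:
            continue
        # multiply the running polynomial by (x - xj)
        new = [F(0)] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k + 1] += c
            new[k] -= xj * c
        coeffs = new
        denom *= nodes[i] - xj
    # integrate term by term: x^k integrates to 1/(k+1) on [0,1]
    return sum(c / (k + 1) for k, c in enumerate(coeffs)) / denom

weights = [weight(i) for i in range(4)]
# Simpson's Rule in disguise: the node at 1/3 gets weight 0
assert weights == [F(1, 6), F(0), F(2, 3), F(1, 6)]
```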


11.) Error Analysis
You sly dog you! The error analysis for (10) is the same as the error analysis for Simpson's Rule!

Dropping the variables used in the previous problem in favor of those standard for Simpson's Rule, we have $$E(c)=F(c+h)-F(c-h)-\frac{h}{3}(f(c-h)+4f(c)+f(c+h))$$, where $$F$$ is an antiderivative of $$f$$.

Expanding everything in a Taylor polynomial about c (degree 5 for the F parts, degree 4 for the f parts), and using $$F'=f$$ so that $$F^{(k)}=f^{(k-1)}$$, gives:

$$E(c)=\left(F+hf+\frac{h^2}{2}f^{(1)}+\frac{h^3}{6}f^{(2)}+\frac{h^4}{24}f^{(3)}+\frac{h^5}{120}f^{(4)}\right)-\left(F-hf+\frac{h^2}{2}f^{(1)}-\frac{h^3}{6}f^{(2)}+\frac{h^4}{24}f^{(3)}-\frac{h^5}{120}f^{(4)}\right)-\frac{h}{3}\left(f-hf^{(1)}+\frac{h^2}{2}f^{(2)}-\frac{h^3}{6}f^{(3)}+\frac{h^4}{24}f^{(4)}\right)-\frac{4h}{3}f-\frac{h}{3}\left(f+hf^{(1)}+\frac{h^2}{2}f^{(2)}+\frac{h^3}{6}f^{(3)}+\frac{h^4}{24}f^{(4)}\right)$$

where F, f, and all of their derivatives are evaluated at c.

After canceling out and simplifying down, we arrive at:

$$E(c)=\left(\frac{1}{120}+\frac{1}{120}-\frac{1}{72}-\frac{1}{72}\right)h^5f^{(4)}(c)=-\frac{h^5}{90}f^{(4)}(c)$$ plus higher order terms.
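A quick numerical check of the $$-\frac{h^5}{90}f^{(4)}$$ error term, using $$f(x)=x^4$$ (chosen by me because its fifth and higher derivatives vanish, so the formula is exact rather than asymptotic):

```python
# Verify E(c) = -(h^5/90) f''''(c) for f(x) = x^4, where f'''' = 24 and F(x) = x^5/5.
def f(x):
    return x**4

def F(x):
    return x**5 / 5

c = 0.5
for h in (0.1, 0.05, 0.025):
    E = F(c + h) - F(c - h) - (h / 3) * (f(c - h) + 4 * f(c) + f(c + h))
    # for a quartic the higher-order terms vanish, so the error formula is exact
    assert abs(E + 24 * h**5 / 90) < 1e-12
```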


12.) The Method of Undetermined Coefficients, Continued
Wait wait wait. h has no relation to the point we're considering. So it is arbitrary, another number we can also pick so as to maximize our polynomial degree. Since this is so, it quickly falls out that this formula is accurate for polynomials of degree as high as we wish: simply set $$a_0=-1,\ a_1=a_2=0,\ a_3=1$$. Then we have $$f'(a)=\frac{f(a+h)-f(a)}{h}$$. Of course, you should see where this is going - simply take a small h, or in general the limit as h goes to 0, and we have that $$f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}$$. This we know to be true, as it is the definition of the derivative.
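And numerically, the forward difference does converge at first order. A short Python sketch (the test function is my own choice):

```python
# Forward difference (a_0 = -1, a_3 = 1): first-order accurate, error ~ f''(a) h / 2.
def f(x):
    return x**3

a = 1.0
exact = 3.0 * a**2   # f'(x) = 3x^2
for h in (1e-2, 1e-4, 1e-6):
    approx = (f(a + h) - f(a)) / h
    # error is about f''(a) h / 2 = 3h here, so it shrinks linearly with h
    assert abs(approx - exact) < 4 * h
```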