User:SigmaJargon/Sandbox

1.) The Reverse Triangle Inequality
Consider two vectors x and y. Without loss of generality, say $$\parallel x \parallel \ge \parallel y \parallel$$

Since $$\parallel \cdot \parallel$$ is a vector norm, the triangle inequality holds. Then if you write x as x=(x-y)+y:

$$\parallel x \parallel = \parallel x-y+y \parallel \le \parallel x-y \parallel + \parallel y \parallel$$

$$\parallel x \parallel - \parallel y \parallel \le \parallel x-y \parallel $$

Since $$\parallel x \parallel \ge \parallel y \parallel$$, $$\parallel x \parallel - \parallel y \parallel \ge 0$$ and thus $$\parallel x \parallel - \parallel y \parallel = \left | \parallel x \parallel - \parallel y \parallel \right \vert$$. So, finally:

$$\left | \parallel x \parallel - \parallel y \parallel \right \vert \le \parallel x-y \parallel $$
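This bound is easy to spot-check numerically. A minimal NumPy sketch (the vector length, number of trials, and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    # | ||x|| - ||y|| |  <=  ||x - y||   (2-norm here; the proof holds for any norm)
    assert abs(np.linalg.norm(x) - np.linalg.norm(y)) <= np.linalg.norm(x - y) + 1e-12
```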

2.) An Induced Matrix Norm
If $$[s_1,s_2,\ldots,s_n]$$ are the absolute column sums of A (that is, $$s_k=\sum_{i=1}^{n}{\left | a_{i,k} \right |}$$), then $$\parallel A \parallel_{1}=\max_{k}{s_k}$$.

$$\parallel A \parallel_{1} = \max_{\parallel x \parallel_{1}=1}{\parallel Ax \parallel_{1}}$$

$$Ax=\begin{bmatrix} a_{1,1}x_{1}+a_{1,2}x_{2}+\cdots+a_{1,n}x_{n} \\ a_{2,1}x_{1}+a_{2,2}x_{2}+\cdots+a_{2,n}x_{n} \\ \vdots \\ a_{n,1}x_{1}+a_{n,2}x_{2}+\cdots+a_{n,n}x_{n} \end{bmatrix}$$
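NumPy's `np.linalg.norm(A, 1)` computes the induced 1-norm directly, so the max-absolute-column-sum characterization can be spot-checked (matrix size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
col_sums = np.abs(A).sum(axis=0)      # s_k = sum_i |a_{i,k}|
# induced 1-norm equals the largest absolute column sum
assert np.isclose(np.linalg.norm(A, 1), col_sums.max())
```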

3.) Frobenius Norm
One definition of an induced matrix norm is that $$\parallel A \parallel=\max \begin{Bmatrix} \frac{\parallel Ax \parallel}{\parallel x \parallel}\ :\ x \in K^n\ \mbox{with}\ x \ne 0 \end{Bmatrix}$$

It follows immediately that if A=I, then $$\parallel A \parallel=1$$

However, $$\parallel I_2 \parallel_F=\sqrt{2} \ne 1$$, so the Frobenius norm is not an induced norm.
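Both norms of the identity are one-liners in NumPy, which makes the counterexample easy to confirm:

```python
import numpy as np

I2 = np.eye(2)
assert np.isclose(np.linalg.norm(I2, 'fro'), np.sqrt(2))  # Frobenius norm is sqrt(2)
assert np.isclose(np.linalg.norm(I2, 2), 1.0)             # induced 2-norm is 1
```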

4.) Spectral Radius
For an $$n \times n$$ matrix A and $$\epsilon>0$$ we can find a natural norm satisfying $$\rho(A) \le \parallel A \parallel \le \rho(A)+\epsilon$$

The left-hand inequality we verified in class. The difficult bit is the right-hand inequality.

We can find a non-singular matrix P such that $$PAP^{-1} \equiv B \equiv \boldsymbol{\Lambda} + U$$

Where $$\boldsymbol{\Lambda}$$ is the diagonal matrix whose entries are the eigenvalues of A and U is strictly upper triangular.

... Argh. Incomplete, this proof is. The completion can be found here
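While the proof here is unfinished, the left-hand bound $$\rho(A) \le \parallel A \parallel$$ is at least easy to spot-check numerically for the usual natural norms (the matrix and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
rho = max(abs(np.linalg.eigvals(A)))      # spectral radius
for ord_ in (1, 2, np.inf):               # induced 1-, 2-, and infinity-norms
    assert rho <= np.linalg.norm(A, ord_) + 1e-12
```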


5.) Inequalities are Sharp
This inequality came about through the study of error in linear systems. To be specific, given a system $$Ax=b$$ we want to know what happens if the value of x changes by some error. That is, given an error e, we want to know the relative error with respect to the perturbed system $$A(x-e)=b-r$$. By subtraction, it follows that $$Ae=r$$.

To that end, we note four inequalities:

$$\begin{array}{lcl} Ax=b & \rightarrow & \parallel b \parallel \le \parallel A \parallel \parallel x \parallel \\ A^{-1}b=x & \rightarrow & \parallel x \parallel \le \parallel A^{-1} \parallel \parallel b \parallel \\ Ae=r & \rightarrow & \parallel r \parallel \le \parallel A \parallel \parallel e \parallel \\ A^{-1}r=e & \rightarrow & \parallel e \parallel \le \parallel A^{-1} \parallel \parallel r \parallel \end{array}$$

By multiplying together the 1st and 4th inequalities, we find:

$$\frac{\parallel e \parallel}{\parallel A \parallel \parallel x \parallel} \le \frac{\parallel A^{-1} \parallel \parallel r \parallel}{\parallel b \parallel}$$

Shuffle around the terms a bit:

$$\frac{\parallel e \parallel}{\parallel x \parallel} \le \parallel A \parallel \parallel A^{-1} \parallel \frac{ \parallel r \parallel}{\parallel b \parallel}$$

By multiplying together the 2nd and 3rd inequalities, we find:

$$\frac{\parallel r \parallel}{\parallel A^{-1} \parallel \parallel b \parallel} \le \frac{\parallel A \parallel \parallel e \parallel}{\parallel x \parallel}$$

Shuffle around the terms a bit:

$$\frac{1}{\parallel A \parallel \parallel A^{-1} \parallel }\frac{\parallel r \parallel}{ \parallel b \parallel} \le \frac{ \parallel e \parallel}{\parallel x \parallel}$$

And thus we arrive at the desired inequality:

$$\frac{1}{\parallel A \parallel \parallel A^{-1} \parallel }\frac{\parallel r \parallel}{ \parallel b \parallel} \le \frac{ \parallel e \parallel}{\parallel x \parallel} \le \parallel A \parallel \parallel A^{-1} \parallel \frac{ \parallel r \parallel}{\parallel b \parallel}$$

A simple system that achieves equality on all counts is this: Let $$A=\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}, x=\begin{bmatrix} 1 \\ 0 \end{bmatrix}, e=\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

It follows that $$b=\begin{bmatrix} 1 \\ 0 \end{bmatrix}, r=\begin{bmatrix} 0 \\ 1 \end{bmatrix}\ \mbox{and}\ \parallel A \parallel = \parallel A^{-1} \parallel = \parallel x \parallel = \parallel e \parallel = \parallel b \parallel = \parallel r \parallel = 1$$ for both the 2-norm and the $$\infty$$-norm, so every inequality above holds with equality.
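The two-sided bound can also be spot-checked on a random system; here is a minimal NumPy sketch (the matrix, perturbation size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # shifted to keep A well-conditioned
x = rng.standard_normal(4)
e = 1e-3 * rng.standard_normal(4)                 # a small error in x
b = A @ x
r = A @ e                                         # residual: Ae = r
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
rel_err = np.linalg.norm(e) / np.linalg.norm(x)
rel_res = np.linalg.norm(r) / np.linalg.norm(b)
# (1/kappa) * ||r||/||b||  <=  ||e||/||x||  <=  kappa * ||r||/||b||
assert rel_res / kappa - 1e-12 <= rel_err <= kappa * rel_res + 1e-12
```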

6.) A Symmetric Matrix Has Real Eigenvalues
First consider a general Hermitian matrix A. If  $$\lambda$$ is an eigenvalue of A with associated eigenvector x, then

$$Ax=\lambda x \,\!$$

So, where H indicates the transpose conjugate, and * the conjugate: $$(Ax)^H=(\lambda x)^H \,\!$$

$$x^H A^H=\lambda^{*} x^H \,\!$$

$$x^H A^H x=\lambda^{*} x^H x \,\!$$

Since A is Hermitian, $$A^H=A \,\!$$, so $$x^H A x=\lambda^{*} x^H x \,\!$$

Also, since $$Ax=\lambda x \,\!$$, $$x^H Ax=x^H \lambda x=\lambda x^H x \,\!$$

So we then have $$\lambda x^H x=\lambda^{*} x^H x \,\!$$. Since x is an eigenvector, $$x \ne 0 \,\!$$ and thus $$x^H x > 0 \,\!$$, so $$\lambda=\lambda^{*}$$, and thus $$\lambda\,\!$$ is real.

Now consider a real symmetric matrix B. Since the conjugate of a real matrix is the matrix itself, $$B^H={B^T}^{*}=B^T=B\,\!$$, so B is Hermitian, and thus has real eigenvalues.
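A quick numerical illustration (random symmetric matrix, arbitrary size and seed):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
B = (M + M.T) / 2                       # symmetrize an arbitrary real matrix
lam = np.linalg.eigvals(B)
# every eigenvalue of a real symmetric matrix is real (up to roundoff)
assert np.max(np.abs(np.imag(lam))) < 1e-10
```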

7.) The Gershgorin Theorem
Consider an $$n \times n$$ matrix A and $$1 \le p < n$$. Then let $$D^{(p)} \,\!$$ be the union of p Gershgorin circles with $$D^{(p)} \,\!$$ disjoint from $$D^{(q)} \,\!$$, the union of the remaining n-p Gershgorin circles. For $$0 \le \epsilon \le 1$$, let us define $$B(\epsilon)=(b_{ij}(\epsilon)) \in \mathbb{C}^{n \times n}$$ with

$$b_{ij}(\epsilon) = \begin{cases} a_{ii}, & \mbox{if }{i=j} \\ \epsilon a_{ij}, & \mbox{if }{i \ne j} \end{cases}$$

Then B(1)=A, and B(0) is the diagonal matrix whose diagonal entries are those of A. Therefore, each of the eigenvalues of B(0) is the center of a Gershgorin circle, so exactly p of the eigenvalues of B(0) lie in $$D^{(p)} \,\!$$. The eigenvalues of $$B(\epsilon) \,\!$$ are the zeros of its characteristic polynomial, whose coefficients are continuous functions of $$\epsilon$$. Thus, the zeros of the characteristic polynomial are also continuous functions of $$\epsilon$$. As $$\epsilon$$ goes from 0 to 1, the eigenvalues of $$B(\epsilon) \,\!$$ move along continuous paths in the complex plane, while the radii of the Gershgorin circles increase from 0 to their full size. Since p of the eigenvalues lie in $$D^{(p)} \,\!$$ when $$\epsilon=0$$, and $$D^{(p)} \,\!$$ remains disjoint from $$D^{(q)} \,\!$$ throughout, a continuous path cannot jump between the two disjoint regions, so the p eigenvalues must still lie in $$D^{(p)} \,\!$$ when $$\epsilon=1$$.
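The containment statement itself (every eigenvalue lies in the union of the Gershgorin disks) can be spot-checked numerically; the test matrix below is an arbitrary example with well-separated diagonal entries:

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.diag([10.0, -5.0, 0.0, 3.0]) + 0.3 * rng.standard_normal((4, 4))
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # off-diagonal absolute row sums
for lam in np.linalg.eigvals(A):
    # each eigenvalue lies in at least one Gershgorin disk
    assert any(abs(lam - c) <= r + 1e-12 for c, r in zip(centers, radii))
```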

8.) Block Matrices
$$ A=\begin{bmatrix} A_{{1,1}_{1,1}} & \cdots & A_{{1,1}_{1,l_{1}}} & A_{{1,2}_{1,1}} & \cdots & A_{{1,2}_{1,l_{2}}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ A_{{1,1}_{m_{1},1 }} & \cdots & A_{{1,1}_{m_{1},l_{1}}} & A_{{1,2}_{m_{1},1}} & \cdots & A_{{1,2}_{m_{1},l_{2}}} \\ A_{{2,1}_{1,1}} & \cdots & A_{{2,1}_{1,l_{1}}} & A_{{2,2}_{1,1}} & \cdots & A_{{2,2}_{1,l_{2}}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ A_{{2,1}_{m_{2},1}} & \cdots & A_{{2,1}_{m_{2},l_{1}}} & A_{{2,2}_{m_{2},1}} & \cdots & A_{{2,2}_{m_{2},l_{2}}} \end{bmatrix} $$

$$ B=\begin{bmatrix} B_{{1,1}_{1,1}} & \cdots & B_{{1,1}_{1,n_{1}}} & B_{{1,2}_{1,1}} & \cdots & B_{{1,2}_{1,n_{2}}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ B_{{1,1}_{l_{1},1 }} & \cdots & B_{{1,1}_{l_{1},n_{1}}} & B_{{1,2}_{l_{1},1}} & \cdots & B_{{1,2}_{l_{1},n_{2}}} \\ B_{{2,1}_{1,1}} & \cdots & B_{{2,1}_{1,n_{1}}} & B_{{2,2}_{1,1}} & \cdots & B_{{2,2}_{1,n_{2}}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ B_{{2,1}_{l_{2},1}} & \cdots & B_{{2,1}_{l_{2},n_{1}}} & B_{{2,2}_{l_{2},1}} & \cdots & B_{{2,2}_{l_{2},n_{2}}} \end{bmatrix} $$

So

$$ AB=\begin{bmatrix} \sum_{i=1}^{l_{1}} {A_{{1,1}_{1,i}}*B_{{1,1}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{1,i}}*B_{{2,1}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{1,1}_{1,i}}*B_{{1,1}_{i,n_{1}}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{1,i}}*B_{{2,1}_{i,n_{1}}}} & \sum_{i=1}^{l_{1}} {A_{{1,1}_{1,i}}*B_{{1,2}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{1,i}}*B_{{2,2}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{1,1}_{1,i}}*B_{{1,2}_{i,n_{2}}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{1,i}}*B_{{2,2}_{i,n_{2}}}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{l_{1}} {A_{{1,1}_{m_{1},i}}*B_{{1,1}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{m_{1},i}}*B_{{2,1}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{1,1}_{m_{1},i}}*B_{{1,1}_{i,n_{1}}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{m_{1},i}}*B_{{2,1}_{i,n_{1}}}} & \sum_{i=1}^{l_{1}} {A_{{1,1}_{m_{1},i}}*B_{{1,2}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{m_{1},i}}*B_{{2,2}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{1,1}_{m_{1},i}}*B_{{1,2}_{i,n_{2}}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{m_{1},i}}*B_{{2,2}_{i,n_{2}}}} \\ \sum_{i=1}^{l_{1}} {A_{{2,1}_{1,i}}*B_{{1,1}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{1,i}}*B_{{2,1}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{2,1}_{1,i}}*B_{{1,1}_{i,n_{1}}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{1,i}}*B_{{2,1}_{i,n_{1}}}} & \sum_{i=1}^{l_{1}} {A_{{2,1}_{1,i}}*B_{{1,2}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{1,i}}*B_{{2,2}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{2,1}_{1,i}}*B_{{1,2}_{i,n_{2}}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{1,i}}*B_{{2,2}_{i,n_{2}}}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{l_{1}} {A_{{2,1}_{m_{2},i}}*B_{{1,1}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{m_{2},i}}*B_{{2,1}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{2,1}_{m_{2},i}}*B_{{1,1}_{i,n_{1}}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{m_{2},i}}*B_{{2,1}_{i,n_{1}}}} & \sum_{i=1}^{l_{1}} {A_{{2,1}_{m_{2},i}}*B_{{1,2}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{m_{2},i}}*B_{{2,2}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{2,1}_{m_{2},i}}*B_{{1,2}_{i,n_{2}}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{m_{2},i}}*B_{{2,2}_{i,n_{2}}}} \\ \end{bmatrix} $$

Then note:

$$ A_{1,1}B_{1,1}+A_{1,2}B_{2,1}=\begin{bmatrix} \sum_{i=1}^{l_{1}} {A_{{1,1}_{1,i}}*B_{{1,1}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{1,i}}*B_{{2,1}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{1,1}_{1,i}}*B_{{1,1}_{i,n_{1}}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{1,i}}*B_{{2,1}_{i,n_{1}}}} \\ \vdots & \ddots & \vdots \\ \sum_{i=1}^{l_{1}} {A_{{1,1}_{m_{1},i}}*B_{{1,1}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{m_{1},i}}*B_{{2,1}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{1,1}_{m_{1},i}}*B_{{1,1}_{i,n_{1}}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{m_{1},i}}*B_{{2,1}_{i,n_{1}}}} \\ \end{bmatrix} $$

$$ A_{2,1}B_{1,1}+A_{2,2}B_{2,1}=\begin{bmatrix} \sum_{i=1}^{l_{1}} {A_{{2,1}_{1,i}}*B_{{1,1}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{1,i}}*B_{{2,1}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{2,1}_{1,i}}*B_{{1,1}_{i,n_{1}}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{1,i}}*B_{{2,1}_{i,n_{1}}}} \\ \vdots & \ddots & \vdots \\ \sum_{i=1}^{l_{1}} {A_{{2,1}_{m_{2},i}}*B_{{1,1}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{m_{2},i}}*B_{{2,1}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{2,1}_{m_{2},i}}*B_{{1,1}_{i,n_{1}}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{m_{2},i}}*B_{{2,1}_{i,n_{1}}}} \\ \end{bmatrix} $$

$$ A_{1,1}B_{1,2}+A_{1,2}B_{2,2}=\begin{bmatrix} \sum_{i=1}^{l_{1}} {A_{{1,1}_{1,i}}*B_{{1,2}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{1,i}}*B_{{2,2}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{1,1}_{1,i}}*B_{{1,2}_{i,n_{2}}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{1,i}}*B_{{2,2}_{i,n_{2}}}} \\ \vdots & \ddots & \vdots \\ \sum_{i=1}^{l_{1}} {A_{{1,1}_{m_{1},i}}*B_{{1,2}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{m_{1},i}}*B_{{2,2}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{1,1}_{m_{1},i}}*B_{{1,2}_{i,n_{2}}}}+\sum_{i=1}^{l_{2}} {A_{{1,2}_{m_{1},i}}*B_{{2,2}_{i,n_{2}}}} \\ \end{bmatrix} $$

$$ A_{2,1}B_{1,2}+A_{2,2}B_{2,2}=\begin{bmatrix} \sum_{i=1}^{l_{1}} {A_{{2,1}_{1,i}}*B_{{1,2}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{1,i}}*B_{{2,2}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{2,1}_{1,i}}*B_{{1,2}_{i,n_{2}}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{1,i}}*B_{{2,2}_{i,n_{2}}}} \\ \vdots & \ddots & \vdots \\ \sum_{i=1}^{l_{1}} {A_{{2,1}_{m_{2},i}}*B_{{1,2}_{i,1}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{m_{2},i}}*B_{{2,2}_{i,1}}} & \cdots & \sum_{i=1}^{l_{1}} {A_{{2,1}_{m_{2},i}}*B_{{1,2}_{i,n_{2}}}}+\sum_{i=1}^{l_{2}} {A_{{2,2}_{m_{2},i}}*B_{{2,2}_{i,n_{2}}}} \\ \end{bmatrix} $$

Finally, we arrive at:

$$ AB=\begin{bmatrix} A_{1,1}B_{1,1}+A_{1,2}B_{2,1} & A_{1,1}B_{1,2}+A_{1,2}B_{2,2} \\ A_{2,1}B_{1,1}+A_{2,2}B_{2,1} & A_{2,1}B_{1,2}+A_{2,2}B_{2,2} \\ \end{bmatrix} $$
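Block multiplication (each block of AB is a sum of block products) can be verified numerically with `np.block`; the block sizes below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(6)
m1, m2, l1, l2, n1, n2 = 2, 3, 4, 2, 3, 2
A11 = rng.standard_normal((m1, l1)); A12 = rng.standard_normal((m1, l2))
A21 = rng.standard_normal((m2, l1)); A22 = rng.standard_normal((m2, l2))
B11 = rng.standard_normal((l1, n1)); B12 = rng.standard_normal((l1, n2))
B21 = rng.standard_normal((l2, n1)); B22 = rng.standard_normal((l2, n2))
A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])
# assemble AB blockwise and compare against the ordinary product
blockwise = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                      [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
assert np.allclose(A @ B, blockwise)
```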

9.) More on Block Matrices
a.) False! Consider the following submatrices:

$$ A_{1,1}=\begin{bmatrix} 1 & 0 \\ 0 & 0 \\ \end{bmatrix} A_{2,1}=\begin{bmatrix} 0 & 0 \\ 0 & 1 \\ \end{bmatrix} A_{1,2}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} A_{2,2}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} $$

Obviously, $$det(A_{1,1})=det(A_{2,1})=0\,\!$$, so $$det(A_{1,1})*det(A_{2,2})-det(A_{1,2})*det(A_{2,1})=0\,\!$$. However, if you consider the whole matrix $$ A=\begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \\ \end{bmatrix}=\begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\  0 & 0 & 1 & 0 \\  0 & 1 & 0 & 1 \\ \end{bmatrix} $$

Then you see that $$det(A)=-1$$ by expansion along the 3rd row and then the 2nd row.
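Both computations are easy to confirm numerically:

```python
import numpy as np

A = np.array([[1, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 1]])
assert round(np.linalg.det(A)) == -1
# the determinants of the blocks, combined as the (false) claim suggests, give 0
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
claim = np.linalg.det(A11) * np.linalg.det(A22) - np.linalg.det(A12) * np.linalg.det(A21)
assert round(claim) == 0
```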

b.) Also false! Consider the following matrices:

$$ B=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{bmatrix} C=\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ \end{bmatrix} D=\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ \end{bmatrix} $$

Then $$ A=\begin{bmatrix} B & 0 \\ C & D \\ \end{bmatrix}=\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\  0 & 0 & 1 & 0 & 0 & 0 \\  0 & 0 & 0 & 1 & 0 & 0 \\ \end{bmatrix} $$

Now, $$rank(B)=2\,\!$$ and $$rank(D)=1\,\!$$, but $$rank(A)=4 \ne rank(B)+rank(D)=3\,\!$$
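Confirming the ranks numerically (the matrix is assembled exactly as displayed above, with a zero block in the top right):

```python
import numpy as np

B = np.array([[1, 0, 0], [0, 1, 0]])
C = np.array([[0, 0, 1], [0, 0, 0]])
D = np.array([[0, 0, 0], [1, 0, 0]])
A = np.block([[B, np.zeros((2, 3), dtype=int)], [C, D]])
assert np.linalg.matrix_rank(B) == 2
assert np.linalg.matrix_rank(D) == 1
assert np.linalg.matrix_rank(A) == 4   # not rank(B) + rank(D) = 3
```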

10.) Operation Count
The calculation for the diagonal entries of the Cholesky decomposition is:

$$l_{k,k}=\sqrt{a_{k,k}-\sum_{i=1}^{k-1} {l_{k,i}*l_{k,i}}}$$

The only place multiplications occur is in the sum - one for each term of the summation, so the entry $$l_{k,k}$$ contributes k-1 multiplications, for a total of $$\sum_{i=0}^{n-1}{i}=\frac{n*(n-1)}{2}$$ multiplications

The calculation for the entries below the diagonal is:

$$l_{i,k}=\frac{a_{i,k}-\sum_{j=1}^{k-1} {l_{i,j}*l_{k,j}}}{l_{k,k}}$$

To determine the number of multiplications here, first consider how many multiplications/divisions occur for an entry in the kth column. You will always have one division, plus one multiplication from each term of the sum, of which there are k-1, for k in total. So each entry below the diagonal in the first column will take one multiplication/division, and there are n-1 of these. Each entry in the second column will take two multiplications/divisions, and there are n-2 of them. Continuing onward yields this total number of multiplications/divisions in the non-diagonal entries: $$\sum_{i=1}^{n-1}{i*(n-i)}=n*\sum_{i=1}^{n-1}{i}-\sum_{i=1}^{n-1}{i^2}=\frac{n^2*(n-1)}{2}-\frac{n*(n-1)*(2*n-1)}{6}=\frac{n^3-n}{6}$$

So the total number of multiplications and divisions involved is:

$$\frac{n^3-n}{6}+\frac{n*(n-1)}{2}=\frac{n^3+3n^2-4n}{6}$$
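The closed form can be confirmed by instrumenting a straightforward column-by-column Cholesky routine and counting multiplications/divisions directly. The function below is an illustrative sketch (its name and the test matrices are hypothetical choices, not from the original):

```python
import numpy as np

def cholesky_mult_count(n):
    """Run a textbook Cholesky on an SPD matrix, counting mults/divs."""
    count = 0
    A = n * np.eye(n) + np.ones((n, n))   # symmetric positive definite test matrix
    L = np.zeros((n, n))
    for k in range(n):
        s = 0.0
        for i in range(k):                # k multiplications for l_{k,k}
            s += L[k, i] * L[k, i]; count += 1
        L[k, k] = np.sqrt(A[k, k] - s)
        for i in range(k + 1, n):         # below-diagonal entries of column k
            s = 0.0
            for j in range(k):            # k multiplications ...
                s += L[i, j] * L[k, j]; count += 1
            L[i, k] = (A[i, k] - s) / L[k, k]; count += 1   # ... plus 1 division
    assert np.allclose(L @ L.T, A)        # sanity check: the factorization is correct
    return count

for n in (1, 2, 3, 5, 8):
    assert cholesky_mult_count(n) == (n**3 + 3 * n**2 - 4 * n) // 6
```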