User:BRousselet/abouteigenvalues


About eigenvalue perturbation

 * $$\mathbf{K}_0 \delta\mathbf{x}_i+ \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0 \delta \mathbf{x}_i + \lambda_{0i}\delta \mathbf{M} \mathbf{x}_{0i} + \delta \lambda_i \mathbf{M}_0\mathbf{x}_{0i}. \qquad(3)$$

We left multiply with $$\mathbf{x}_{0i}^T $$ and use (2) as well as its first order variation $$ \delta\mathbf{x}_j^T \mathbf{M}_0 \mathbf{x}_{0i} + \mathbf{x}_{0j}^T \mathbf{M}_0 \delta \mathbf{x}_{i} + \mathbf{x}_{0j}^T \delta \mathbf{M} \mathbf{x}_{0i}=0$$ to get
 * $$ \mathbf{x}_{0i}^T \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^T\delta \mathbf{M} \mathbf{x}_{0i} + \delta \lambda_i $$

or
 * $$ \delta \lambda_i=\mathbf{x}_{0i}^T \delta \mathbf{K} \mathbf{x}_{0i} -\lambda_{0i} \mathbf{x}_{0i}^T\delta \mathbf{M} \mathbf{x}_{0i} $$

We notice that this is the first order perturbation of the generalized Rayleigh quotient: $$R(K,M;x_{0i})=x_{0i}^T K x_{0i}/x_{0i}^TMx_{0i}, \text{ with } x_{0i}^TMx_{0i}=1. $$
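The first order formula for $$ \delta \lambda_i $$ can be checked numerically. The following sketch (an illustration, not part of the derivation) assumes a symmetric $$K$$, a symmetric positive definite $$M$$, simple eigenvalues, and uses `scipy.linalg.eigh`, which solves the generalized problem and returns $$M$$-orthonormal eigenvectors:

```python
import numpy as np
from scipy.linalg import eigh

# Random symmetric K0 and SPD M0, plus small symmetric perturbations.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); K0 = A + A.T
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)
e = 1e-6
C = rng.standard_normal((n, n)); dK = e * (C + C.T)
D = rng.standard_normal((n, n)); dM = e * (D + D.T)

# eigh solves K x = lambda M x; eigenvectors satisfy X0.T @ M0 @ X0 = I.
lam0, X0 = eigh(K0, M0)
lam1, _ = eigh(K0 + dK, M0 + dM)

i = 2
x0i = X0[:, i]
# First order prediction: delta_lambda_i = x0i^T dK x0i - lam0i x0i^T dM x0i.
dlam_pred = x0i @ dK @ x0i - lam0[i] * (x0i @ dM @ x0i)
err = abs(dlam_pred - (lam1[i] - lam0[i]))  # residual should be O(e^2)
```

The residual `err` is second order in the perturbation size `e`, consistent with a first order expansion.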

Eigenvector perturbation
We left multiply (3) with $$ x_{0j}^T $$ for $$ j \neq i $$ and get
 * $$\mathbf{x}_{0j}^T\mathbf{K}_0 \delta\mathbf{x}_i+ \mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0j}^T \mathbf{M}_0 \delta \mathbf{x}_i + \lambda_{0i} \mathbf{x}_{0j}^T\delta \mathbf{M} \mathbf{x}_{0i} + \delta \lambda_i \mathbf{x}_{0j}^T\mathbf{M}_0\mathbf{x}_{0i}. $$

We use $$ \mathbf{x}_{0j}^T \mathbf{K}_0=\lambda_{0j} \mathbf{x}_{0j}^T\mathbf{M}_0 \text{ and } \mathbf{x}_{0j}^T\mathbf{M}_0\mathbf{x}_{0i}=0 $$ for $$ j \neq i $$.


 * $$\lambda_{0j} \mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i+ \mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0j}^T \mathbf{M}_0 \delta \mathbf{x}_i + \lambda_{0i} \mathbf{x}_{0j}^T\delta \mathbf{M} \mathbf{x}_{0i} . $$

or
 * $$(\lambda_{0j}-\lambda_{0i}) \mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i+ \mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0j}^T\delta \mathbf{M} \mathbf{x}_{0i}  . $$

As the eigenvalues are simple, for $$ j \neq i $$
 * $$ \epsilon_{ij}=\mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i =\frac{-\mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} + \lambda_{0i} \mathbf{x}_{0j}^T\delta \mathbf{M} \mathbf{x}_{0i}}{ \lambda_{0j}-\lambda_{0i}}, \quad i=1, \dots n; \; j=1, \dots n; \; j \neq i. $$

Moreover, the first order variation of the normalization condition $$ \mathbf{x}_{0i}^T \mathbf{M}_0 \mathbf{x}_{0i}=1 $$ yields $$ 2 \epsilon_{ii}=2 \mathbf{x}_{0i}^T \mathbf{M}_0 \delta \mathbf{x}_i=-\mathbf{x}_{0i}^T \delta \mathbf{M} \mathbf{x}_{0i} .$$ We have obtained all the components of $$ \delta \mathbf{x}_i $$.
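As a sanity check, the coefficients $$ \epsilon_{ij} $$ can be assembled numerically and compared with the true eigenvector shift. This is only a sketch (random symmetric matrices, `scipy.linalg.eigh`, and a sign fix since a computed eigenvector is defined up to sign):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); K0 = A + A.T
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)
e = 1e-6
C = rng.standard_normal((n, n)); dK = e * (C + C.T)
D = rng.standard_normal((n, n)); dM = e * (D + D.T)

lam0, X0 = eigh(K0, M0)                  # X0.T @ M0 @ X0 = I
lam1, X1 = eigh(K0 + dK, M0 + dM)

i = 1
x0i = X0[:, i]
dx_pred = np.zeros(n)
for j in range(n):
    x0j = X0[:, j]
    if j == i:
        eps_ij = -0.5 * (x0i @ dM @ x0i)     # 2 eps_ii = -x0i^T dM x0i
    else:
        eps_ij = (-(x0j @ dK @ x0i) + lam0[i] * (x0j @ dM @ x0i)) \
                 / (lam0[j] - lam0[i])
    dx_pred += eps_ij * x0j                  # delta x_i = sum_j eps_ij x0j

x1 = X1[:, i]
if x1 @ M0 @ x0i < 0:                        # fix the sign ambiguity
    x1 = -x1
err = np.linalg.norm((x1 - x0i) - dx_pred)   # residual should be O(e^2)
```

Note the division by $$ \lambda_{0j}-\lambda_{0i} $$: the check is only meaningful when the eigenvalues are simple and well separated.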

Perturbation of an implicit function.
In the next paragraph, we shall use the Implicit function theorem (Statement of the theorem). We notice that for a continuously differentiable function $$f:\R^{n+m} \to \R^m, \; f: (x,y) \mapsto f(x,y)$$, with an invertible Jacobian $$ J_{f,y}(x_0,y_0) $$, from a point $$ (x_0,y_0) $$ solution of $$f(x_0,y_0)=0 $$, we get solutions of $$f(x,y)=0 $$ with $$ x$$ close to $$ x_0$$ in the form $$ y=g(x)$$, where $$ g$$ is a continuously differentiable function. Moreover, the Jacobian of $$ g $$ is provided by the linear system $$ J_{f,y}(x,g(x)) J_{g,x}(x)+J_{f,x}(x,g(x))=0. \quad (1) $$ As soon as the hypotheses of the theorem are satisfied, the Jacobian of $$ g $$ may be computed with a first order expansion of $$ f(x_0+ \delta x, y_0+\delta y)=0 $$; we get

$$ J_{f,x}(x,g(x)) \delta x+ J_{f,y}(x,g(x))\delta y=0 $$; as $$\delta y=J_{g,x}(x) \delta x $$, it is equivalent to equation $$ (1) $$.
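As a toy illustration (not from the eigenvalue setting), take $$f(x,y)=x^2+y^2-1$$, which defines $$ y=g(x)=\sqrt{1-x^2}$$ near $$(x_0,y_0)=(0.6,0.8)$$; solving the linear system $$ (1) $$ reproduces $$ g'(x_0) $$:

```python
import numpy as np

# f(x, y) = x^2 + y^2 - 1 = 0 defines y = g(x) = sqrt(1 - x^2) near (0.6, 0.8).
x0, y0 = 0.6, 0.8
Jfx = 2 * x0                      # J_{f,x} at (x0, y0)
Jfy = 2 * y0                      # J_{f,y} at (x0, y0), invertible since y0 != 0
Jg_formula = -Jfx / Jfy           # solve equation (1): J_{f,y} J_g + J_{f,x} = 0

# Finite-difference check against g(x) = sqrt(1 - x^2).
h = 1e-6
g = lambda x: np.sqrt(1 - x**2)
Jg_fd = (g(x0 + h) - g(x0 - h)) / (2 * h)
err = abs(Jg_formula - Jg_fd)
```

Here `Jg_formula` equals $$-x_0/y_0 = -0.75$$, matching the finite difference up to discretization error.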

Eigenvalue perturbation: theoretical basis
We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation; we introduce $$ \tilde{f}: \R^{2n^2}  \times \R^{n+1} \to \R^{n+1}$$, with $$f(K,M, \lambda,x) =Kx -\lambda Mx, \; f_{n+1}(M,x)=x^T Mx -1.$$ In order to use the Implicit function theorem, we study the invertibility of the Jacobian $$ J_{\tilde{f};\lambda,x} (K,M;\lambda_{0i},x_{0i})$$ with
 * $$ \tilde{f} (K,M, \lambda,x)= \binom{f(K,M,\lambda,x)}{f_{n+1}(M,x)}$$ and

$$ J_{\tilde{f};\lambda,x} (K,M;\lambda_{0i},x_{0i})(\delta \lambda,\delta x)=\binom{-Mx_{0i}}{0} \delta \lambda +\binom{K-\lambda_{0i} M}{2 x_{0i}^T M} \delta x$$. Indeed, the solution of

$$J_{\tilde{f};\lambda,x} (K,M;\lambda_{0i},x_{0i})(\delta \lambda_i,\delta x_i)=$$$$\binom{y}{y_{n+1}} $$ is

$$ \delta \lambda_i= -x_{0i}^T y, \; \text{ and } (\lambda_{0j}-\lambda_{0i})x_{0j}^T M \delta x_i=x_{0j}^T y, \; j=1, \dots, n, \; j \neq i; $$ $$ \text{ or } x_{0j}^T M \delta x_i=x_{0j}^T y/(\lambda_{0j}-\lambda_{0i}), \text{ and } \; 2x_{0i}^TM \delta x_i=y_{n+1}. $$

When $$ \lambda_{0i}$$ is a simple eigenvalue, as the eigenvectors $$x_{0j}, \; j=1, \dots,n, $$ form an $$\mathbf{M}_0$$-orthonormal basis, we have obtained one solution for any right-hand side; therefore, the Jacobian is invertible.

The implicit function theorem provides a continuously differentiable function $$ (K,M) \mapsto (\lambda_i(K,M), x_i(K,M))$$, hence the expansion with little o notation: $$ \lambda_i=\lambda_{0i}+ \delta \lambda_i +o(\| \delta K \|+\|\delta M \|),$$ $$ x_i=x_{0i}+ \delta x_i +o(\| \delta K \|+\|\delta M \|),$$ with

$$ \delta \lambda_i=\mathbf{x}_{0i}^T \delta \mathbf{K} \mathbf{x}_{0i} -\lambda_{0i} \mathbf{x}_{0i}^T\delta \mathbf{M} \mathbf{x}_{0i};$$ $$ \delta \mathbf{x}_i=\sum_{j=1}^n \epsilon_{ij}\, \mathbf{x}_{0j} \text{ with } $$$$ \epsilon_{ij}=\mathbf{x}_{0j}^T\mathbf{M}_0 \delta\mathbf{x}_i =\frac{-\mathbf{x}_{0j}^T \delta \mathbf{K} \mathbf{x}_{0i} +  \lambda_{0i} \mathbf{x}_{0j}^T\delta \mathbf{M} \mathbf{x}_{0i}}{ \lambda_{0j}-\lambda_{0i}}, \; j \neq i, \quad \epsilon_{ii}=-\tfrac12 \mathbf{x}_{0i}^T\delta \mathbf{M} \mathbf{x}_{0i}, \quad i=1, \dots n.$$

Eigenvalue sensitivity, a small example
A simple case is $$K=\begin{bmatrix} 2 & b \\ b & 0 \end{bmatrix}$$ with $$M$$ the identity; you can compute eigenvalues and eigenvectors with the help of online tools such as WIMS or SageMath. The smallest eigenvalue is $$\lambda=- \left [\sqrt{ b^2+1} -1 \right]$$ and an explicit computation gives $$\frac{\partial \lambda}{\partial b}=\frac{-b}{\sqrt{b^2+1}}$$; moreover, an associated eigenvector is $$\tilde x_0=[b,-(\sqrt{b^2+1}+1)]^T$$; it is not a unit vector, so $$x_{01}x_{02} = \tilde x_{01} \tilde x_{02}/\| \tilde x_0 \|^2$$. We get $$\| \tilde x_0 \|^2=2 \sqrt{b^2+1}(\sqrt{b^2+1}+1)$$ and $$\tilde x_{01} \tilde x_{02} =-b (\sqrt{b^2+1}+1)$$; hence $$x_{01} x_{02}=-\frac{b}{2 \sqrt{b^2+1}}$$. Since here $$\delta K=\begin{bmatrix} 0 & \delta b \\ \delta b & 0 \end{bmatrix}$$ gives $$x_0^T \delta K\, x_0 = 2 x_{01} x_{02} \delta b$$, we have checked for this example that $$\frac{\partial \lambda}{\partial b}= 2x_{01} x_{02}$$, or $$ \delta \lambda=2x_{01} x_{02} \delta b$$.
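The sensitivity claim for this 2×2 matrix can also be verified numerically; a minimal sketch using `numpy.linalg.eigh` (which returns eigenvalues in ascending order) and a central finite difference:

```python
import numpy as np

def smallest_eig(b):
    """Smallest eigenvalue and its eigenvector for K = [[2, b], [b, 0]]."""
    lam, X = np.linalg.eigh(np.array([[2.0, b], [b, 0.0]]))
    return lam[0], X[:, 0]        # eigh sorts eigenvalues in ascending order

b = 0.7
lam, x0 = smallest_eig(b)
# Closed form from the text: lambda = -(sqrt(b^2 + 1) - 1).
closed_form_gap = abs(lam - (1 - np.sqrt(b**2 + 1)))
# Perturbation formula: d(lambda)/db = x0^T (dK/db) x0 = 2 x01 x02,
# a sign-invariant quantity, so the solver's sign convention does not matter.
dldb_formula = 2 * x0[0] * x0[1]
h = 1e-6
dldb_fd = (smallest_eig(b + h)[0] - smallest_eig(b - h)[0]) / (2 * h)
err = abs(dldb_formula - dldb_fd)
```

Both the closed-form eigenvalue and the sensitivity $$-b/\sqrt{b^2+1}$$ are reproduced to numerical precision.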

Miscellaneous
The full first order expansion of $$ (\mathbf{K}_0+\delta \mathbf{K})(\mathbf{x}_{0i}+\delta \mathbf{x}_i) = (\lambda_{0i}+\delta\lambda_i)(\mathbf{M}_0+\delta \mathbf{M})(\mathbf{x}_{0i}+\delta \mathbf{x}_i) $$ reads
$$\begin{align} \mathbf{K}_0\mathbf{x}_{0i} &+ \delta \mathbf{K}\mathbf{x}_{0i} + \mathbf{K}_0\delta \mathbf{x}_i + \delta \mathbf{K}\delta \mathbf{x}_i = \\[6pt] & \lambda_{0i}\mathbf{M}_0\mathbf{x}_{0i}+\lambda_{0i}\mathbf{M}_0\delta\mathbf{x}_i + \lambda_{0i} \delta \mathbf{M} \mathbf{x}_{0i} +\delta\lambda_i\mathbf{M}_0\mathbf{x}_{0i} \\ [12pt] & + \lambda_{0i} \delta \mathbf{M} \delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M}\mathbf{x}_{0i} + \delta\lambda_i\mathbf{M}_0\delta\mathbf{x}_i + \delta\lambda_i \delta \mathbf{M} \delta\mathbf{x}_i. \end{align}$$



Example 2
Suppose $$[K_0]$$ is the 2×2 identity matrix; then any vector is an eigenvector, and $$u_0=[1, 1]^T/\sqrt{2}$$ is one possible eigenvector. But if one makes a small perturbation, such as

$$[K] = [K_0] + \begin{bmatrix}\epsilon & 0 \\0 & 0 \end{bmatrix} $$

then the eigenvectors are $$v_1=[1, 0]^T$$ (for the eigenvalue $$1+\epsilon$$) and $$v_2=[0, 1]^T$$; they are constant with respect to $$ \epsilon $$, so that $$ \|u_0-v_1 \| $$ is constant and does not go to zero as $$ \epsilon \to 0 $$.
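This non-convergence is easy to observe numerically; a small sketch with `numpy.linalg.eigh`, fixing the eigenvector's sign so the comparison is meaningful:

```python
import numpy as np

u0 = np.array([1.0, 1.0]) / np.sqrt(2)    # chosen eigenvector of K0 = I
dists = []
for eps in [1e-2, 1e-4, 1e-8]:
    K = np.eye(2) + np.diag([eps, 0.0])   # perturbed matrix [K]
    lam, V = np.linalg.eigh(K)
    v1 = V[:, 1]                          # eigenvector of the eigenvalue 1 + eps
    if v1[0] < 0:
        v1 = -v1                          # fix sign: v1 = [1, 0]
    dists.append(np.linalg.norm(u0 - v1))
# The distance stays at sqrt(2 - sqrt(2)) ~ 0.765 however small eps is.
```

The distance does not shrink with $$ \epsilon $$, illustrating that eigenvector perturbation breaks down at a repeated eigenvalue.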