User:Eml5526.s11.team2.oztekin/HW2

= Problem 2.2 =

Given
$$\left\{ {{\underline{a}}_{i}},i=1,2,\ldots,n \right\}$$ is an orthonormal basis.

i.e. $${{\underline{a}}_{i}}.{{\underline{a}}_{j}}={{\delta }_{ij}}$$

$$\left[ {{b}_{jk}} \right]=\left( \begin{matrix}  1 & 1 & 1  \\   2 & -1 & 3  \\   3 & 2 & 6  \\ \end{matrix} \right)$$

Vectors $$ \displaystyle {{\underline{b}}_{j}}$$ are basis vectors, and we can express these basis vectors in terms of the orthonormal basis vectors. These basis vectors were presented in meeting 7-3.

$${{\underline{b}}_{j}}={{b}_{jk}}{{\underline{a}}_{k}}$$

i.e.

$$\begin{align} & {{\underline{b}}_{1}}={{\underline{a}}_{1}}+{{\underline{a}}_{2}}+{{\underline{a}}_{3}} \\ & {{\underline{b}}_{2}}=2{{\underline{a}}_{1}}-{{\underline{a}}_{2}}+3{{\underline{a}}_{3}} \\ & {{\underline{b}}_{3}}=3{{\underline{a}}_{1}}+2{{\underline{a}}_{2}}+6{{\underline{a}}_{3}} \\ \end{align}$$

also

$$\underline{v}=5{{\underline{a}}_{1}}-7{{\underline{a}}_{2}}-4{{\underline{a}}_{3}}$$

Find
1) Find $$\det \left[ {{b}_{jk}} \right]$$

2) Find $$\underline{\Gamma }\left( {{\underline{b}}_{1}},{{\underline{b}}_{2}},{{\underline{b}}_{3}} \right)$$, which is also equal to $$\underline{K}$$; also find $$\det \left[ \underline{\Gamma } \right]$$

3) Solve $$\underline{F}=\left\{ {{F}_{i}} \right\}=\left\{ {{\underline{b}}_{i}}.\underline{v} \right\}$$

4) Solve $$\underline K \underline d = \underline F $$ for $$\underline d = \left\{ {{d}_{j}} \right\}$$

5) Use $${\underline w _i}.\underline{\underline P} \left( {\underline v } \right) = 0$$ $$\forall i = 1,2,\ldots,n$$ to find $$ \underline {\overline K } \underline d = \underline {\overline F } $$. What will $$\underline {\overline K } $$ and $$\underline {\overline F } $$ be for this case?

6) Solve for $$\underline d $$ for this case and compare to the value calculated in 4).

7) Observe the symmetric properties of $$\underline K $$ and $$\underline {\overline K } $$. Discuss the pros and cons of two methods.

Background Theory
$$\left\{ {{\underline{b}}_{i}},i=1,2,\ldots,n \right\}$$ is a basis for $${{\mathbb{R}}^{n}}$$ that is not orthonormal.

$$\underline{v}$$ is any vector, such that


$$ \displaystyle \underline v = \sum\limits_{j = 1}^n {{{\underline b }_j}{v_j}} $$     (2.1)

$$ \displaystyle \sum\limits_{j = 1}^n {{{\underline b }_j}{v_j}} - \underline v = 0 $$     (2.2)

$$ \displaystyle \underline{\underline P} \left( {\underline v } \right) = 0 $$     (2.3)

$$ \displaystyle \underline{\underline P} \left( {\underline v } \right) = \sum\limits_{j = 1}^n {{{\underline b }_j}{v_j}} - \underline v $$     (2.4)

Successively multiplying by $${\underline b _i}\left( {i = 1,2,\ldots,n} \right)$$, n equations in n unknowns are formed:

$$ \displaystyle {\underline b _i}.\underline{\underline P} \left( {\underline v } \right) = 0 $$     (2.5)

$$ \displaystyle {\underline b _i}.\sum\limits_{j = 1}^n {{{\underline b }_j}{v_j}} = {\underline b _i}.\underline v $$     (2.6)

$$ \displaystyle \sum\limits_{j = 1}^n {{{\underline b }_i}.{{\underline b }_j}{v_j}} = {\underline b _i}.\underline v $$     (2.7)

$$ \displaystyle {\left[ {{K_{ij}}} \right]_{n \times n}}{\left\{ {{v_j}} \right\}_{n \times 1}} = {\left\{ {{F_i}} \right\}_{n \times 1}} $$     (2.8)

$$ \displaystyle \underline K \underline d = \underline F $$     (2.9)

Solution
1) Using MATLAB to find the determinant: $$\det \left[ {{b}_{jk}} \right] = -8$$
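The same determinant can be checked with a short script. This is a sketch in Python/NumPy rather than the MATLAB used in the original work; the matrix values come from the problem statement.

```python
import numpy as np

# Rows of B are the components b_jk of the basis vectors b_j
# in the orthonormal basis {a_k}, taken from the given [b_jk].
B = np.array([[1.0,  1.0, 1.0],
              [2.0, -1.0, 3.0],
              [3.0,  2.0, 6.0]])

det_B = np.linalg.det(B)
print(det_B)  # ~ -8.0: non-zero, so the b_j are linearly independent
```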

2) Elements of the stiffness matrix $$\underline K $$ are given by equations (2.7), (2.8) & (2.9) and shown in the following matrix.

$$ \displaystyle \underline K = \left( {\begin{array}{ccc}  {{{\underline b }_1}.{{\underline b }_1}}&{{{\underline b }_1}.{{\underline b }_2}}&{{{\underline b }_1}.{{\underline b }_3}} \\   {{{\underline b }_2}.{{\underline b }_1}}&{{{\underline b }_2}.{{\underline b }_2}}&{{{\underline b }_2}.{{\underline b }_3}} \\   {{{\underline b }_3}.{{\underline b }_1}}&{{{\underline b }_3}.{{\underline b }_2}}&{{{\underline b }_3}.{{\underline b }_3}} \end{array}} \right) $$     (2.10)

$$ \displaystyle \underline K = \left( {\begin{array}{ccc}  3&4&{11} \\   4&{14}&{22} \\   {11}&{22}&{49} \end{array}} \right) $$

$$ \displaystyle \det \left( {\underline K } \right) = 64 $$     (2.11)
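Because the $$a_k$$ are orthonormal, each entry $${{\underline b }_i}.{{\underline b }_j}$$ reduces to a dot product of coefficient rows, so the whole Gram (stiffness) matrix can be formed at once. A NumPy sketch (the original work used MATLAB):

```python
import numpy as np

# Rows of B are the b_j components in the orthonormal a-basis.
B = np.array([[1.0,  1.0, 1.0],
              [2.0, -1.0, 3.0],
              [3.0,  2.0, 6.0]])

# With an orthonormal basis, b_i . b_j = (row i of B) . (row j of B),
# so the Gram/stiffness matrix is K = B B^T.
K = B @ B.T
print(K)                  # [[3 4 11], [4 14 22], [11 22 49]]
print(np.linalg.det(K))   # ~ 64 = det(B)^2
```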

3) As shown in equations (2.7) & (2.8),

$$\underline F = \left\{ {{F_i}} \right\} = \left\{ {{{\underline b }_i}.\underline v } \right\}$$

$$ \displaystyle \left( {\begin{array}{c}  {{F_1}} \\   {{F_2}} \\   {{F_3}} \end{array}} \right) = \left( {\begin{array}{c}  {{{\underline b }_1}.\underline v } \\   {{{\underline b }_2}.\underline v } \\   {{{\underline b }_3}.\underline v } \end{array}} \right) $$     (2.12)

$$ \displaystyle \underline F = \left( {\begin{array}{c}  { - 6} \\   5 \\   { - 23} \end{array}} \right) $$
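In the a-basis, each $${{\underline b }_i}.\underline v$$ is again a row-times-vector product, so the whole load vector comes out in one line. A NumPy sketch (not the original MATLAB):

```python
import numpy as np

B = np.array([[1.0,  1.0, 1.0],
              [2.0, -1.0, 3.0],
              [3.0,  2.0, 6.0]])
v = np.array([5.0, -7.0, -4.0])  # components of v in the a-basis

# F_i = b_i . v, all at once: F = B v.
F = B @ v
print(F)  # [-6. 5. -23.]
```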

4) To find the value of $$\underline d$$


$$ {\underline K ^{ - 1}} = \left( {\begin{array}{ccc}  {3.1562}&{0.7187}&{ - 1.0312} \\   {0.7187}&{0.4062}&{ - 0.3437} \\   { - 1.0312}&{ - 0.3437}&{0.4062} \end{array}} \right) $$

$$ \displaystyle \underline d = {\underline K ^{ - 1}}\underline F $$     (2.13)

$$ \displaystyle \underline d = \left( {\begin{array}{c}  {8.375} \\   {5.625} \\   { - 4.875} \end{array}} \right) $$
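Rather than forming $$\underline K^{-1}$$ explicitly, the system can be solved directly, which is numerically preferable. A NumPy sketch using the matrices computed above:

```python
import numpy as np

K = np.array([[3.0,  4.0, 11.0],
              [4.0, 14.0, 22.0],
              [11.0, 22.0, 49.0]])
F = np.array([-6.0, 5.0, -23.0])

# Solve K d = F directly (LU factorization) instead of inverting K.
d = np.linalg.solve(K, F)
print(d)  # [ 8.375  5.625 -4.875]
```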

5) Derivation of $$\underline {\overline K } \underline d = \underline {\overline F } $$, where $$\left\{ {{{\underline w }_i},i = 1,2,\ldots,n} \right\}$$ is a linearly independent family of vectors. Consider $${{\underline w}_i} = {{\underline a}_i}$$, where $$\left\{ {{{\underline a }_i},i = 1,2,\ldots,n} \right\}$$ is an orthonormal basis.


$$ \displaystyle {\underline w _i}.\underline{\underline P} \left( {\underline v } \right) = 0 $$     (2.14)

Putting in the value of $$\underline{\underline{P}}\left( \underline v \right)$$ from equation (2.4):

$$ \displaystyle {\underline a _i}.\sum\limits_{j = 1}^n {{{\underline b }_j}{v_j}} = {\underline a _i}.\underline v $$     (2.15)

$$ \displaystyle \sum\limits_{j = 1}^n {{{\underline a }_i}.{{\underline b }_j}{v_j}} = {\underline a _i}.\underline v $$     (2.16)

$$ \displaystyle {\left[ {{{\overline K }_{ij}}} \right]_{n \times n}}{\left\{ {{v_j}} \right\}_{n \times 1}} = {\left\{ {{{\overline F }_i}} \right\}_{n \times 1}} $$     (2.17)

$$ \displaystyle \underline {\overline K } \underline d = \underline {\overline F } $$     (2.18)

Where $$\underline{\overline{K}}$$ and $$\underline{\overline{F}}$$ are matrices having the following elements.


$$ \displaystyle \underline {\overline K } = \left( {\begin{array}{ccc}  {{{\underline a }_1}.{{\underline b }_1}}&{{{\underline a }_1}.{{\underline b }_2}}&{{{\underline a }_1}.{{\underline b }_3}} \\   {{{\underline a }_2}.{{\underline b }_1}}&{{{\underline a }_2}.{{\underline b }_2}}&{{{\underline a }_2}.{{\underline b }_3}} \\   {{{\underline a }_3}.{{\underline b }_1}}&{{{\underline a }_3}.{{\underline b }_2}}&{{{\underline a }_3}.{{\underline b }_3}} \end{array}} \right) $$     (2.19)

$$ \displaystyle \overline {\underline F } = \left( {\begin{array}{c}  {{{\underline a }_1}.\underline v } \\   {{{\underline a }_2}.\underline v } \\   {{{\underline a }_3}.\underline v } \end{array}} \right) $$     (2.20)

$$ \displaystyle \underline {\overline K } = \left( {\begin{array}{ccc}  1&2&3 \\   1&{ - 1}&2 \\   1&3&6 \end{array}} \right) $$

$$ \displaystyle \overline{\underline{F}}=\left( \begin{matrix}  5  \\   -7  \\   -4  \\ \end{matrix} \right) $$

6) To find $$\underline d $$


$$ \displaystyle \underline d = {\underline {\overline K } ^{ - 1}}\underline {\overline F } $$     (2.21)

$$ {{\overline{\underline{K}}}^{-1}}=\left( \begin{matrix}  1.500 & 0.375 & -0.875  \\   0.500 & -0.375 & -0.125  \\   -0.500 & -0.125 & 0.375  \\ \end{matrix} \right) $$

$$ \displaystyle \underline d = \left( {\begin{array}{c}  {8.375} \\   {5.625} \\   { - 4.875} \end{array}} \right) $$
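The two formulations can be compared numerically. In this sketch (Python/NumPy, assuming the matrices derived above), the Petrov-Galerkin matrix is taken as $$\underline{\overline K} = [b_{jk}]^T$$ and $$\underline{\overline F}$$ as the a-components of v, per equations (2.19) and (2.20):

```python
import numpy as np

B = np.array([[1.0,  1.0, 1.0],
              [2.0, -1.0, 3.0],
              [3.0,  2.0, 6.0]])
v = np.array([5.0, -7.0, -4.0])

# Petrov-Galerkin with w_i = a_i: Kbar_ij = a_i . b_j, i.e. Kbar = B^T,
# and Fbar_i = a_i . v, i.e. the components of v themselves.
K_bar = B.T
F_bar = v
d_pg = np.linalg.solve(K_bar, F_bar)

# Bubnov-Galerkin for comparison: K = B B^T, F = B v.
d_bg = np.linalg.solve(B @ B.T, B @ v)
print(d_pg, d_bg)  # both [ 8.375  5.625 -4.875]
```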

The value of $$\underline d$$ is the same irrespective of the method adopted to calculate it.

7) Pros & Cons of Bubnov-Galerkin method

Pros:-
 * The matrix $$\underline{K}$$ is a symmetric matrix.


 * The stiffness matrix is the same as the Gram matrix, so we can check the linear independence of our basis vectors directly from it.

Cons:-
 * Calculations become more involved.

Pros & Cons of Petrov-Galerkin method

Pros:-
 * It is clearly seen that the transpose of the matrix of basis-vector components is equal to the Petrov-Galerkin stiffness matrix, $${{\left[ {{b}_{jk}} \right]}^{T}}=\underline{\overline{K}}$$, so it is easy to set up the stiffness matrix in this method.

Cons:-
 * The matrix $$\underline{\overline{K}}$$ is not a symmetric matrix.


 * In addition to the solution, we have to construct the Gram matrix if we are not sure about the linear independence of our basis vectors.

= Problem 2.3 =

Given
The equivalence validation was asked for in meeting 8-3 as a second case where there are orthonormal vectors. The first case is given as:
$$ \displaystyle \underline{w}=\sum\limits_{i}{{{\alpha }_{i}}{{\underline{a}}_{i}}}\quad \forall \left\{ {{\alpha }_{1}},\ldots,{{\alpha }_{n}} \right\}\in {{\mathbb{R}}^{n}} $$     (3.1)

Second case:

$$ \displaystyle \underline{w}=\sum\limits_{i}{{{b}_{i}}{{\underline{a}}_{i}}}\quad \forall \left\{ {{b}_{1}},\ldots,{{b}_{n}} \right\}\in {{\mathbb{R}}^{n}} $$     (3.2)

Where $$ \displaystyle a_{i} $$'s are orthonormal basis functions. We can identify orthonormal basis functions as:


$$ \displaystyle {{\underline{a}}_{i}}.{{\underline{a}}_{j}}={{\delta }_{ij}} $$     (3.3)

$$ \displaystyle {{\delta }_{ij}}=\left\{ \begin{matrix} 1 & \text{for} & i=j \\ 0 & \text{for} & i\ne j \\ \end{matrix} \right. $$     (3.4)

The operator $$\displaystyle \underline{\underline{P}}(\underline{v})$$ was defined as Eq. (4) in meeting 7-2. This operator leads us to the stiffness matrix, the unknown vector and the force vector.
$$ \displaystyle \underline{\underline{P}}(\underline{v}):=\sum\limits_{j=1}^{n}{{{\underline{b}}_{j}}{{v}_{j}}}-\underline{v}=0 $$     (3.5)

$$ \displaystyle {{\underline{b}}_{i}}.\underline{\underline{P}}(\underline{v})=0 $$     (3.6)

$$ \displaystyle {{\underline{b}}_{i}}.\sum\limits_{j}{{{\underline{b}}_{j}}{{v}_{j}}}={{\underline{b}}_{i}}.\underline{v} $$     (3.7)

$$ \displaystyle {{\left[ {{K}_{ij}} \right]}_{n\times n}}{{\left\{ {{v}_{j}} \right\}}_{n\times 1}}={{\left\{ {{F}_{i}} \right\}}_{n\times 1}} $$     (3.8)

Find
Show that $$ \displaystyle \underline{w}.\underline{\underline{P}}(\underline{v})=0$$ is equivalent to $$\displaystyle {{\underline{a}}_{i}}.\underline{\underline{P}}(\underline{v})=0 $$

Solution
For i = 1, ..., n we have n equations and n unknowns. Since the components of vector w are arbitrary, we can choose them at our convenience in order to construct the proof.

Choice 1: $$ \displaystyle \beta _{1}=1,\ \beta _{2}=\cdots =\beta _{n}=0$$; then we can observe:

$$ \displaystyle \underline{w}={{\underline{a}}_{1}} $$     (3.9)

$$ \displaystyle \left\{ {{\underline{b}}_{1}},\ldots,{{\underline{b}}_{n}} \right\}=\left\{ {{\underline{a}}_{1}},\ldots,{{\underline{a}}_{n}} \right\}\Rightarrow {{\underline{a}}_{1}}={{\underline{b}}_{1}} $$     (3.10)

$$ \displaystyle {{\underline{a}}_{1}}.\underline{\underline{P}}(\underline{v})=0 $$     (3.11)

Choice 2: $$ \displaystyle \left\{ \beta _{1},\ldots,\beta _{n} \right\}=\left\{ 0,1,0,\ldots,0 \right\}$$; then:


$$ \displaystyle \underline{w}={{\underline{a}}_{2}} $$     (3.12)

$$ \displaystyle \left\{ {{\underline{b}}_{1}},\ldots,{{\underline{b}}_{n}} \right\}=\left\{ {{\underline{a}}_{1}},\ldots,{{\underline{a}}_{n}} \right\}\Rightarrow {{\underline{a}}_{2}}={{\underline{b}}_{2}} $$     (3.13)

$$ \displaystyle {{\underline{a}}_{2}}.\underline{\underline{P}}(\underline{v})=0 $$     (3.14)

$$ \displaystyle \vdots $$

Choice n: $$ \displaystyle \left\{ \beta _{1},\ldots,\beta _{n} \right\}=\left\{ 0,\ldots,0,1 \right\} $$


$$ \displaystyle \underline{w}={{\underline{a}}_{n}} $$     (3.15)

$$ \displaystyle \left\{ {{\underline{b}}_{1}},\ldots,{{\underline{b}}_{n}} \right\}=\left\{ {{\underline{a}}_{1}},\ldots,{{\underline{a}}_{n}} \right\}\Rightarrow {{\underline{a}}_{n}}={{\underline{b}}_{n}} $$     (3.16)

$$ \displaystyle {{\underline{a}}_{n}}.\underline{\underline{P}}(\underline{v})=0 $$     (3.17)

So we can say that by selecting proper components for vector w we arrive at the same equation that was given as Eq. (3.1). Actually Eq. (3.1) is a more general case than Eq. (3.2). We can also broaden the combinations of the components of vector w. For example:


$$ \displaystyle \left\{ \beta _{1},\ldots,\beta _{n} \right\}=\left\{ 1,1,1,0,\ldots,0 \right\} $$     (3.18)

$$ \displaystyle {{\underline{w}}_{1}}={{\beta }_{1}}{{\underline{a}}_{1}}+{{\beta }_{2}}{{\underline{a}}_{2}}+{{\beta }_{3}}{{\underline{a}}_{3}} $$     (3.19)

$$ \displaystyle {{\underline{w}}_{1}}={{\underline{b}}_{1}} $$     (3.20)

$$ \displaystyle {{\underline{b}}_{1}}={{\beta }_{1}}{{\underline{a}}_{1}}+{{\beta }_{2}}{{\underline{a}}_{2}}+{{\beta }_{3}}{{\underline{a}}_{3}} $$     (3.21)

This is 'nearly' the same as a basis vector $$\displaystyle {{\underline{b}}_{n}}$$, and we can validate $$\displaystyle {{\underline{b}}_{n}}.\underline{\underline{P}}(\underline{v})=0$$. But after choosing all such vectors w, we have to check that the determinant of their Gram matrix is not equal to zero. Then we can say that our new vectors form a new arbitrary 'linearly independent' family.

Expressing vector b in terms of the orthonormal vectors a is exactly the same as the case presented in meeting 8-2, because in that case we had validated that the scalar product of any linearly independent function with $$\displaystyle \underline{\underline{P}}(\underline{v})$$ is equal to zero. Therefore we can always express a linearly independent basis 'vector b' with our orthonormal vectors.
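The equivalence can also be illustrated numerically with the data of Problem 2.2: once $$ {{\underline{a}}_{i}}.\underline{\underline{P}}(\underline{v})=0 $$ holds for every i, any combination $$\underline w = \sum \beta_i \underline a_i$$ annihilates the residual by linearity. A Python/NumPy sketch; the coefficients `beta` are arbitrary illustrative values:

```python
import numpy as np

B = np.array([[1.0,  1.0, 1.0],
              [2.0, -1.0, 3.0],
              [3.0,  2.0, 6.0]])
v = np.array([5.0, -7.0, -4.0])

# Solve the Galerkin system, then form the residual
# P(v) = sum_j d_j b_j - v (all in a-basis components).
d = np.linalg.solve(B @ B.T, B @ v)
r = B.T @ d - v

# a_i . r is simply the i-th component of r, so w . r is a linear
# combination of those components and vanishes for ANY beta.
beta = np.array([2.0, -1.0, 0.5])   # arbitrary illustrative coefficients
print(np.max(np.abs(r)), beta @ r)  # both ~ 0
```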

Eml5526.s11.team2.oztekin (talk) 00:01, 2 February 2011 (UTC)

= Problem 2.7: Determining orthogonality of a family of functions =

Given
The problem was stated in meeting 10-3. Our family of functions is:


$$ \displaystyle F=\left \{ 1,x,x^{2},x^{3},x^{4} \right \} $$     (7.1)

A test of the linear independence of a family of functions can be done by setting up the 'Gram matrix'. If linear independence exists between the functions of the family, the determinant of their Gram matrix must be non-zero.


$$ \displaystyle \mathbf{\Gamma } \left ( b_{1}(x),\ldots,b_{n}(x) \right )=\left [< b_{i},b_{j} >\right ]_{n\times n} $$     (7.2)

$$ \displaystyle \mathbf{\Gamma_{ij}}=<b_{i},b_{j}>=\int_{\Omega }b_{i}(x)b_{j}(x)dx $$     (7.3)

We can identify our basis functions as :


$$ \displaystyle b_{1}(x)=1 $$     (7.4)

$$ \displaystyle b_{2}(x)=x $$     (7.5)

$$ \displaystyle b_{3}(x)=x^{2} $$     (7.6)

$$ \displaystyle b_{4}(x)=x^{3} $$     (7.7)

$$ \displaystyle b_{5}(x)=x^{4} $$     (7.8)

Find
Find whether this family of basis functions is orthogonal over the domain $$\displaystyle\Omega =\left [ 0,1 \right ] $$.

Solution
First we have to set up our gram matrix.


$$ \displaystyle \mathbf{\Gamma } =\begin{bmatrix} b_{11} & b_{12} & b_{13} & b_{14} & b_{15} \\ b_{21} & b_{22} & b_{23} & b_{24} & b_{25} \\ b_{31} & b_{32} & b_{33} & b_{34} & b_{35} \\ b_{41} & b_{42} & b_{43} & b_{44} & b_{45} \\ b_{51} & b_{52} & b_{53} & b_{54} & b_{55} \end{bmatrix} $$     (7.9)

Now we can look at the components of this matrix. Since the scalar product is symmetric, the components that are symmetric about the diagonal of this matrix will be equal to each other.


$$ \displaystyle b_{11}=<b_{1},b_{1}>=\int_{0}^{1}1.1dx=1 $$     (7.10)

$$ \displaystyle b_{22}=<b_{2},b_{2}>=\int_{0}^{1}x.xdx=\frac{1}{3} $$     (7.11)

$$ \displaystyle b_{33}=<b_{3},b_{3}>=\int_{0}^{1}x^{2}x^{2}dx=\frac{1}{5} $$     (7.12)

$$ \displaystyle b_{44}=<b_{4},b_{4}>=\int_{0}^{1}x^{3}x^{3}dx=\frac{1}{7} $$     (7.13)

$$ \displaystyle b_{55}=<b_{5},b_{5}>=\int_{0}^{1}x^{4}x^{4}dx=\frac{1}{9} $$     (7.14)

$$ \displaystyle b_{12}=b_{21}=<b_{1},b_{2}>=\int_{0}^{1}xdx=\frac{1}{2} $$     (7.15)

$$ \displaystyle b_{13}=b_{31}=<b_{1},b_{3}>=\int_{0}^{1}x^{2}dx=\frac{1}{3} $$     (7.16)

$$ \displaystyle b_{14}=b_{41}=<b_{1},b_{4}>=\int_{0}^{1}x^{3}dx=\frac{1}{4} $$     (7.17)

$$ \displaystyle b_{15}=b_{51}=<b_{1},b_{5}>=\int_{0}^{1}x^{4}dx=\frac{1}{5} $$     (7.18)

$$ \displaystyle b_{23}=b_{32}=<b_{2},b_{3}>=\int_{0}^{1}x.x^{2}dx=\frac{1}{4} $$     (7.19)

$$ \displaystyle b_{24}=b_{42}=<b_{2},b_{4}>=\int_{0}^{1}x.x^{3}dx=\frac{1}{5} $$     (7.20)

$$ \displaystyle b_{34}=b_{43}=<b_{3},b_{4}>=\int_{0}^{1}x^{2}.x^{3}dx=\frac{1}{6} $$     (7.21)

$$ \displaystyle b_{45}=b_{54}=<b_{4},b_{5}>=\int_{0}^{1}x^{3}.x^{4}dx=\frac{1}{8} $$     (7.22)

$$ \displaystyle b_{25}=b_{52}=<b_{2},b_{5}>=\int_{0}^{1}x.x^{4}dx=\frac{1}{6} $$     (7.23)

$$ \displaystyle b_{35}=b_{53}=<b_{3},b_{5}>=\int_{0}^{1}x^{2}.x^{4}dx=\frac{1}{7} $$     (7.24)

We can conclude with:


$$ \displaystyle \mathbf{\Gamma }=\begin{bmatrix} 1 & \frac{1}{2} & \frac{1}{3} &\frac{1}{4} &\frac{1}{5} \\ \frac{1}{2}& \frac{1}{3} & \frac{1}{4} &\frac{1}{5} &\frac{1}{6} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} &\frac{1}{7} \\ \frac{1}{4}& \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8}\\ \frac{1}{5} & \frac{1}{6}&\frac{1}{7}  & \frac{1}{8} & \frac{1}{9} \end{bmatrix} $$     (7.25)
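The Gram matrix above is the 5×5 Hilbert matrix, since $$<x^{i-1},x^{j-1}>=\int_0^1 x^{i+j-2}dx=1/(i+j-1)$$. A NumPy sketch of the determinant check (MATLAB was used in the original work):

```python
import numpy as np

# Gram matrix of {1, x, x^2, x^3, x^4} on [0, 1]:
# entry (i, j) is 1/(i + j - 1), the 5x5 Hilbert matrix.
n = 5
G = np.array([[1.0 / (i + j - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

det_G = np.linalg.det(G)
print(det_G)  # ~ 3.7493e-12: tiny but non-zero, so linearly independent
```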

We can easily take the determinant of the Gram matrix by using a solver. MATLAB gives the determinant as $$\displaystyle 3.7493\times 10^{-12}$$, which is non-zero, so the family is linearly independent. The Gram matrix is 'symmetric', so its transpose is the same as itself; but for orthogonality the transpose has to be the same as the inverse. The inverse of the Gram matrix can easily be obtained from MATLAB as:


$$ \displaystyle \mathbf{\Gamma ^{-1}}=1.0\times {{10}^{5}}\times \left[ \begin{matrix} 0.0002 & -0.003 & 0.0105 & -0.014 & 0.0063 \\ -0.003 & 0.0480 & -0.189 & 0.2688 & -0.126 \\ 0.0105 & -0.189 & 0.7938 & -1.176 & 0.5670 \\ -0.014 & 0.2688 & -1.176 & 1.792 & -0.882 \\ 0.0063 & -0.126 & 0.5670 & -0.882 & 0.4410 \\ \end{matrix} \right] $$     (7.26)
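The check can be reproduced numerically; a Python/NumPy sketch (not the original MATLAB) comparing $$\Gamma^{-1}$$ with $$\Gamma^{T}$$ and also inspecting the off-diagonal inner products, which would all be zero for an orthogonal family:

```python
import numpy as np

# Rebuild the Gram matrix of {1, x, ..., x^4} on [0, 1].
n = 5
G = np.array([[1.0 / (i + j - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

G_inv = np.linalg.inv(G)

# Off-diagonal entries are the cross inner products <b_i, b_j>, i != j;
# they would all vanish for an orthogonal family of functions.
off_diag = G - np.diag(np.diag(G))
print(np.allclose(G_inv, G.T))   # False: the inverse differs from the transpose
print(np.max(np.abs(off_diag)))  # 0.5: off-diagonal inner products are non-zero
```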

The inverse of the matrix is not the same as its transpose, so the family is not orthogonal. Eml5526.s11.team2.oztekin (talk) 00:01, 2 February 2011 (UTC)
