User:Cloudmichael

The first introduction to vectors presents them as objects having magnitude and direction. The general definition is then given as a set of axioms that subsumes that original one. In a first course in linear algebra, theorems establish that a vector space has a dimension, and that any vector may be expressed as a linear combination of linearly independent vectors. Matrices are introduced to transform the representation of a vector, and the matrix product is defined consistently with functional composition, in order to transform sequentially between representations; this includes multiplying components from a field by an ordered n-tuple of linearly independent vectors to produce a linear combination. A vector space over a field is the set of all linear combinations of components from the field with a full set of linearly independent vectors generating that vector space. It is easy to see that the complex plane, using the set of linearly independent vectors (1, i), is a vector space over the field of real numbers. Usually, a set of linearly independent vectors (let's call them base vectors for brevity) is considered abstractly; but let's consider some concrete examples. 1 and i satisfy the following multiplication (Cayley) table:

$$ \begin{array}{c|cc} \times & 1 & i \\ \hline 1 & 1 & i \\ i & i & -1 \end{array} $$
Similarly, the base vectors:

$$ \mathbf{I}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} , \mathbf{J}=\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} $$

under ordinary matrix multiplication, $$\mathbf{I}$$ and $$\mathbf{J}$$ satisfy the following Cayley table:

$$ \begin{array}{c|cc} \times & \mathbf{I} & \mathbf{J} \\ \hline \mathbf{I} & \mathbf{I} & \mathbf{J} \\ \mathbf{J} & \mathbf{J} & -\mathbf{I} \end{array} $$
Clearly, there is a one-to-one correspondence between the elements of these two spaces and between the mappings produced by the binary operations defined on them. These are simple examples of algebras, and we have just seen what is termed a homomorphism (when a field, F, is specified, the algebra is often termed an F-algebra).
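This correspondence can be checked numerically. The sketch below is an illustration added here (the helper name `rep` is hypothetical): it maps $$a + bi$$ to $$a\mathbf{I} + b\mathbf{J}$$ and confirms that multiplication is preserved.

```python
import numpy as np

# Matrix counterparts of the base vectors 1 and i
I = np.eye(2)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def rep(z: complex) -> np.ndarray:
    """Map a + bi to a*I + b*J."""
    return z.real * I + z.imag * J

# J plays the role of i (J @ J == -I), and rep preserves both
# addition and multiplication: rep(z * w) == rep(z) @ rep(w).
```

Since J squares to −I, the map is an algebra homomorphism (in fact an isomorphism onto its image).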

Now that we are thinking of base vectors as matrices, consider:

$$ \mathbf{i}=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}  , \mathbf{j}=\begin{bmatrix} 0 & 1 \\ 1 & 0  \end{bmatrix}  , \mathbf{k}=\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} $$

Olacapato, the highest locality of Argentina (4,090 m above sea level), lies on the General Manuel Belgrano Railroad, on the branch used by the Tren a las Nubes (Train to the Clouds), next to National Route 51, 2.5 km north of Olacapato Chico and 45 km west of San Antonio de los Cobres.

The previous census, of 1991, indicated a population of 186 inhabitants (INDEC, 2001), classified as a dispersed rural population. Postal code: A4413

The Cassano Particular Solution Formula
Concerning second order linear ordinary differential equations, it has long been well known that
 * $$ y = e^{\int s \, dx } \Rightarrow y'' + Py' + \left ( -s' -s^2 -sP \right ) y = 0 . $$

So, if $$ y_h $$ is a solution of: $$ y'' + Py' + Qy = 0 $$, then $$ \exists s = {y_h' \over y_h} $$ such that: $$ Q = -s' - s^2 - sP. $$

So, if $$ y_h $$ is a solution of $$ y'' + Py' + Qy = 0 $$, then a particular solution $$ y_p $$ of $$ y'' + Py' + Qy = W $$ is given by the Cassano particular solution formula:
 * $$ y_p = y_h \int { \left ( { 1 \over y_h^2} \int W y_h e^{\int P \,dx \,} \, dx \right ) e^{- \int P \, dx } \, dx }$$.
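The formula can be verified symbolically on a concrete example. The sketch below is an illustration added here: the equation $$ y'' - y = e^{2x} $$ with $$ y_h = e^x $$ is a chosen test case, not from the text. It evaluates the nested integrals with sympy and checks that the result satisfies the inhomogeneous equation.

```python
import sympy as sp

x = sp.symbols('x')

# Test case: y'' + P y' + Q y = W with P = 0, Q = -1, W = e^(2x);
# the homogeneous equation y'' - y = 0 has solution y_h = e^x.
P, Q, W = sp.Integer(0), sp.Integer(-1), sp.exp(2*x)
y_h = sp.exp(x)

# Cassano particular solution formula:
# y_p = y_h ∫ ( (1/y_h²) ∫ W y_h e^(∫P dx) dx ) e^(-∫P dx) dx
EP = sp.exp(sp.integrate(P, x))                     # e^(∫P dx)
inner = sp.integrate(W * y_h * EP, x)               # ∫ W y_h e^(∫P dx) dx
y_p = y_h * sp.integrate(inner / (y_h**2 * EP), x)  # outer integral

# residual of the full ODE; it simplifies to 0
residual = sp.simplify(sp.diff(y_p, x, 2) + P*sp.diff(y_p, x) + Q*y_p - W)
```

Note that dropped constants of integration only add homogeneous solutions, so the residual vanishes regardless of which antiderivatives sympy returns.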




The inhomogeneous four-vector Klein-Gordon equation is simply the Klein-Gordon equation in each of the four-vector components, in the presence of current densities. When these components are complex (or doublets) it essentially has eight degrees of freedom (dimensions), although, since the trivial solution is always a solution, extra dimensions are not necessarily imposed. Analogously to Maxwell's equations (and the inhomogeneous four-vector wave equation), the four-vector inhomogeneous Klein-Gordon equation may be written as a matrix product as follows:



The Klein-Gordon equation is: $$ (\Box + \mu^2) \psi = 0. $$
 * $$\mathbf{j^\mu = \left ( \Box - {\lVert m \rVert}^2 \right ) A^\mu = } \begin{pmatrix} {\partial}_{0} & {\partial}_{3} & {-\partial}_{2} & {\partial}_{1} \\ {-\partial}_{3} & {\partial}_{0} & {\partial}_{1} & {\partial}_{2} \\ {\partial}_{2} & {-\partial}_{1} & {\partial}_{0} & {\partial}_{3} \\ {\partial}_{1} & {\partial}_{2} & {\partial}_{3} & {-\partial}_{0} \end{pmatrix} \begin{pmatrix} {\partial}_{0} & {-\partial}_{3} & {\partial}_{2} & {\partial}_{1} \\ {\partial}_{3} & {\partial}_{0} & {-\partial}_{1} & {\partial}_{2} \\ {-\partial}_{2} & {\partial}_{1} & {\partial}_{0} & {\partial}_{3} \\ {\partial}_{1} & {\partial}_{2} & {\partial}_{3} & {-\partial}_{0} \end{pmatrix} \begin{pmatrix} A^1 \\ A^2 \\ A^3 \\ A^0 \end{pmatrix}. $$

$$ \mathbf{ = } \begin{pmatrix} {\partial}_{0} & {-\partial}_{3} & {\partial}_{2} & {\partial}_{1} \\ {\partial}_{3} & {\partial}_{0} & {-\partial}_{1} & {\partial}_{2} \\ {-\partial}_{2} & {\partial}_{1} & {\partial}_{0} & {\partial}_{3} \\ {\partial}_{1} & {\partial}_{2} & {\partial}_{3} & {-\partial}_{0} \end{pmatrix} \begin{pmatrix} {\partial}_{0} & {\partial}_{3} & {-\partial}_{2} & {\partial}_{1} \\ {-\partial}_{3} & {\partial}_{0} & {\partial}_{1} & {\partial}_{2} \\ {\partial}_{2} & {-\partial}_{1} & {\partial}_{0} & {\partial}_{3} \\ {\partial}_{1} & {\partial}_{2} & {\partial}_{3} & {-\partial}_{0} \end{pmatrix} \begin{pmatrix} A^1 \\ A^2 \\ A^3 \\ A^0 \end{pmatrix}. $$

The four-vector wave equation may be written as a matrix product. This, when applied to a four-vector, gives a matrix-product definition of the d'Alembertian operator. The form may, of course, vary by respective columns and rows, as well as by transformations, but one form is:
 * $$\mathbf{j^\mu = \Box A^\mu = } \begin{pmatrix} {\partial}_{0} & {\partial}_{3} & {-\partial}_{2} & {\partial}_{1} \\ {-\partial}_{3} & {\partial}_{0} & {\partial}_{1} & {\partial}_{2} \\ {\partial}_{2} & {-\partial}_{1} & {\partial}_{0} & {\partial}_{3} \\ {\partial}_{1} & {\partial}_{2} & {\partial}_{3} & {-\partial}_{0} \end{pmatrix} \begin{pmatrix} {\partial}_{0} & {-\partial}_{3} & {\partial}_{2} & {\partial}_{1} \\ {\partial}_{3} & {\partial}_{0} & {-\partial}_{1} & {\partial}_{2} \\ {-\partial}_{2} & {\partial}_{1} & {\partial}_{0} & {\partial}_{3} \\ {\partial}_{1} & {\partial}_{2} & {\partial}_{3} & {-\partial}_{0} \end{pmatrix} \begin{pmatrix} A^1 \\ A^2 \\ A^3 \\ A^0 \end{pmatrix}. $$

$$ \mathbf{ = } \begin{pmatrix} {\partial}_{0} & {-\partial}_{3} & {\partial}_{2} & {\partial}_{1} \\ {\partial}_{3} & {\partial}_{0} & {-\partial}_{1} & {\partial}_{2} \\ {-\partial}_{2} & {\partial}_{1} & {\partial}_{0} & {\partial}_{3} \\ {\partial}_{1} & {\partial}_{2} & {\partial}_{3} & {-\partial}_{0} \end{pmatrix} \begin{pmatrix} {\partial}_{0} & {\partial}_{3} & {-\partial}_{2} & {\partial}_{1} \\ {-\partial}_{3} & {\partial}_{0} & {\partial}_{1} & {\partial}_{2} \\ {\partial}_{2} & {-\partial}_{1} & {\partial}_{0} & {\partial}_{3} \\ {\partial}_{1} & {\partial}_{2} & {\partial}_{3} & {-\partial}_{0} \end{pmatrix} \begin{pmatrix} A^1 \\ A^2 \\ A^3 \\ A^0 \end{pmatrix}. $$
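The matrix factorization can be checked symbolically. In the sketch below (an illustration added here), commuting symbols d0..d3 stand in for the operators $$\partial_0 .. \partial_3$$; since the second matrix factor is the transpose of the first, the product reduces to $$(\partial_0^2 + \partial_1^2 + \partial_2^2 + \partial_3^2)$$ times the identity. Note that this sum of squares equals the d'Alembertian only under a convention in which all four second derivatives carry the same sign (e.g. an imaginary time coordinate); the text does not state its metric convention, so that identification is an assumption.

```python
import sympy as sp

# Commuting stand-ins for the partial-derivative operators ∂0..∂3
d0, d1, d2, d3 = sp.symbols('d0 d1 d2 d3')

# First matrix factor from the text; the second factor is its transpose.
D = sp.Matrix([
    [ d0,  d3, -d2,  d1],
    [-d3,  d0,  d1,  d2],
    [ d2, -d1,  d0,  d3],
    [ d1,  d2,  d3, -d0],
])

# The off-diagonal entries cancel pairwise; the product is diagonal.
P = (D * D.T).applyfunc(sp.expand)
```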

Identity Subspaces

For $$\mathbb{R}$$-algebra $$\left ( \left ( \mathbb{R}, \circledast ,\mathbf{V} \right )_{v}, \circ \right )_{a} , \mathbf{V}$$ has an identity subspace whenever there exists a Left Handed Identity Element $$\left ( \mathbf{I}_{L} \right )$$, a Right Handed Identity Element $$\left ( \mathbf{I}_{R} \right )$$, or a Single Identity Element $$\left ( \mathbf{I}_{B} \right )$$: the subspace is $$span \left ( \mathbf{I}_{L} \right ), span \left ( \mathbf{I}_{R} \right ) , span \left ( \mathbf{I}_{B} \right ) $$, respectively. Whenever this identity subspace exists, it is denoted $$\mathbf{V}_{t}$$. Note: $$\mathbf{I}_{X} $$ is used to represent an Identity Element of any type.

For $$\mathbb{R}$$-algebra $$\left ( \left ( \mathbb{R}, \circledast ,\mathbf{V} \right )_{v}, \circ \right )_{a} $$, suppose $$\mathbf{V}$$ has an identity subspace, and $$\mathbf{A} \equiv \mathbf{A}_{t} + \mathbf{A}_{s} \in{\mathbf{V}} $$, where: $$\mathbf{A}_{t} \in{span \left ( \mathbf{I}_{X} \right ) }, \mathbf{A}_{s} \in{\mathbf{V} \backslash span \left ( \mathbf{I}_{X} \right ) } $$; then a conjugate ( ∗ ) of $$\mathbf{A} $$ may be defined by: $$\mathbf{A}^{\ast} \equiv \mathbf{A}_{t} - \mathbf{A}_{s} $$. Clearly, $$\mathbf{A}_{t}, \mathbf{A}_{s} $$ are linearly independent, and $$\mathbf{A}_{s} $$ may be expressed as a linear combination of linearly independent vectors not including $$\mathbf{I}_{X} $$. So: $$\mathbf{A}_{s} \equiv \tfrac{1}{2} \circledast \left ( \mathbf{A} - \mathbf{A}^{\ast} \right ) \in{\mathbf{V}} $$ ,

$$\mathbf{A}_{t} \equiv \tfrac{1}{2} \circledast \left ( \mathbf{A} + \mathbf{A}^{\ast} \right ) \in{\mathbf{V}} $$.

For $$\mathbb{R}$$-algebra $$\left ( \left ( \mathbb{R}, \circledast ,\mathbf{V} \right )_{v}, \circ \right )_{a} $$, if $$ \mathbf{V}$$ has an identity subspace, given by $$\mathbf{V}_{t} $$, then the set of all vectors within $$ \mathbf{V} $$ expressible as a linear combination of vectors not including $$\mathbf{I}_{X} $$ is denoted $$\mathbf{V}_{s} $$.

For $$\mathbb{R}$$-algebra $$\left ( \left ( \mathbb{R}, \circledast ,\mathbf{V} \right )_{v}, \circ \right )_{a} $$, if $$ \mathbf{V}$$ has an identity subspace, then its identity subspace type is defined as follows: LEFT: whenever there is a Left Handed Identity Element $$\left ( \mathbf{I}_{L} \right ) $$; RITE: whenever there is a Right Handed Identity Element $$\left ( \mathbf{I}_{R} \right ) $$; BOTH: whenever there is a Left Handed Identity Element $$\left ( \mathbf{I}_{L} \right ) $$ and a Right Handed Identity Element $$\left ( \mathbf{I}_{R} \right ) $$; NONE: whenever there is NO Identity Element.

Definition 2-17: For an algebra ((R,⊛,S),∘), the ⊙-product is defined such that: ∀u,v∈S,u⊙v≡(1/2)[(u+[u^{∗}-u]δ_{I_{X}∘u}^{u∘I_{X}})∘v +(v+[v^{∗}-v]δ_{I_{X}∘v}^{v∘I_{X}})∘u], where: δ_{I_{X}∘v}^{v∘I_{X}}=δ_{I_{X}∘u}^{u∘I_{X}}=0, whenever no identity subspace exists.

Definition 2-18: For an algebra ((R,⊛,S),∘), the ×-product is defined such that: ∀u,v∈S,u×v≡(1/2)[(u+[u^{∗}-u]δ_{I_{X}∘u}^{u∘I_{X}})∘v -(v+[v^{∗}-v]δ_{I_{X}∘v}^{v∘I_{X}})∘u], where: δ_{I_{X}∘v}^{v∘I_{X}}=δ_{I_{X}∘u}^{u∘I_{X}}=0, whenever no identity subspace exists.

Theorem 2-9: For an algebra ((R,⊛,S),∘), with  ⊙-product and ×-product: ∀u,v∈S,u⊙v=v⊙u ∀u,v∈S,u×v=-v×u

Theorem 2-10: For an algebra ((R,⊛,S),∘), with  ⊙-product and ×-product: ∀u,v∈S,u∘v=u⊙v+u×v-([u^{∗}-u]δ_{I_{X}∘u}^{u∘I_{X}})∘v

Theorem 2-11: For an R-algebra ((R,⊛,S),∘), with  ⊙-product and ×-product: if the identity subspace type is BOTH, then: ∀u,v∈S,u∘v=u^{∗}⊙v+u^{∗}×v ∀u,v∈S,u⊙v=(1/2)(u^{∗}∘v+v^{∗}∘u) ∀u,v∈S,u×v=(1/2)(u^{∗}∘v-v^{∗}∘u) , if the identity subspace type is NOT BOTH, then: ∀u,v∈S,u∘v=u⊙v+u×v ∀u,v∈S,u⊙v=(1/2)(u∘v+v∘u) ∀u,v∈S,u×v=(1/2)(u∘v-v∘u)

Theorem 2-12: For an R-algebra ((R,⊛,S),∘), with  ×-product, if its identity subspace type is BOTH, then: ∀u,v∈S,u×v=u_{t}⊛v_{s}-v_{t}⊛u_{s}+u_{s}×v_{s}.

Theorem 2-13: For an R-algebra ((R,⊛,S),∘), with  ⊙-product, if its identity subspace type is BOTH, then: ∀u,v∈S,u⊙v=(u_{t}v_{t})⊛I_{B}+u_{s}⊙v_{s}.

Definition 2-19: For an R-algebra ((R,⊛,S),∘), with  ⊙-product and ×-product: if ∀u,v∈S_{s}: u×v∈S_{s} , and u⊙v∈S_{t}  , then the R-algebra has a space-time structure.

Theorem 2-14: For an R-algebra ((R,⊛,S),∘), with  ⊙-product and ×-product: if its identity subspace type is BOTH, and the R-algebra has a space-time structure, then: ∀u,v∈S,u⊙v∈S_{t}, and ∀u,v∈S,u×v∈S_{s}.
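As a concrete sanity check, the quaternions form an R-algebra whose identity subspace type is BOTH (with I_B = 1, S_t the real axis, and S_s the pure-imaginary part) — this instantiation, and the helper names below, are illustrative additions, not from the text. The sketch implements the BOTH-case forms u⊙v = ½(u*∘v + v*∘u) and u×v = ½(u*∘v − v*∘u) from Theorem 2-11 and checks the symmetry claims of Theorem 2-9 and the space-time structure of Theorem 2-14.

```python
import numpy as np

def qmul(u, v):
    """Hamilton product of quaternions stored as (t, x, y, z)."""
    t1, x1, y1, z1 = u
    t2, x2, y2, z2 = v
    return np.array([
        t1*t2 - x1*x2 - y1*y2 - z1*z2,
        t1*x2 + x1*t2 + y1*z2 - z1*y2,
        t1*y2 - x1*z2 + y1*t2 + z1*x2,
        t1*z2 + x1*y2 - y1*x2 + z1*t2,
    ])

def conj(u):
    """Conjugate A* = A_t - A_s: negate the S_s (imaginary) part."""
    return u * np.array([1.0, -1.0, -1.0, -1.0])

def odot(u, v):   # ⊙-product, identity subspace type BOTH
    return (qmul(conj(u), v) + qmul(conj(v), u)) / 2

def cross(u, v):  # ×-product, identity subspace type BOTH
    return (qmul(conj(u), v) - qmul(conj(v), u)) / 2
```

For random quaternions, u⊙v = v⊙u and u×v = −v×u (Theorem 2-9), u⊙v lands in S_t (only the t component is nonzero) while u×v lands in S_s, and u∘v = u*⊙v + u*×v (Theorem 2-11).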

Definition 2-24: A Lorentz-Minkowski Vectric Space is a vectric space, (((R,⊛,S),∘),d), with space-time structure, such that: ∀u,v∈S,  u×v∈S_{s}  , u⊙v∈S_{t}, and u_{t}⊙v_{s}+u_{s}⊙v_{t}=0

Definition 3-9: An associative F-algebra ((F,⊛,S),∘)^{A} is an F-algebra ((F,⊛,S),∘) such that: ∀x,y,z∈S,  (x∘y)∘z=x∘(y∘z).

Theorem 3-13a: In an n-dimensional associative R-algebra ((R,⊛,S)_{n},∘)^{A}, with product coeffs β_{ij}^{m}, contravariant basis: (u_{i})_{n}∋ u_{j}=u_{j}(x^{i})_{n}, (∀j∈Z⁺∋1≤j≤n) ,  and vector function f:Rⁿ⇉Rⁿ ,f=f(f^{j}(x^{i})_{n})_{n}=f(x) ; if: ∃f₁∈Rⁿ∋f₁=δf∘(δr)⁻¹, is continuous for arbitrary  δr , then:  ∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}-β_{mk}^{j}f_{;h}^{m})=0

Proof: f₁∘δr=(δf∘(δr)⁻¹)∘δr=δf∘((δr)⁻¹∘δr)=δf.
So: f₁∘∑_{h=1}ⁿ(u_{h}δx^{h})=∑_{j=1}ⁿ(∑_{h=1}ⁿ(((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(∑_{m=1}ⁿ(f^{k}L_{m}^{j}((∂/(∂x^{h}))Γ_{k}^{m}|_{x=ξ})))δx^{h}⊛u_{j}),
for arbitrary δx^{h}, so, ∀h∈Z⁺∋1≤h≤n:
f₁∘u_{h}=∑_{j=1}ⁿ(∑_{h=1}ⁿ(((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(∑_{m=1}ⁿ(f^{k}L_{m}^{j}((∂/(∂x^{h}))Γ_{k}^{m}|_{x=ξ})))⊛u_{j}),
and, so, in the limit (A5-1 & 2), as x→ξ:
f₁∘u_{h}=∑_{j=1}ⁿ((((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(f^{k}{j/(hk)}))⊛u_{j}) (using Definition 3-7a), or:
f₁∘u_{h}=∑_{j=1}ⁿ(f_{;h}^{j}⊛u_{j}).
Now, ∀h,k∈Z⁺∋1≤h,k≤n: (f₁∘u_{h})∘u_{k}=f₁∘(u_{h}∘u_{k}).
∴ (∑_{j=1}ⁿf_{;h}^{j}⊛u_{j})∘u_{k}=f₁∘(∑_{m=1}ⁿ(β_{hk}^{m}⊛u_{m}))
=∑_{m=1}ⁿ(β_{hk}^{m}⊛(f₁∘u_{m}))
=∑_{m=1}ⁿ(β_{hk}^{m}⊛(∑_{j=1}ⁿf_{;m}^{j}⊛u_{j}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}⊛u_{j}))         (∗1)
(∑_{j=1}ⁿf_{;h}^{j}⊛u_{j})∘u_{k}=∑_{j=1}ⁿ(f_{;h}^{j}⊛(u_{j}∘u_{k}))
=∑_{j=1}ⁿ(f_{;h}^{j}⊛(∑_{m=1}ⁿβ_{jk}^{m}⊛u_{m}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(f_{;h}^{j}β_{jk}^{m}⊛u_{m}))
=∑_{m=1}ⁿ(∑_{j=1}ⁿ(f_{;h}^{m}β_{mk}^{j}⊛u_{j}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(f_{;h}^{m}β_{mk}^{j}⊛u_{j}))         (∗2)
∴ ∑_{j=1}ⁿ(∑_{m=1}ⁿ((β_{hk}^{m}f_{;m}^{j}-β_{mk}^{j}f_{;h}^{m})⊛u_{j}))=0, and, so:
∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}-β_{mk}^{j}f_{;h}^{m})=0 □

Theorem 3-13b: In an n-dimensional associative R-algebra ((R,⊛,S)_{n},∘)^{A}, with product coeffs β_{ij}^{m}, contravariant basis: (u_{i})_{n}∋ u_{j}=u_{j}(x^{i})_{n},     (∀j∈Z⁺∋1≤j≤n) , and vector function f:Rⁿ⇉Rⁿ ,f=f(f^{j}(x^{i})_{n})_{n}=f(x) ; if: ∃f₂∈Rⁿ∋f₂=lim_{dr→0}(dr)⁻¹∘df, is continuous, then:  ∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}-β_{hm}^{j}f_{;k}^{m})=0

Proof: If ∃f₂∈S∋f₂=lim_{r→0}(dr)⁻¹∘df, then lim_{r→0}(dr∘f₂)=lim_{r→0}(dr∘lim_{r→0}((dr)⁻¹∘df))=lim_{r→0}((dr∘(dr)⁻¹)∘df)=lim_{r→0}df.
∴ lim_{r→0}∑_{h=1}ⁿ(u_{h}dx^{h})∘f₂
=lim_{r→0}∑_{j=1}ⁿ(∑_{h=1}ⁿ(((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(∑_{m=1}ⁿ(f^{k}L_{m}^{j}((∂/(∂x^{h}))Γ_{k}^{m}|_{x=ξ})))dx^{h}⊛u_{j})
=lim_{r→0}∑_{j=1}ⁿ(∑_{h=1}ⁿ(((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(f^{k}{j/(hk)}))dx^{h}⊛u_{j})
for arbitrary dx^{h}, which are linearly independent, so:
u_{h}∘f₂=∑_{j=1}ⁿ(f_{;h}^{j}⊛u_{j}).
And, so, in the limit, as x→ξ, ∀h∈Z⁺∋1≤h≤n:
u_{h}∘f₂=∑_{j=1}ⁿ(f_{;h}^{j}⊛u_{j}).
Now, ∀h,k∈Z⁺∋1≤h,k≤n: u_{k}∘(u_{h}∘f₂)=(u_{k}∘u_{h})∘f₂.
∴ u_{k}∘(∑_{j=1}ⁿf_{;h}^{j}⊛u_{j})=(∑_{m=1}ⁿ(β_{kh}^{m}⊛u_{m}))∘f₂
=∑_{m=1}ⁿ(β_{kh}^{m}⊛(u_{m}∘f₂))
=∑_{m=1}ⁿ(β_{kh}^{m}(∑_{j=1}ⁿf_{;m}^{j}⊛u_{j}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(β_{kh}^{m}f_{;m}^{j}⊛u_{j}))          (∗1)
u_{k}∘(∑_{j=1}ⁿf_{;h}^{j}⊛u_{j})=∑_{j=1}ⁿ(f_{;h}^{j}⊛(u_{k}∘u_{j}))
=∑_{j=1}ⁿ(f_{;h}^{j}⊛(∑_{m=1}ⁿβ_{kj}^{m}⊛u_{m}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(f_{;h}^{j}β_{kj}^{m}⊛u_{m}))
=∑_{m=1}ⁿ(∑_{j=1}ⁿ(f_{;h}^{m}β_{km}^{j}⊛u_{j}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(f_{;h}^{m}β_{km}^{j}⊛u_{j})).          (∗2)
Therefore, ∑_{j=1}ⁿ(∑_{m=1}ⁿ((β_{kh}^{m}f_{;m}^{j}-β_{km}^{j}f_{;h}^{m})⊛u_{j}))=0, and, so:
∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}-β_{hm}^{j}f_{;k}^{m})=0 □

In this article, linear analysis is the study of linear functions of a linear variable, analogous to the study of complex functions of a complex variable.

A holomorphic space is a vector space $$\left ( \mathbf{V}, \circledast , F \right )_{v}$$ (where the elements of the vector space belong to the set $$\mathbf{V}$$, the elements of the field of the vector space belong to the field $$F$$, and $$\circledast$$ is the binary operation between the elements of $$F$$ and the elements of $$\mathbf{V}$$) within which a product $$\otimes$$ is defined, with $$\left ( x \otimes y \right ) \in{\mathbf{V}}$$ $$\forall{x,y}\in{\mathbf{V}}$$, such that:
anti-symmetry: (i) $$\forall{v,w}\in{\mathbf{V}}:v \otimes w = -w \otimes v $$
bilinearity: (ii) $$\forall{u,v,w}\in{\mathbf{V}}: \left ( u+v \right ) \otimes w = \left ( u \otimes w \right )+ \left ( v \otimes w \right ), u \otimes \left ( v+w \right ) = \left ( u \otimes v \right ) + \left ( u \otimes w \right ) $$
homogeneity: (iii) $$\forall{v,w}\in{\mathbf{V}},\alpha \in{F}: \left ( \alpha \circledast v \right ) \otimes w = \alpha \circledast \left ( v \otimes w \right ) $$.
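A familiar instance of these axioms (an illustration added here, not from the text) is $$\mathbb{R}^3$$ with the ordinary cross product, which is anti-symmetric, bilinear, and homogeneous:

```python
import numpy as np

# Random vectors in R^3 and a scalar from the field R
rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 3))
a = 2.5

anti  = np.allclose(np.cross(v, w), -np.cross(w, v))                      # (i)
bilin = np.allclose(np.cross(u + v, w), np.cross(u, w) + np.cross(v, w))  # (ii)
homog = np.allclose(np.cross(a * v, w), a * np.cross(v, w))               # (iii)
# all three checks hold
```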

The Weighted Matrix Product
The Weighted Matrix Product (Weighted Matrix Multiplication) is a generalization of ordinary matrix multiplication, in the following way.

Given a set of Weight Matrices $$ \mathbf{\Phi}_{\alpha}\equiv\left(\Phi_{\alpha ij}\right) $$, the Weighted Matrix Product of the matrix pair $$ (\mathbf A,\mathbf B) $$ is given by:
 * $$ \mathbf{A} \equiv (A_{ij})$$   ,     $$ \mathbf{B} \equiv (B_{ij}) $$
 * $$ \mathbf{A}\circ\mathbf{B} = (\sum_{\alpha=1}^{c(A)} \Phi_{\alpha ij}A_{i\alpha}B_{\alpha j}) $$

where: c(A) is the number of columns of $$ \mathbf{A}. $$
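A direct implementation of this definition can make the special cases explicit. The sketch below is illustrative: the function name `weighted_matmul` and the convention of stacking the weight matrices $$\mathbf{\Phi}_{\alpha}$$ along the first axis are choices made here, not part of the text.

```python
import numpy as np

def weighted_matmul(A, B, Phi):
    """Weighted matrix product: C[i, j] = sum_a Phi[a, i, j] * A[i, a] * B[a, j].

    A is m x p, B is p x n, and Phi stacks the p weight matrices,
    each m x n, into shape (p, m, n).
    """
    m, p = A.shape
    p2, n = B.shape
    assert p == p2 and Phi.shape == (p, m, n)
    return np.einsum('aij,ia,aj->ij', Phi, A, B)

# With every weight entry equal to 1 ("a sea of 1s"),
# this reduces to ordinary matrix multiplication A @ B.
```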

The number of Weight Matrices is the number of columns of the left operand = the number of rows of the right operand. The number of rows of the Weight Matrices is the number of rows of the left operand. The number of columns of the Weight Matrices is the number of columns of the right operand. The Weighted Matrix Product is defined only if the matrix operands are conformable in the ordinary sense. The resultant matrix has the number of rows of the left operand $$ \mathbf{A} $$, and the number of columns of the right operand $$ \mathbf{B} $$.

NOTE: Ordinary Matrix Multiplication is the special case of Weighted Matrix Multiplication where all the weight matrix entries are 1s. Ordinary Matrix Multiplication is Weighted Matrix Multiplication in a default "sea of 1s", the weight matrices formed out of the "sea" as necessary.

NOTE: The Weighted Matrix Product is not generally associative.

Weighted matrix multiplication may be expressed in terms of ordinary matrix multiplication, using matrices constructed from the constituent parts, as follows: for the m×p matrix $$ \mathbf {A} \equiv (a_{mp})$$ and the p×n matrix $$ \mathbf {B} \equiv (b_{pn})$$, define:

$$ \mathbf A_{i} = \begin{bmatrix} a_{1i} & 0 & \cdots & 0 \\ 0 & a_{2i} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & a_{mi} \end{bmatrix}  , \mathbf B_{j} = \begin{bmatrix} b_{j1} & 0 & \cdots & 0 \\ 0 & b_{j2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & b_{jn} \end{bmatrix} $$

then:
 * $$ \mathbf{A}\circ\mathbf{B} = (\sum_{\alpha=1}^{c(A)} \Phi_{\alpha ij}A_{i\alpha}B_{\alpha j}) = \sum_{\alpha=1}^{c(A)} (\Phi_{\alpha ij}A_{i\alpha}B_{\alpha j}) = \sum_{\alpha=1}^{c(A)} (A_{i\alpha}\Phi_{\alpha ij}B_{\alpha j}) = \sum_{\alpha=1}^{c(A)} \mathbf{A}_{\alpha}\mathbf{\Phi}_{\alpha}\mathbf{B}_{\alpha} $$

The Weighted Matrix Product is especially useful in developing matrix bases closed under a (not necessarily associative) product (algebras). As an example, consider the following developments. It is convenient (although not necessary) to begin with permutation matrices as the basis, since they are a known basis and about as simple as there is.

The Complex Plane

$$ \mathbf{u}^{2;0}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} , \mathbf{u}^{2;1}=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} $$
 * $$ \mathbf{u}^{2;i}\circ\mathbf{u}^{2;j} $$ is weighted matrix multiplication,

with weights: $$(\mathbf{\Phi}_{h} \equiv\mathbf{\phi}^{2;h}) $$

$$ \mathbf{\phi}^{2;0}=\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} , \mathbf{\phi}^{2;1}=\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} $$

then:

$$ \mathbf{u}_0^{2;0}=\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} , \mathbf{u}_1^{2;0}=\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} $$ $$ \mathbf{u}_0^{2;1}=\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} , \mathbf{u}_1^{2;1}=\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} $$

So:
 * $$ \mathbf{u}^{2;0}\circ\mathbf{u}^{2;0} = \sum_{h=0}^1 \mathbf{u}_h^{2;0}\mathbf{\phi}^{2;h}\mathbf{u}_h^{2;0} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \mathbf{u}^{2;0} $$
 * $$ \mathbf{u}^{2;0}\circ\mathbf{u}^{2;1} = \sum_{h=0}^1 \mathbf{u}_h^{2;0}\mathbf{\phi}^{2;h}\mathbf{u}_h^{2;1} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = \mathbf{u}^{2;1} $$
 * $$ \mathbf{u}^{2;1}\circ\mathbf{u}^{2;0} = \sum_{h=0}^1 \mathbf{u}_h^{2;1}\mathbf{\phi}^{2;h}\mathbf{u}_h^{2;0} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \mathbf{u}^{2;1} $$
 * $$ \mathbf{u}^{2;1}\circ\mathbf{u}^{2;1} = \sum_{h=0}^1 \mathbf{u}_h^{2;1}\mathbf{\phi}^{2;h}\mathbf{u}_h^{2;1} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = -\mathbf{u}^{2;0} $$

Thus is manifested a homomorphism between this and the complex plane.
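These four products can be reproduced numerically directly from the defining sum $$C_{ij} = \sum_{h}\Phi_{hij}A_{ih}B_{hj}$$ (a sketch; the helper name `weighted_matmul` is a choice made here):

```python
import numpy as np

def weighted_matmul(A, B, Phi):
    # C[i, j] = sum_h Phi[h, i, j] * A[i, h] * B[h, j]
    return np.einsum('hij,ih,hj->ij', Phi, A, B)

u0 = np.eye(2)                               # u^{2;0}
u1 = np.array([[0.0, 1.0], [1.0, 0.0]])      # u^{2;1}
phi = np.array([[[ 1.0, 1.0], [1.0, -1.0]],  # phi^{2;0}
                [[-1.0, 1.0], [1.0,  1.0]]]) # phi^{2;1}

# u0∘u0 = u0, u0∘u1 = u1∘u0 = u1, u1∘u1 = -u0: the table of 1 and i.
```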

Quaternions

$$ \mathbf{u}^{4;0}=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} , \mathbf{u}^{4;1}=\begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} , \mathbf{u}^{4;2}=\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} , \mathbf{u}^{4;3}=\begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} $$
 * $$ \mathbf{u}^{4;i}\circ\mathbf{u}^{4;j} $$ is weighted matrix multiplication,

with weights: $$(\mathbf{\Phi}_{h} \equiv\mathbf{\phi}^{4;h}) $$

$$ \mathbf{\phi}^{4;0}=\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \\ 1 & 1 & -1 & -1 \end{bmatrix} , \mathbf{\phi}^{4;1}=\begin{bmatrix} -1 & 1 & -1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ -1 & 1 & 1 & -1 \end{bmatrix} , \mathbf{\phi}^{4;2}=\begin{bmatrix} -1 & 1 & 1 & -1 \\ -1 & -1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \end{bmatrix} , \mathbf{\phi}^{4;3}=\begin{bmatrix} -1 & -1 & 1 & 1 \\ 1 & -1 & -1 & 1 \\ -1 & 1 & -1 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}. $$

then, computing the products as in the complex-plane example:

 * $$ \mathbf{u}^{4;1}\circ\mathbf{u}^{4;1} = \mathbf{u}^{4;2}\circ\mathbf{u}^{4;2} = \mathbf{u}^{4;3}\circ\mathbf{u}^{4;3} = -\mathbf{u}^{4;0} $$
 * $$ \mathbf{u}^{4;1}\circ\mathbf{u}^{4;2} = -\mathbf{u}^{4;2}\circ\mathbf{u}^{4;1} = \mathbf{u}^{4;3} $$
 * $$ \mathbf{u}^{4;2}\circ\mathbf{u}^{4;3} = -\mathbf{u}^{4;3}\circ\mathbf{u}^{4;2} = \mathbf{u}^{4;1} $$
 * $$ \mathbf{u}^{4;3}\circ\mathbf{u}^{4;1} = -\mathbf{u}^{4;1}\circ\mathbf{u}^{4;3} = \mathbf{u}^{4;2} $$

Thus is manifested a homomorphism between this and the space of quaternions.
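The quaternion table can likewise be checked numerically. In the sketch below (an illustration added here), the basis matrices are generated via $$\mathbf{u}^{4;a}_{ij} = 1$$ iff $$j = i \oplus a$$ (bitwise XOR), and the weight matrices are reconstructed from the requirement that the products reproduce the quaternion sign table, with $$\mathbf{u}^{4;1}, \mathbf{u}^{4;2}, \mathbf{u}^{4;3}$$ playing the roles of i, j, k; a few entries of the weights as printed above may be transcription variants of this reconstruction.

```python
import numpy as np

# Basis: Klein four-group permutation matrices, u[a][i][j] = 1 iff j == i ^ a.
u = np.array([[[1.0 if j == i ^ a else 0.0 for j in range(4)]
               for i in range(4)] for a in range(4)])

# Quaternion sign table: q_a q_b = sign[a][b] * q_(a ^ b),
# with q_0 = 1, q_1 = i, q_2 = j, q_3 = k.
sign = np.array([[1,  1,  1,  1],
                 [1, -1,  1, -1],
                 [1, -1, -1,  1],
                 [1,  1, -1, -1]])

# Reconstructed weights: phi[h][i][j] = sign[i ^ h][h ^ j].
phi = np.array([[[float(sign[i ^ h][h ^ j]) for j in range(4)]
                 for i in range(4)] for h in range(4)])

def weighted_matmul(A, B, Phi):
    # C[i, j] = sum_h Phi[h, i, j] * A[i, h] * B[h, j]
    return np.einsum('hij,ih,hj->ij', Phi, A, B)

# Every product u[a]∘u[b] equals sign[a][b] * u[a ^ b]: the quaternion table.
```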