User:Teply/sandbox

Multilinear functions
A multilinear function of k multivector variables is a function that is separately linear in each of its arguments.


 * $$\begin{align}f(\alpha_1 X_1 + \beta_1 Y_1, \cdots, X_k) &= \alpha_1 f(X_1, \cdots , X_k) + \beta_1 f(Y_1, \cdots , X_k) \\ &\cdots \\ f(X_1, \cdots, \alpha_k X_k + \beta_k Y_k) &= \alpha_k f(X_1, \cdots , X_k) + \beta_k f(X_1, \cdots , Y_k)\end{align}$$

This class of functions can operate on all multivectors, not just vectors, but the class is so broad that it is usually more useful to consider only those functions that extend the linear algebra of vectors in a natural way. As before, we therefore usually consider multilinear functions that are outermorphisms, i.e. those functions that obey the following relations:


 * $$\begin{align}\mathsf{f}(X_1 \wedge Y_1, \cdots, X_k) &= \mathsf{f}(X_1, \cdots , X_k) \wedge \mathsf{f}(Y_1, \cdots , X_k) \\ &\cdots \\ \mathsf{f}(X_1, \cdots, X_k \wedge Y_k) &= \mathsf{f}(X_1, \cdots , X_k) \wedge \mathsf{f}(X_1, \cdots , Y_k) \\ \mathsf{f}(\alpha_1 X_1 + \beta_1 Y_1, \cdots, X_k) &= \alpha_1 \mathsf{f}(X_1, \cdots , X_k) + \beta_1 \mathsf{f}(Y_1, \cdots , X_k) \\ &\cdots \\ \mathsf{f}(X_1, \cdots, \alpha_k X_k + \beta_k Y_k) &= \alpha_k \mathsf{f}(X_1, \cdots , X_k) + \beta_k \mathsf{f}(X_1, \cdots , Y_k)\end{align}$$

The geometric, inner, outer, and commutator products are all examples of bilinear functions. The geometric product of k variables is a k-linear function.
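Multilinearity can be checked numerically. The sketch below (an illustration, not part of the algebra above) uses the vector cross product on R³ as a familiar example of a bilinear function and verifies linearity in the first argument:

```python
import numpy as np

# The cross product is a bilinear function of two vector arguments:
# separately linear in each slot (illustrative check only).
rng = np.random.default_rng(0)
x1, y1, x2 = rng.standard_normal((3, 3))
alpha, beta = 2.0, -0.5

lhs = np.cross(alpha * x1 + beta * y1, x2)
rhs = alpha * np.cross(x1, x2) + beta * np.cross(y1, x2)
assert np.allclose(lhs, rhs)  # linearity in the first argument
```

The same check with the roles of the arguments swapped verifies linearity in the second slot.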

Geometric interpretation
Basic geometric concepts involving vectors are easily represented in geometric algebra. Among these are projections, rejections, reflections, rotations, and n-parallelotopes.

Projection and rejection


For any vector a and any invertible vector m,
 * $$\, a = amm^{-1} = (a\cdot m + a \wedge m)m^{-1} = a_{\| m} + a_{\perp m} $$

where the projection of a onto m (or the parallel part) is
 * $$\, a_{\| m} = (a\cdot m)m^{-1} $$

and the rejection of a onto m (or the perpendicular part) is
 * $$\, a_{\perp m} = a - a_{\| m} = (a\wedge m)m^{-1} .$$
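For ordinary Euclidean vectors, $$m^{-1} = m/|m|^2$$, so the formulas above reduce to the familiar dot-product expressions. A minimal numpy sketch (the helper names `project` and `reject` are illustrative, not standard):

```python
import numpy as np

def project(a, m):
    """Projection of a onto m: (a . m) m^{-1}, using m^{-1} = m / |m|^2
    for an invertible Euclidean vector m."""
    return (a @ m) * m / (m @ m)

def reject(a, m):
    """Rejection of a from m: the perpendicular part a - a_parallel."""
    return a - project(a, m)

a = np.array([3.0, 4.0, 0.0])
m = np.array([1.0, 0.0, 0.0])
assert np.allclose(project(a, m), [3.0, 0.0, 0.0])
assert np.allclose(reject(a, m), [0.0, 4.0, 0.0])
assert np.isclose(reject(a, m) @ m, 0.0)          # rejection is orthogonal to m
assert np.allclose(project(a, m) + reject(a, m), a)  # a = parallel + perpendicular
```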

Using the concept of a k-blade B as representing a subspace of V and every multivector ultimately being expressed in terms of vectors, this generalizes to projection of a general multivector onto any invertible k-blade B as
 * $$\, \mathcal{P}_B (A) = (A \;\big\lrcorner\; B^{-1}) \;\big\lrcorner\; B $$

with the rejection being defined as
 * $$\, \mathcal{P}_B^\perp (A) = A - \mathcal{P}_B (A) .$$

The projection and rejection generalize to null blades B by replacing the inverse B−1 with the pseudoinverse B+ with respect to the contractive product. For non-null blades the two definitions of the projection coincide. For null blades B, the definition of the projection given here, with the first contraction rather than the second being onto the pseudoinverse, should be used, since only then is the result necessarily in the subspace represented by B. The projection generalizes through linearity to general multivectors A. The projection is not linear in B and does not generalize to objects B that are not blades.

Reflections
The definition of a reflection occurs in two forms in the literature. Several authors work with reflection along a vector (negating only the component parallel to the specifying vector, or reflection in the hypersurface orthogonal to that vector), while others work with reflection on a vector (negating all vector components except that parallel to the specifying vector). Either may be used to build general versor operations, but the latter has the advantage that it extends to the algebra in a simpler and algebraically more regular fashion.

Reflection along a vector


The reflection of a vector a along a vector m, or equivalently in the hyperplane orthogonal to m, is the same as negating the component of a parallel to m. The result of the reflection is
 * $$\! a' = {-a_{\| m} + a_{\perp m}} = {-(a \cdot m)m^{-1} + (a \wedge m)m^{-1}} = {(-m \cdot a - m \wedge a)m^{-1}} = -mam^{-1} $$
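In Euclidean coordinates this reflection is $$a' = a - 2\,(a \cdot m)\,m/|m|^2$$, i.e. the parallel component is negated and the perpendicular component is kept. A short numpy sketch (the helper name `reflect_along` is illustrative):

```python
import numpy as np

def reflect_along(a, m):
    """Reflect a along m, i.e. in the hyperplane orthogonal to m:
    negate the component of a parallel to m.  This agrees with the
    geometric-algebra expression -m a m^{-1} for Euclidean vectors."""
    return a - 2.0 * (a @ m) / (m @ m) * m

a = np.array([3.0, 4.0])
m = np.array([1.0, 0.0])
assert np.allclose(reflect_along(a, m), [-3.0, 4.0])
# A reflection is an involution and preserves length:
assert np.allclose(reflect_along(reflect_along(a, m), m), a)
assert np.isclose(np.linalg.norm(reflect_along(a, m)), np.linalg.norm(a))
```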

This is not the most general operation that may be regarded as a reflection when the dimension n ≥ 4. A general reflection may be expressed as the composite of any odd number of single-axis reflections. Thus, a general reflection of a vector may be written
 * $$\! a \mapsto -MaM^{-1} $$

where
 * $$\! M = pq \ldots r$$ and $$\! M^{-1} = (pq \ldots r)^{-1} = r^{-1} \ldots q^{-1}p^{-1} .$$

If we define the reflection along a non-null vector m of the product of vectors as the reflection of every vector in the product along the same vector, we get for any product of an odd number of vectors that, by way of example,
 * $$ (abc)' = a'b'c' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1}) = -ma(m^{-1}m)b(m^{-1}m)cm^{-1} = -mabcm^{-1} \,$$

and for the product of an even number of vectors that
 * $$ (abcd)' = a'b'c'd' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1})(-mdm^{-1}) = mabcdm^{-1} .\,$$

Using the concept of every multivector ultimately being expressed in terms of vectors, the reflection of a general multivector A using any reflection versor M may be written
 * $$\, A \mapsto M\alpha(A)M^{-1} ,$$

where α is the automorphism of reflection through the origin of the vector space (v ↦ −v) extended through multilinearity to the whole algebra.

Reflection on a vector


The result of reflecting a vector a on another vector n is to negate the rejection of a. It is akin to reflecting the vector a through the origin, except that the projection of a onto n is not reflected. Such an operation is described by
 * $$\, a \mapsto nan^{-1} .$$

Repeating this operation results in a general versor operation (including both rotations and reflections) on a general multivector A being expressed as
 * $$\, A \mapsto NAN^{-1} .$$

This allows a general definition of any versor N (including both reflections and rotors) as an object that can be expressed as a geometric product of any number of non-null 1-vectors. Such a versor can be applied in a uniform sandwich product as above irrespective of whether it is of even (a proper rotation) or odd grade (an improper rotation, i.e. a general reflection). The set of all versors with the geometric product as the group operation constitutes the Clifford group of the Clifford algebra Cℓp,q(R).

Rotations


If we have a product of vectors $$R = a_1a_2 \cdots a_r$$ then we denote the reverse as
 * $$R^{\dagger}= (a_1a_2 \cdots a_r)^{\dagger} = a_r \cdots a_2a_1$$.

As an example, assume that $$ R = ab $$; then, since the square of a vector is a scalar and so commutes with everything, we get
 * $$RR^{\dagger} = abba = ab^2a =a^2b^2 = R^{\dagger}R$$.

If we scale $R$ so that $RR^{\dagger} = 1$, then
 * $$(RvR^{\dagger})^2 = Rv^{2}R^{\dagger}= v^2RR^{\dagger} = v^2 $$

so $$RvR^{\dagger}$$ leaves the length of $$v$$ unchanged. We can also show that
 * $$(Rv_1R^{\dagger}) \cdot (Rv_2R^{\dagger}) = v_1 \cdot v_2$$

so the transformation $RvR^{\dagger}$ preserves both lengths and angles. It can therefore be identified as a rotation or rotoreflection; $R$ is called a rotor if it is a proper rotation (as it is if it can be expressed as a product of an even number of vectors) and is an instance of what is known in GA as a versor (presumably for historical reasons).

There is a general method for rotating a vector that involves forming a multivector of the form $$ R = e^{-\frac{B \theta}{2}} $$, which produces a rotation through the angle $$ \theta $$ in the plane, and with the orientation, defined by the unit bivector $$ B $$.
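This rotor construction can be demonstrated in the smallest nontrivial case. The sketch below (illustrative only; not a full GA library) hand-codes the geometric product of Cℓ2,0(R), representing a multivector by its coefficients on the basis $$\{1, e_1, e_2, e_{12}\}$$, and applies the sandwich product $$RvR^{\dagger}$$ with $$R = e^{-e_{12}\theta/2}$$:

```python
import math

# A multivector in Cl(2,0) as coefficients (scalar, e1, e2, e12).
def gp(a, b):
    """Geometric product in Cl(2,0), from e1^2 = e2^2 = 1, e1e2 = -e2e1."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a0*b1 + a1*b0 - a2*b3 + a3*b2,
            a0*b2 + a1*b3 + a2*b0 - a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def reverse(a):
    """Reverse: in 2D it just flips the sign of the bivector part."""
    a0, a1, a2, a3 = a
    return (a0, a1, a2, -a3)

def rotor(theta):
    """R = exp(-e12 theta/2) = cos(theta/2) - sin(theta/2) e12."""
    return (math.cos(theta / 2), 0.0, 0.0, -math.sin(theta / 2))

def rotate(v, theta):
    """Apply the sandwich product R v R^dagger to a vector v = (x, y)."""
    R = rotor(theta)
    out = gp(gp(R, (0.0, v[0], v[1], 0.0)), reverse(R))
    return (out[1], out[2])

x, y = rotate((1.0, 0.0), math.pi / 2)        # rotate e1 by 90 degrees
assert abs(x) < 1e-12 and abs(y - 1.0) < 1e-12  # result is e2
```

The sandwich product of a vector is again a vector (its scalar and bivector coefficients vanish), and the rotation is counterclockwise in the orientation of $$e_1 \wedge e_2$$.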

Rotors are a generalization of quaternions to n-dimensional spaces.

For more about reflections, rotations and "sandwiching" products like $RvR^{\dagger}$ see Plane of rotation.

Hypervolume of an n-parallelotope spanned by n vectors
For vectors $$ a $$ and $$ b $$ spanning a parallelogram we have
 * $$ a \wedge b = ((a \wedge b) b^{-1}) b = a_{\perp b} b $$

with the result that the magnitude of $$ a \wedge b$$ is the product of the "altitude" $$|a_{\perp b}|$$ and the "base" $$|b|$$ of the parallelogram, that is, its area.
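This area interpretation can be checked numerically. In 3D the magnitude of $$a \wedge b$$ equals the norm of the cross product $$a \times b$$, which the sketch below (illustrative only) compares against base times altitude:

```python
import numpy as np

a = np.array([2.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 0.0])

# "Altitude" of the parallelogram: the rejection of a from b.
altitude = a - (a @ b) / (b @ b) * b

# In 3D, |a ^ b| equals |a x b|, the norm of the cross product.
area_wedge = np.linalg.norm(np.cross(a, b))
area_base_times_height = np.linalg.norm(b) * np.linalg.norm(altitude)
assert np.isclose(area_wedge, area_base_times_height)
```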

Similar interpretations are true for any number of vectors spanning an n-dimensional parallelotope; the outer product of vectors a1, a2, ... an, that is $$\bigwedge_{i=1}^n a_i $$, has a magnitude equal to the volume of the n-parallelotope. An n-vector does not necessarily have the shape of a parallelotope; this is merely a convenient visualization. It could be any shape, although its volume equals that of the parallelotope.

Linear functions
An important class of functions of multivectors are the linear functions mapping multivectors to multivectors. The geometric algebra of an n-dimensional vector space is spanned by $$2^n$$ standard basis elements. If a multivector in this basis is represented by a $$2^n \times 1$$ column matrix, then all linear transformations of the multivector can be written as the matrix multiplication of a $$2^n \times 2^n$$ matrix on the column, just as in the entire theory of linear algebra in $$2^n$$ dimensions.

Such a broad class is often too general. A more restricted class of linear functions deals only with those that are grade-preserving. These are the linear functions that map scalars to scalars, vectors to vectors, bivectors to bivectors, etc. In matrix representation, the grade-preserving linear functions are block diagonal matrices, where each r-grade block is of size $$\binom nr \times \binom nr$$.

Often a linear transformation from vectors to vectors is already of known interest. There is no unique way to generalize these transformations to the entire geometric algebra without further restriction because any $$2^n \times 2^n$$ matrix with a $$\binom n1 \times \binom n1$$ block that agrees with the vector transformation is an acceptable linear function of the entire geometric algebra. We therefore seek a new rule, motivated by geometric interpretation, for generalizing these linear transformations of vectors in a standard way. A natural choice is that of the outermorphism of the linear transformation because it extends the concepts of reflection and rotation in a straightforward way.
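The outermorphism extension can be seen concretely in two dimensions, where the only nontrivial case is the bivector grade: a linear map f on R² sends $$e_1 \wedge e_2$$ to $$\mathsf{f}(e_1) \wedge \mathsf{f}(e_2) = (\det f)\, e_1 \wedge e_2$$. A small numpy sketch (the helper `wedge2`, which returns the single $$e_1 \wedge e_2$$ coefficient of a wedge of two 2D vectors, is illustrative):

```python
import numpy as np

def wedge2(u, v):
    """The e1 ^ e2 coefficient of the wedge of two 2D vectors."""
    return u[0] * v[1] - u[1] * v[0]

# A linear map on R^2 given by its matrix.
f = np.array([[2.0, 1.0],
              [0.0, 3.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Outermorphism on the bivector grade: f(e1) ^ f(e2) = det(f) e1 ^ e2.
assert np.isclose(wedge2(f @ e1, f @ e2), np.linalg.det(f) * wedge2(e1, e2))
```

In this basis the outermorphism is the block-diagonal matrix diag(1, f, det f) acting on the grades 0, 1, and 2 respectively.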