Bilinear form

In mathematics, a bilinear form is a bilinear map $V × V → K$ on a vector space $V$ (the elements of which are called vectors) over a field K (the elements of which are called scalars). In other words, a bilinear form is a function $B : V × V → K$ that is linear in each argument separately:
 * $B(u + v, w) = B(u, w) + B(v, w)$     and      $B(λu, v) = λB(u, v)$
 * $B(u, v + w) = B(u, v) + B(u, w)$     and      $B(u, λv) = λB(u, v)$

The dot product on $\mathbb{R}^n$ is an example of a bilinear form.
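As a concrete check, the two defining identities can be verified numerically for the dot product on $\mathbb{R}^3$. This is a minimal sketch; the function name `dot` and the sample vectors are illustrative, not part of the text.

```python
# Checking the bilinearity identities for the dot product on R^3.

def dot(u, v):
    """Standard dot product, a bilinear form B(u, v) = sum_i u_i v_i."""
    return sum(ui * vi for ui, vi in zip(u, v))

u = [1.0, 2.0, 3.0]
v = [4.0, -1.0, 0.5]
w = [0.0, 2.5, -2.0]
lam = 3.0

# Linearity in the first argument: B(u + v, w) = B(u, w) + B(v, w)
assert abs(dot([a + b for a, b in zip(u, v)], w)
           - (dot(u, w) + dot(v, w))) < 1e-12

# Homogeneity in the second argument: B(u, λv) = λ B(u, v)
assert abs(dot(u, [lam * b for b in v]) - lam * dot(u, v)) < 1e-12
```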

The definition of a bilinear form can be extended to include modules over a ring, with linear maps replaced by module homomorphisms.

When $K$ is the field of complex numbers $C$, one is often more interested in sesquilinear forms, which are similar to bilinear forms but are conjugate linear in one argument.

Coordinate representation
Let $V$ be an $n$-dimensional vector space with basis $\{e_{1}, …, e_{n}\}$.

The $n × n$ matrix $A$, defined by $A_{ij} = B(e_{i}, e_{j})$, is called the matrix of the bilinear form on the basis $\{e_{1}, …, e_{n}\}$.

If the $n × 1$ matrix $x$ represents a vector $x$ with respect to this basis, and similarly, the $n × 1$ matrix $y$ represents another vector $y$, then: $$B(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\textsf{T} A\mathbf{y} = \sum_{i,j=1}^n x_i A_{ij} y_j. $$
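The formula $B(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\textsf{T} A\mathbf{y}$ can be sketched directly in code; the matrix $A$ and coordinate vectors below are illustrative examples, not taken from the text.

```python
# Evaluating a bilinear form from its matrix: B(x, y) = x^T A y.

def bilinear(A, x, y):
    """Compute sum_{i,j} x_i A[i][j] y_j, i.e. x^T A y."""
    return sum(x[i] * A[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

A = [[2, 1],
     [1, 3]]      # matrix of B on a chosen basis
x = [1, 2]
y = [4, -1]

value = bilinear(A, x, y)   # x^T A y

# Since this A is symmetric, the form is symmetric: B(x, y) = B(y, x).
assert bilinear(A, x, y) == bilinear(A, y, x)
```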

A bilinear form has different matrices on different bases. However, the matrices of a bilinear form on different bases are all congruent. More precisely, if $\{f_{1}, …, f_{n}\}$ is another basis of $V$, then $$\mathbf{f}_j=\sum_{i=1}^n S_{i,j}\mathbf{e}_i,$$ where the $S_{i,j}$ form an invertible matrix $S$. Then, the matrix of the bilinear form on the new basis is $S^\textsf{T}AS$.
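The congruence rule $A' = S^\textsf{T}AS$ can be checked numerically: evaluating the form on new-basis coordinates must agree with converting the coordinates back to the old basis. A minimal sketch with illustrative 2×2 matrices:

```python
# Change of basis for a bilinear form: A' = S^T A S.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def bilinear(A, x, y):
    return sum(x[i] * A[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

A = [[2, 1], [1, 3]]   # matrix of B on the old basis e_1, e_2
S = [[1, 1], [0, 1]]   # invertible change-of-basis matrix

A_new = matmul(transpose(S), matmul(A, S))   # S^T A S

# New-basis coordinates agree with old-basis coordinates x_old = S x_new.
x_new, y_new = [1, 2], [3, -1]
x_old = [sum(S[i][k] * x_new[k] for k in range(2)) for i in range(2)]
y_old = [sum(S[i][k] * y_new[k] for k in range(2)) for i in range(2)]
assert bilinear(A_new, x_new, y_new) == bilinear(A, x_old, y_old)
```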

Non-degenerate bilinear forms
Every bilinear form $B$ on $V$ defines a pair of linear maps from $V$ to its dual space $V^{∗}$. Define $B_{1}, B_{2}: V → V^{∗}$ by $$B_1(v)(w) = B(v, w), \qquad B_2(v)(w) = B(w, v).$$

This is often denoted as $$B_1(v) = B(v, {\cdot}), \qquad B_2(v) = B({\cdot}, v),$$

where the dot ( ⋅ ) indicates the slot into which the argument for the resulting linear functional is to be placed (see Currying).

For a finite-dimensional vector space $V$, if either of $B_{1}$ or $B_{2}$ is an isomorphism, then both are, and the bilinear form $B$ is said to be nondegenerate. More concretely, for a finite-dimensional vector space, non-degenerate means that every non-zero element pairs non-trivially with some other element:
 * $B(x,y)=0$ for all $y \in V$ implies that $x = 0$, and
 * $B(x,y)=0$ for all $x \in V$ implies that $y = 0$.

The corresponding notion for a module over a commutative ring is that a bilinear form is unimodular if $V → V^{∗}$ is an isomorphism. Given a finitely generated module over a commutative ring, the pairing may be injective (hence "nondegenerate" in the above sense) but not unimodular. For example, over the integers, the pairing $B(x, y) = 2xy$ is nondegenerate but not unimodular, as the induced map from $V = Z$ to $V^{∗} = Z$ is multiplication by 2.
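The distinction can be illustrated with a small numerical check over a finite range of integers; the pairing $B(x, y) = 2xy$ is the one from the text, while the range of sample values is an arbitrary illustration.

```python
# Over Z, the pairing B(x, y) = 2xy is nondegenerate but not unimodular.

def B(x, y):
    return 2 * x * y

# Nondegenerate on a sample range: every nonzero x pairs non-trivially
# with some y (here y = 1 already works).
assert all(B(x, 1) != 0 for x in range(-10, 11) if x != 0)

# Not unimodular: the induced map x -> 2x is injective but misses the
# odd integers, e.g. no integer x satisfies 2x == 1.
assert all(2 * x != 1 for x in range(-10, 11))
```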

If $V$ is finite-dimensional then one can identify $V$ with its double dual $V^{∗∗}$. One can then show that $B_{2}$ is the transpose of the linear map $B_{1}$ (if $V$ is infinite-dimensional then $B_{2}$ is the transpose of $B_{1}$ restricted to the image of $V$ in $V^{∗∗}$). Given $B$ one can define the transpose of $B$ to be the bilinear form given by $${}^{\text{t}}B(v, w) = B(w, v).$$

The left radical and right radical of the form $B$ are the kernels of $B_{1}$ and $B_{2}$ respectively; they are the vectors orthogonal to the whole space on the left and on the right.

If $V$ is finite-dimensional then the rank of $B_{1}$ is equal to the rank of $B_{2}$. If this number is equal to $dim(V)$ then $B_{1}$ and $B_{2}$ are linear isomorphisms from $V$ to $V^{∗}$. In this case $B$ is nondegenerate. By the rank–nullity theorem, this is equivalent to the condition that the left and equivalently right radicals be trivial. For finite-dimensional spaces, this is often taken as the definition of nondegeneracy: $B$ is nondegenerate if $B(v, w) = 0$ for all $w$ implies $v = 0$.

Given any linear map $A : V → V^{∗}$ one can obtain a bilinear form $B$ on $V$ via $$B(v, w) = A(v)(w).$$

This form will be nondegenerate if and only if $A$ is an isomorphism.

If $V$ is finite-dimensional then, relative to some basis for $V$, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero. Likewise, a nondegenerate form is one for which the determinant of the associated matrix is non-zero (the matrix is non-singular). These statements are independent of the chosen basis. For a module over a commutative ring, a unimodular form is one for which the determinant of the associated matrix is a unit (for example 1), hence the term; note that a form whose matrix determinant is non-zero but not a unit will be nondegenerate but not unimodular, for example $B(x, y) = 2xy$ over the integers.
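The determinant test is easy to demonstrate in the 2×2 case; the two matrices below are illustrative examples, not from the text.

```python
# Degeneracy test via the determinant of the associated matrix.

def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

nondegenerate = [[2, 1], [1, 3]]   # det = 5, nonzero
degenerate    = [[1, 2], [2, 4]]   # det = 0: second row is twice the first

assert det2(nondegenerate) != 0
assert det2(degenerate) == 0

# Over the integers, det = 5 is nonzero but not a unit, so this form is
# nondegenerate yet not unimodular; over a field any nonzero det suffices.
```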

Symmetric, skew-symmetric, and alternating forms
We define a bilinear form to be
 * symmetric if $B(v, w) = B(w, v)$ for all $v$, $w$ in $V$;
 * alternating if $B(v, v) = 0$ for all $v$ in $V$;
 * skew-symmetric or antisymmetric if $B(v, w) = -B(w, v)$ for all $v$, $w$ in $V$;
 * Proposition: Every alternating form is skew-symmetric.
 * Proof: This can be seen by expanding $B(v + w, v + w)$.
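The expansion in the proof can be written out explicitly, using only bilinearity and the alternating property:

```latex
\begin{aligned}
0 &= B(v + w,\, v + w) \\
  &= B(v, v) + B(v, w) + B(w, v) + B(w, w) && \text{(bilinearity in each slot)} \\
  &= B(v, w) + B(w, v) && \text{(alternating: } B(v,v) = B(w,w) = 0\text{)},
\end{aligned}
```

hence $B(v, w) = -B(w, v)$, which is skew-symmetry.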

If the characteristic of $K$ is not 2 then the converse is also true: every skew-symmetric form is alternating. However, if $char(K) = 2$ then a skew-symmetric form is the same as a symmetric form and there exist symmetric/skew-symmetric forms that are not alternating.

A bilinear form is symmetric (respectively skew-symmetric) if and only if its coordinate matrix (relative to any basis) is symmetric (respectively skew-symmetric). A bilinear form is alternating if and only if its coordinate matrix is skew-symmetric and the diagonal entries are all zero (which follows from skew-symmetry when $char(K) ≠ 2$).

A bilinear form is symmetric if and only if the maps $B_{1}, B_{2}: V → V^{∗}$ are equal, and skew-symmetric if and only if they are negatives of one another. If $char(K) ≠ 2$ then one can decompose a bilinear form into a symmetric and a skew-symmetric part as follows $$B^{+} = \tfrac{1}{2} (B + {}^{\text{t}}B) \qquad  B^{-} = \tfrac{1}{2} (B - {}^{\text{t}}B) ,$$ where ${}^{\text{t}}B$ is the transpose of $B$ (defined above).
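In coordinates, this decomposition acts on the matrix of the form. A minimal sketch with an illustrative matrix, using exact fractions so the division by 2 loses nothing:

```python
# Symmetric / skew-symmetric decomposition of a bilinear form's matrix:
# B+ = (A + A^T)/2 is symmetric, B- = (A - A^T)/2 is skew-symmetric.
from fractions import Fraction

A = [[1, 4], [2, 3]]   # illustrative, deliberately non-symmetric
n = len(A)

B_plus  = [[Fraction(A[i][j] + A[j][i], 2) for j in range(n)] for i in range(n)]
B_minus = [[Fraction(A[i][j] - A[j][i], 2) for j in range(n)] for i in range(n)]

# B+ is symmetric, B- is skew-symmetric, and B+ + B- = A.
assert all(B_plus[i][j] == B_plus[j][i] for i in range(n) for j in range(n))
assert all(B_minus[i][j] == -B_minus[j][i] for i in range(n) for j in range(n))
assert all(B_plus[i][j] + B_minus[i][j] == A[i][j]
           for i in range(n) for j in range(n))
```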

Reflexive bilinear forms and orthogonal vectors
A bilinear form $B : V × V → K$ is reflexive if and only if it is either symmetric or alternating. In the absence of reflexivity we have to distinguish left and right orthogonality. In a reflexive space the left and right radicals agree and are termed the kernel or the radical of the bilinear form: the subspace of all vectors orthogonal to every other vector. A vector $v$, with matrix representation $x$, is in the radical of a bilinear form with matrix representation $A$, if and only if $Ax = 0 ⇔ x^{T}A = 0$. The radical is always a subspace of $V$. It is trivial if and only if the matrix $A$ is nonsingular, and thus if and only if the bilinear form is nondegenerate.

Suppose $W$ is a subspace. Define the orthogonal complement $$ W^{\perp} = \left\{\mathbf{v} \mid B(\mathbf{v}, \mathbf{w}) = 0 \text{ for all } \mathbf{w} \in W\right\} .$$

For a non-degenerate form on a finite-dimensional space, the map $V/W → W^{\perp}$ is bijective, and the dimension of $W^{\perp}$ is $dim(V) − dim(W)$.

Bounded and elliptic bilinear forms
Definition: A bilinear form on a normed vector space $(V, ‖⋅‖)$ is bounded, if there is a constant $C$ such that for all $\mathbf{u}, \mathbf{v} \in V$, $$ B ( \mathbf{u}, \mathbf{v}) \le C \left\| \mathbf{u} \right\| \left\|\mathbf{v} \right\| .$$

Definition: A bilinear form on a normed vector space $(V, ‖⋅‖)$ is elliptic, or coercive, if there is a constant $c > 0$ such that for all $\mathbf{u} \in V$, $$ B ( \mathbf{u}, \mathbf{u}) \ge c \left\| \mathbf{u} \right\| ^2 .$$
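The dot product on $\mathbb{R}^n$ satisfies both definitions, with $C = 1$ (by the Cauchy–Schwarz inequality) and $c = 1$ (since $B(\mathbf{u}, \mathbf{u}) = ‖\mathbf{u}‖^2$). A sketch on a few illustrative sample vectors:

```python
# The dot product is bounded (C = 1) and coercive (c = 1).
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

samples = [([1.0, 2.0], [3.0, -1.0]), ([0.5, -2.0], [4.0, 4.0])]

for u, v in samples:
    # Boundedness: |B(u, v)| <= C ||u|| ||v|| with C = 1 (Cauchy-Schwarz).
    assert abs(dot(u, v)) <= norm(u) * norm(v) + 1e-12
    # Coercivity: B(u, u) >= c ||u||^2 with c = 1 (equality holds here).
    assert dot(u, u) >= norm(u) ** 2 - 1e-12
```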

Associated quadratic form
For any bilinear form $B : V × V → K$, there exists an associated quadratic form $Q : V → K$ defined by $Q : V → K : v ↦ B(v, v)$.

When $char(K) ≠ 2$, the quadratic form $Q$ is determined by the symmetric part of the bilinear form $B$ and is independent of the antisymmetric part. In this case there is a one-to-one correspondence between the symmetric part of the bilinear form and the quadratic form, and it makes sense to speak of the symmetric bilinear form associated with a quadratic form.
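This correspondence is realized by the polarization identity $B_{\text{sym}}(v, w) = \tfrac{1}{2}(Q(v+w) - Q(v) - Q(w))$, valid when $char(K) ≠ 2$. A sketch with an illustrative non-symmetric matrix, showing that only the symmetric part is recovered:

```python
# Recovering the symmetric part of B from Q(v) = B(v, v) by polarization.
from fractions import Fraction

A = [[1, 4], [2, 3]]   # illustrative, deliberately non-symmetric

def B(x, y):
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def Q(v):
    return B(v, v)

def B_sym(x, y):
    """Polarization identity; yields (B(x, y) + B(y, x)) / 2."""
    s = [a + b for a, b in zip(x, y)]
    return Fraction(Q(s) - Q(x) - Q(y), 2)

x, y = [1, 0], [0, 1]
# The antisymmetric part of B is invisible to Q: only the symmetric
# part comes back out.
assert B_sym(x, y) == Fraction(B(x, y) + B(y, x), 2)
```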

When $char(K) = 2$ and $dim V > 1$, this correspondence between quadratic forms and symmetric bilinear forms breaks down.

Relation to tensor products
By the universal property of the tensor product, there is a canonical correspondence between bilinear forms on $V$ and linear maps $V ⊗ V → K$. If $B$ is a bilinear form on $V$ the corresponding linear map is given by $$v ⊗ w ↦ B(v, w).$$

In the other direction, if $F : V ⊗ V → K$ is a linear map the corresponding bilinear form is given by composing $F$ with the bilinear map $V × V → V ⊗ V$ that sends $(v, w)$ to $v ⊗ w$.

The set of all linear maps $V ⊗ V → K$ is the dual space of $V ⊗ V$, so bilinear forms may be thought of as elements of $(V ⊗ V)^{∗}$, which (when $V$ is finite-dimensional) is canonically isomorphic to $V^{∗} ⊗ V^{∗}$.

Likewise, symmetric bilinear forms may be thought of as elements of $(Sym^{2}V)^{*}$ (dual of the second symmetric power of $V$) and alternating bilinear forms as elements of $(Λ^{2}V)^{∗} ≃ Λ^{2}V^{∗}$ (the second exterior power of $V^{∗}$). If $charK ≠ 2$, then $(Sym^{2}V)^{*} ≃ Sym^{2}(V^{∗})$.

Pairs of distinct vector spaces
Much of the theory is available for a bilinear mapping from two vector spaces over the same base field to that field $$B : V × W → K.$$

Here we still have induced linear mappings from $V$ to $W^{∗}$, and from $W$ to $V^{∗}$. It may happen that these mappings are isomorphisms; assuming finite dimensions, if one is an isomorphism, the other must be. When this occurs, $B$ is said to be a perfect pairing.

In finite dimensions, this is equivalent to the pairing being nondegenerate (the spaces necessarily having the same dimensions). For modules (instead of vector spaces), just as a nondegenerate form is weaker than a unimodular form, a nondegenerate pairing is a weaker notion than a perfect pairing. A pairing can be nondegenerate without being a perfect pairing, for instance $Z × Z → Z$ via $(x, y) ↦ 2xy$ is nondegenerate, but induces multiplication by 2 on the map $Z → Z^{∗}$.

Terminology varies in coverage of bilinear forms. For example, F. Reese Harvey discusses "eight types of inner product". To define them he uses diagonal matrices $A_{ij}$ having only +1 or −1 for non-zero elements. Some of the "inner products" are symplectic forms and some are sesquilinear forms or Hermitian forms. Rather than a general field $K$, the instances with real numbers $R$, complex numbers $C$, and quaternions $H$ are spelled out. The bilinear form $$\sum_{k=1}^p x_k y_k - \sum_{k=p+1}^n x_k y_k $$ is called the real symmetric case and labeled $R(p, q)$, where $p + q = n$. Then he articulates the connection to traditional terminology: "Some of the real symmetric cases are very important. The positive definite case R(n, 0) is called Euclidean space, while the case of a single minus, R(n−1, 1) is called Lorentzian space. If n = 4, then Lorentzian space is also called Minkowski space or Minkowski spacetime. The special case R(p, p) will be referred to as the split-case."
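The $R(p, q)$ form above translates directly into code; the function name and sample vectors are illustrative.

```python
# The real symmetric form of signature (p, q):
# B(x, y) = sum_{k <= p} x_k y_k  -  sum_{k > p} x_k y_k.

def signature_form(p, x, y):
    """Bilinear form of type R(p, q) on R^n, with n = len(x) and q = n - p."""
    return (sum(x[k] * y[k] for k in range(p))
            - sum(x[k] * y[k] for k in range(p, len(x))))

# R(3, 0): the Euclidean dot product on R^3.
assert signature_form(3, [1, 2, 3], [4, 5, 6]) == 32

# R(3, 1): the Lorentzian form on R^4 (Minkowski spacetime when n = 4);
# vectors can have negative "length squared".
x = [1, 0, 0, 2]
assert signature_form(3, x, x) == -3
```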

General modules
Given a ring $R$ and a right $R$-module $M$ and its dual module $M^{∗}$, a mapping $B : M^{∗} × M → R$ is called a bilinear form if $$B(u + v, x) = B(u, x) + B(v, x)$$ $$B(u, x + y) = B(u, x) + B(u, y)$$ $$B(\alpha u, x \beta) = \alpha B(u, x) \beta$$

for all $u, v ∈ M^{∗}$, all $x, y ∈ M$ and all $\alpha, \beta ∈ R$.

The mapping $\langle \cdot, \cdot \rangle : M^{∗} × M → R : (u, x) ↦ u(x)$ is known as the natural pairing, also called the canonical bilinear form on $M^{∗} × M$.

A linear map $S : M^{∗} → M^{∗}$ induces the bilinear form $(u, x) ↦ \langle S(u), x \rangle$, and a linear map $T : M → M$ induces the bilinear form $(u, x) ↦ \langle u, T(x) \rangle$.

Conversely, a bilinear form $B : M^{∗} × M → R$ induces the $R$-linear maps $S : M^{∗} → M^{∗} : u ↦ B(u, \cdot)$ and $T : M → M^{∗∗} : x ↦ B(\cdot, x)$. Here, $M^{∗∗}$ denotes the double dual of $M$.