Multilinear map

In linear algebra, a multilinear map is a function of several variables that is linear separately in each variable. More precisely, a multilinear map is a function


 * $$f\colon V_1 \times \cdots \times V_n \to W\text{,}$$

where $$V_1,\ldots,V_n$$ ($$n\in\mathbb Z_{\ge0}$$) and $$W$$ are vector spaces (or modules over a commutative ring), with the following property: for each $$i$$, if all of the variables but $$v_i$$ are held constant, then $$f(v_1, \ldots, v_i, \ldots, v_n)$$ is a linear function of $$v_i$$. One way to visualize this is to imagine two orthogonal vectors; if one of these vectors is scaled by a factor of 2 while the other remains unchanged, the cross product likewise scales by a factor of 2. If both are scaled by a factor of 2, the cross product scales by a factor of $$2^2 = 4$$.
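This scaling behaviour can be checked directly; the following is a minimal Python sketch with a hand-written `cross` helper (the helper names are illustrative, not from any library):

```python
def cross(u, v):
    """Cross product of two 3-vectors given as lists."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def scale(c, v):
    """Scalar multiple c*v."""
    return [c * x for x in v]

u, v = [1, 0, 0], [0, 1, 0]  # two orthogonal vectors

# Scaling one argument by 2 scales the result by 2 ...
assert cross(scale(2, u), v) == scale(2, cross(u, v))
# ... and scaling both arguments by 2 scales the result by 2^2 = 4.
assert cross(scale(2, u), scale(2, v)) == scale(4, cross(u, v))
```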

A multilinear map of one variable is a linear map, and of two variables is a bilinear map. More generally, for any nonnegative integer $$k$$, a multilinear map of k variables is called a k-linear map. If the codomain of a multilinear map is the field of scalars, it is called a multilinear form. Multilinear maps and multilinear forms are fundamental objects of study in multilinear algebra.

If all variables belong to the same space, one can consider symmetric, antisymmetric and alternating k-linear maps. The latter two (antisymmetric and alternating) coincide if the underlying ring (or field) has a characteristic different from two; if the characteristic is two, the former two (symmetric and antisymmetric) coincide instead.

Examples

 * Any bilinear map is a multilinear map. For example, any inner product on a $$\mathbb R$$-vector space is a multilinear map, as is the cross product of vectors in $$\mathbb{R}^3$$.
 * The determinant of a matrix is an alternating multilinear function of the columns (or rows) of a square matrix.
 * If $$F\colon \mathbb{R}^m \to \mathbb{R}^n$$ is a Ck function, then the $$k$$th derivative of $$F$$ at each point $$p$$ in its domain can be viewed as a symmetric $$k$$-linear function $$D^k\!F\colon \mathbb{R}^m\times\cdots\times\mathbb{R}^m \to \mathbb{R}^n$$.
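The determinant example can be checked numerically. The following Python sketch verifies linearity in a row and the alternating property for the 2 × 2 case (the helper `det2` and the chosen row values are illustrative):

```python
def det2(r1, r2):
    """2x2 determinant viewed as a function of the two rows."""
    return r1[0] * r2[1] - r1[1] * r2[0]

r1, r1p, r2 = [1, 2], [5, -1], [3, 4]
c = 7

# Linearity in the first row: det(c*r1 + r1', r2) = c*det(r1, r2) + det(r1', r2).
combined = [c * a + b for a, b in zip(r1, r1p)]
assert det2(combined, r2) == c * det2(r1, r2) + det2(r1p, r2)

# Alternating: equal rows give 0, and swapping rows flips the sign.
assert det2(r1, r1) == 0
assert det2(r2, r1) == -det2(r1, r2)
```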

Coordinate representation
Let


 * $$f\colon V_1 \times \cdots \times V_n \to W\text{,}$$

be a multilinear map between finite-dimensional vector spaces, where $$V_i\!$$ has dimension $$d_i\!$$, and $$W\!$$ has dimension $$d\!$$. If we choose a basis $$\{\textbf{e}_{i1},\ldots,\textbf{e}_{id_i}\}$$ for each $$V_i\!$$ and a basis $$\{\textbf{b}_1,\ldots,\textbf{b}_d\}$$ for $$W\!$$ (using bold for vectors), then we can define a collection of scalars $$A_{j_1\cdots j_n}^k$$ by


 * $$f(\textbf{e}_{1j_1},\ldots,\textbf{e}_{nj_n}) = A_{j_1\cdots j_n}^1\,\textbf{b}_1 + \cdots + A_{j_1\cdots j_n}^d\,\textbf{b}_d.$$

Then the scalars $$\{A_{j_1\cdots j_n}^k \mid 1\leq j_i\leq d_i, 1 \leq k \leq d\}$$ completely determine the multilinear function $$f\!$$. In particular, if


 * $$\textbf{v}_i = \sum_{j=1}^{d_i} v_{ij} \textbf{e}_{ij}\!$$

for $$1 \leq i \leq n\!$$, then


 * $$f(\textbf{v}_1,\ldots,\textbf{v}_n) = \sum_{j_1=1}^{d_1} \cdots \sum_{j_n=1}^{d_n} \sum_{k=1}^{d} A_{j_1\cdots j_n}^k v_{1j_1}\cdots v_{nj_n} \textbf{b}_k.$$
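The coordinate formula above can be sketched in Python. The helper `eval_multilinear` and its dictionary encoding of the scalars $$A_{j_1\cdots j_n}^k$$ (zero-based indices) are illustrative assumptions, not standard notation:

```python
from itertools import product

def eval_multilinear(A, vs, d_out):
    """A maps (j_1, ..., j_n, k) to the scalar A^k_{j_1...j_n} (zero-based);
    vs[i][j] is the j-th coordinate of the i-th argument vector.
    Returns the coordinates of f(v_1, ..., v_n) in the basis b_1, ..., b_d."""
    dims = [len(v) for v in vs]
    out = [0.0] * d_out
    for js in product(*(range(d) for d in dims)):
        coeff = 1.0
        for i, j in enumerate(js):
            coeff *= vs[i][j]
        for k in range(d_out):
            out[k] += A.get(js + (k,), 0.0) * coeff
    return out

# Example: the dot product on R^2 has A^1_{j_1 j_2} = 1 when j_1 = j_2, else 0.
A = {(0, 0, 0): 1.0, (1, 1, 0): 1.0}
assert eval_multilinear(A, [[1.0, 2.0], [3.0, 4.0]], 1) == [11.0]  # 1*3 + 2*4
```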

Example
Consider a trilinear function


 * $$g\colon \mathbb{R}^2 \times \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R},$$

where $$V_i = \mathbb{R}^2$$, $$d_i = 2$$ for $$i = 1,2,3$$, and $$W = \mathbb{R}$$, $$d = 1$$.

A basis for each $$V_i$$ is $$\{\textbf{e}_{i1},\textbf{e}_{i2}\} = \{\textbf{e}_{1}, \textbf{e}_{2}\} = \{(1,0), (0,1)\}.$$ Let


 * $$g(\textbf{e}_{1i},\textbf{e}_{2j},\textbf{e}_{3k}) = g(\textbf{e}_{i},\textbf{e}_{j},\textbf{e}_{k}) = A_{ijk},$$

where $$i,j,k \in \{1,2\}$$. In other words, the constant $$A_{i j k}$$ is a function value at one of the eight possible triples of basis vectors (since there are two choices for each of the three $$V_i$$), namely:

 * $$(\textbf{e}_1, \textbf{e}_1, \textbf{e}_1),\; (\textbf{e}_1, \textbf{e}_1, \textbf{e}_2),\; (\textbf{e}_1, \textbf{e}_2, \textbf{e}_1),\; (\textbf{e}_1, \textbf{e}_2, \textbf{e}_2),\; (\textbf{e}_2, \textbf{e}_1, \textbf{e}_1),\; (\textbf{e}_2, \textbf{e}_1, \textbf{e}_2),\; (\textbf{e}_2, \textbf{e}_2, \textbf{e}_1),\; (\textbf{e}_2, \textbf{e}_2, \textbf{e}_2).$$

Each vector $$\textbf{v}_i \in V_i = \mathbb{R}^2$$ can be expressed as a linear combination of the basis vectors


 * $$\textbf{v}_i = \sum_{j=1}^{2} v_{ij} \textbf{e}_{ij} = v_{i1} \textbf{e}_1 + v_{i2} \textbf{e}_2 = v_{i1} (1, 0) + v_{i2} (0, 1).$$

The function value at an arbitrary collection of three vectors $$\textbf{v}_i \in \mathbb{R}^2$$ can be expressed as
 * $$g(\textbf{v}_1,\textbf{v}_2, \textbf{v}_3) = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{2} A_{i j k} v_{1i} v_{2j} v_{3k},$$

or in expanded form as
 * $$\begin{align}
g((a,b),(c,d),(e,f)) ={}& ace\, g(\textbf{e}_1, \textbf{e}_1, \textbf{e}_1) + acf\, g(\textbf{e}_1, \textbf{e}_1, \textbf{e}_2) \\
&+ ade\, g(\textbf{e}_1, \textbf{e}_2, \textbf{e}_1) + adf\, g(\textbf{e}_1, \textbf{e}_2, \textbf{e}_2) \\
&+ bce\, g(\textbf{e}_2, \textbf{e}_1, \textbf{e}_1) + bcf\, g(\textbf{e}_2, \textbf{e}_1, \textbf{e}_2) \\
&+ bde\, g(\textbf{e}_2, \textbf{e}_2, \textbf{e}_1) + bdf\, g(\textbf{e}_2, \textbf{e}_2, \textbf{e}_2).
\end{align}$$
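The expanded form can be verified numerically. In the Python sketch below, the coefficient table `A` holds arbitrarily chosen values for $$A_{ijk}$$ (an assumption for illustration, not taken from the text):

```python
# A[i][j][k] stands (zero-based) for A_{(i+1)(j+1)(k+1)} = g(e_{i+1}, e_{j+1}, e_{k+1}).
A = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]

def g(v1, v2, v3):
    """Evaluate the trilinear map from its coefficient table."""
    return sum(A[i][j][k] * v1[i] * v2[j] * v3[k]
               for i in range(2) for j in range(2) for k in range(2))

a, b, c, d, e, f = 1, 2, 3, 4, 5, 6
expanded = (a*c*e * A[0][0][0] + a*c*f * A[0][0][1]
            + a*d*e * A[0][1][0] + a*d*f * A[0][1][1]
            + b*c*e * A[1][0][0] + b*c*f * A[1][0][1]
            + b*d*e * A[1][1][0] + b*d*f * A[1][1][1])
assert g((a, b), (c, d), (e, f)) == expanded
```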

Relation to tensor products
There is a natural one-to-one correspondence between multilinear maps


 * $$f\colon V_1 \times \cdots \times V_n \to W\text{,}$$

and linear maps


 * $$F\colon V_1 \otimes \cdots \otimes V_n \to W\text{,}$$

where $$V_1 \otimes \cdots \otimes V_n\!$$ denotes the tensor product of $$V_1,\ldots,V_n$$. The relation between the functions $$f$$ and $$F$$ is given by the formula


 * $$f(v_1,\ldots,v_n)=F(v_1\otimes \cdots \otimes v_n).$$
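For $$n = 2$$ this correspondence can be made concrete in a short Python sketch, taking $$f$$ to be the standard dot product on $$\mathbb{R}^2$$ and representing elementary tensors by flattened coordinate lists (an illustrative encoding, not the only possible one):

```python
def tensor(u, v):
    """Coordinates of u (x) v in the basis e_i (x) e_j, flattened row-major."""
    return [ui * vj for ui in u for vj in v]

def f(u, v):
    """The bilinear map: the standard dot product on R^2."""
    return u[0] * v[0] + u[1] * v[1]

# F is determined by its values on the basis tensors e_i (x) e_j, i.e. by f(e_i, e_j).
basis = ([1, 0], [0, 1])
F_coeffs = [f(ei, ej) for ei in basis for ej in basis]

def F(w):
    """The corresponding linear map on the flattened tensor coordinates."""
    return sum(c * wi for c, wi in zip(F_coeffs, w))

u, v = [1, 2], [3, 4]
assert f(u, v) == F(tensor(u, v))  # f(v_1, v_2) = F(v_1 (x) v_2)
```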

Multilinear functions on n × n matrices
One can consider multilinear functions on an $$n \times n$$ matrix over a commutative ring $$K$$ with identity as functions of the rows (or, equivalently, the columns) of the matrix. Let $$A$$ be such a matrix and let $$a_i$$, $$1 \leq i \leq n$$, be the rows of $$A$$. Then the multilinear function $$D$$ can be written as


 * $$D(A) = D(a_{1},\ldots,a_{n}),$$

satisfying


 * $$D(a_{1},\ldots,c a_{i} + a_{i}',\ldots,a_{n}) = c D(a_{1},\ldots,a_{i},\ldots,a_{n}) + D(a_{1},\ldots,a_{i}',\ldots,a_{n}).$$

If we let $$\hat{e}_j$$ denote the $$j$$th row of the identity matrix, we can express each row $$a_i$$ as the sum


 * $$a_{i} = \sum_{j=1}^n A(i,j)\hat{e}_{j}.$$

Using the multilinearity of $$D$$, we rewrite $$D(A)$$ as



 * $$D(A) = D\left(\sum_{j=1}^n A(1,j)\hat{e}_{j}, a_2, \ldots, a_n\right) = \sum_{j=1}^n A(1,j)\, D(\hat{e}_{j},a_2,\ldots,a_n).$$

Continuing this substitution for each row $$a_i$$, $$1 \leq i \leq n$$, we get



 * $$D(A) = \sum_{1\le k_1 \le n} \cdots \sum_{1\le k_n \le n} A(1,k_{1})\,A(2,k_{2})\cdots A(n,k_{n})\, D(\hat{e}_{k_{1}},\dots,\hat{e}_{k_{n}}).$$

Therefore, $$D(A)$$ is uniquely determined by the values of $$D$$ on the tuples of identity rows $$(\hat{e}_{k_{1}},\dots,\hat{e}_{k_{n}})$$.
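This expansion can be sketched in Python. If the value of $$D$$ on a tuple of identity rows is taken to be the sign of the corresponding permutation (zero on a repeated index), the formula reproduces the determinant (helper names are illustrative):

```python
from itertools import product

def perm_sign(ks):
    """Sign of the index tuple: 0 on a repeated index, otherwise (-1)^inversions."""
    if len(set(ks)) < len(ks):
        return 0
    sign = 1
    for i in range(len(ks)):
        for j in range(i + 1, len(ks)):
            if ks[i] > ks[j]:
                sign = -sign
    return sign

def D(A):
    """Expand D(A) over tuples of identity rows, as in the formula above,
    taking D(e_{k_1}, ..., e_{k_n}) to be perm_sign(k_1, ..., k_n)."""
    n = len(A)
    total = 0
    for ks in product(range(n), repeat=n):
        term = perm_sign(ks)
        for i, k in enumerate(ks):
            term *= A[i][k]
        total += term
    return total

assert D([[2, 1], [5, 3]]) == 2*3 - 1*5  # matches the 2x2 determinant
```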

Example
In the case of 2 × 2 matrices, we get



 * $$D(A) = A_{1,1}A_{2,1}D(\hat{e}_1,\hat{e}_1) + A_{1,1}A_{2,2}D(\hat{e}_1,\hat{e}_2) + A_{1,2}A_{2,1}D(\hat{e}_2,\hat{e}_1) + A_{1,2}A_{2,2}D(\hat{e}_2,\hat{e}_2),$$

where $$\hat{e}_1 = [1,0]$$ and $$\hat{e}_2 = [0,1]$$. If we restrict $$D$$ to be an alternating function, then $$D(\hat{e}_1,\hat{e}_1) = D(\hat{e}_2,\hat{e}_2) = 0$$ and $$D(\hat{e}_2,\hat{e}_1) = -D(\hat{e}_1,\hat{e}_2) = -D(I)$$. Letting $$D(I) = 1$$, we get the determinant function on 2 × 2 matrices:


 * $$ D(A) = A_{1,1}A_{2,2} - A_{1,2}A_{2,1} .$$

Properties

 * A multilinear map has a value of zero whenever one of its arguments is zero: if $$v_i = 0$$, then by linearity in the $$i$$th argument, $$f(v_1,\ldots,0,\ldots,v_n) = f(v_1,\ldots,0\cdot 0,\ldots,v_n) = 0 \cdot f(v_1,\ldots,0,\ldots,v_n) = 0$$.