Unisolvent functions

In mathematics, a set of $n$ functions $f_1, f_2, \dots, f_n$ is unisolvent (meaning "uniquely solvable") on a domain $\Omega$ if the vectors


 * $$\begin{bmatrix}f_1(x_1) \\ f_1(x_2) \\ \vdots \\ f_1(x_n)\end{bmatrix}, \begin{bmatrix}f_2(x_1) \\ f_2(x_2) \\ \vdots \\ f_2(x_n)\end{bmatrix}, \dots, \begin{bmatrix}f_n(x_1) \\ f_n(x_2) \\ \vdots \\ f_n(x_n)\end{bmatrix}$$

are linearly independent for any choice of $n$ distinct points $x_1, x_2, \dots, x_n$ in $\Omega$. Equivalently, the collection is unisolvent if the matrix $F$ with entries $F_{ij} = f_i(x_j)$ has nonzero determinant, $\det(F) \neq 0$, for every choice of distinct points $x_j$ in $\Omega$. Unisolvency is a property of vector spaces, not just of particular sets of functions: a vector space of functions of dimension $n$ is unisolvent if any basis (equivalently, any linearly independent set of $n$ functions) is unisolvent as a set of functions. This is because any two bases are related by an invertible matrix (the change-of-basis matrix), so one basis is unisolvent if and only if every other basis is.
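The determinant condition suggests a simple numerical sanity check. The sketch below (illustrative code, not from the article; function names are my own) samples random point sets and evaluates $\det(F)$ for the set $1, x, x^2$. Note that random sampling can falsify unisolvency by finding a singular matrix, but can never prove it, since the condition must hold for every choice of distinct points.

```python
import numpy as np

def collocation_matrix(funcs, points):
    """Matrix F with entries F[i, j] = funcs[j](points[i])."""
    return np.array([[fj(xi) for fj in funcs] for xi in points])

# The set 1, x, x^2 (unisolvent on any interval; see the examples below).
funcs = [lambda x: 1.0, lambda x: x, lambda x: x**2]

# For these functions, F is a Vandermonde matrix, so det(F) is the product
# of the differences (x_j - x_i) and is nonzero whenever the points are
# distinct.
rng = np.random.default_rng(0)
smallest = min(
    abs(np.linalg.det(collocation_matrix(funcs, rng.uniform(-1, 1, size=3))))
    for _ in range(100)
)
print(smallest)  # positive: no singular sample was found
```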

Unisolvent systems of functions are widely used in interpolation because they guarantee a unique solution to the interpolation problem. The polynomials of degree at most $d$, which form a vector space of dimension $d + 1$, are unisolvent by the unisolvence theorem.

Examples

 * $1, x, x^2$ is unisolvent on any interval by the unisolvence theorem
 * $1, x^2$ is unisolvent on $[0, 1]$, but not unisolvent on $[-1, 1]$
 * $1, \cos(x), \cos(2x), \dots, \cos(nx), \sin(x), \sin(2x), \dots, \sin(nx)$ is unisolvent on $[-\pi, \pi]$
 * Unisolvent functions are used in linear inverse problems.
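The second example can be verified by hand: for the pair $1, x^2$ the determinant has a closed form. A minimal sketch (plain Python; the function name is illustrative):

```python
# For the pair {1, x^2}, the matrix F = [[1, x1^2], [1, x2^2]] has
# det(F) = x2^2 - x1^2, which vanishes exactly when x2 = ±x1.
def det_f(x1, x2):
    return x2**2 - x1**2

print(det_f(0.25, 0.75))  # 0.5  -> distinct points in [0, 1] always work
print(det_f(-0.5, 0.5))   # 0.0  -> distinct points in [-1, 1] can fail
```

On $[0, 1]$ distinct points always have distinct squares, while $[-1, 1]$ contains pairs $\pm x$ with equal squares, which is why the interval matters.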

Unisolvence in the finite element method
When using "simple" functions to approximate an unknown function, such as in the finite element method, it is useful to consider a set of functionals $$\{f_i\}_{i=1}^n$$ that act on a finite-dimensional vector space $$V_h$$ of functions, usually polynomials. Often, the functionals are given by evaluation at points in Euclidean space or some subset of it.

For example, let $$V_h = \big\{ p(x) = \sum_{k=0}^n p_k x^k \big\}$$ be the space of univariate polynomials of degree $$n$$ or less, and let $$f_k(p) := p\Big(\frac{k}{n}\Big)$$ for $$0\leq k \leq n$$ be defined by evaluation at the $$n+1$$ equidistant points $$\tfrac{k}{n}$$ on the unit interval $$[0,1]$$. In this context, the unisolvence of $$V_h$$ with respect to $$\{f_k\}_{k=0}^n$$ means that $$\{f_k\}_{k=0}^n$$ is a basis for $$V_h^*$$, the dual space of $$V_h$$. Equivalently, and perhaps more intuitively, unisolvence here means that given any set of values $$\{c_k\}_{k=0}^n$$, there exists a unique polynomial $$q(x) \in V_h$$ such that $$f_k(q) = q( \tfrac{k}{n} ) = c_k$$. Results of this type are widely applied in polynomial interpolation: given any function $$\phi \in C([0,1])$$, by letting $$c_k = \phi( \tfrac{k}{n})$$, we can find a polynomial $$q\in V_h$$ that interpolates $$\phi$$ at each of the $$n+1$$ points: $$\phi(\tfrac{k}{n}) = q(\tfrac{k}{n}), \ \forall k \in \{0,1,\dots,n\}.$$
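The interpolation problem above can be solved directly by inverting the Vandermonde system. A sketch assuming $n = 4$ and the illustrative choice $\phi = \cos$ (any continuous function on $[0, 1]$ would do):

```python
import numpy as np

n = 4
nodes = np.array([k / n for k in range(n + 1)])  # equidistant points k/n
phi = np.cos                                     # any phi in C([0, 1])

# Unisolvence of V_h: the Vandermonde matrix V[i, j] = nodes[i]**j is
# invertible, so the coefficients of q are uniquely determined by the
# values phi(k/n).
V = np.vander(nodes, N=n + 1, increasing=True)
coeffs = np.linalg.solve(V, phi(nodes))
q = np.polynomial.Polynomial(coeffs)

# q interpolates phi at every node k/n.
print(np.allclose(q(nodes), phi(nodes)))  # True
```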

Dimensions
Systems of unisolvent functions are much more common in one dimension than in higher dimensions. In dimension $d \geq 2$ (with $\Omega \subset \mathbb{R}^d$), the functions $f_1, f_2, \dots, f_n$ (for $n \geq 2$) cannot be unisolvent on $\Omega$ if there exists a single open set on which they are all continuous. To see this, move the points $x_1$ and $x_2$ along continuous paths in the open set until they have switched positions, in such a way that the two paths never pass through each other or through any of the other points $x_i$; such paths exist precisely because $d \geq 2$. The determinant of the resulting system (with $x_1$ and $x_2$ swapped) is the negative of the determinant of the initial system. Since the functions $f_i$ are continuous, the determinant varies continuously along the paths, so the intermediate value theorem implies that some intermediate configuration has determinant zero; hence the functions cannot be unisolvent.
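The swap argument can be illustrated numerically. In the sketch below (illustrative code, not from the article) the continuous functions are $f_1(x, y) = x$ and $f_2(x, y) = y$, and the two points trade places along opposite halves of a circle, so they swap without ever meeting:

```python
import numpy as np

f = [lambda p: p[0], lambda p: p[1]]  # f1(x, y) = x, f2(x, y) = y

def det_at(t):
    """det of F[i, j] = f[j](point_i(t)) as the points swap, t in [0, 1]."""
    theta = np.pi * t
    c = np.array([0.5, 0.5])                                # circle centre
    a = c + 0.5 * np.array([np.cos(theta), np.sin(theta)])  # first point
    b = c - 0.5 * np.array([np.cos(theta), np.sin(theta)])  # antipodal point
    F = np.array([[fj(a) for fj in f], [fj(b) for fj in f]])
    return np.linalg.det(F)

# The endpoint determinants have opposite signs, so by the intermediate
# value theorem the determinant must vanish for some t in (0, 1).
print(det_at(0.0), det_at(1.0))   # approximately +0.5 and -0.5
print(abs(det_at(0.25)) < 1e-9)   # True: here the zero occurs at t = 1/4
```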