Lagrange polynomial



In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data.

Given a data set of coordinate pairs $$(x_j, y_j)$$ with $$0 \leq j \leq k,$$ the $$x_j$$ are called nodes and the $$y_j$$ are called values. The Lagrange polynomial $$L(x)$$ has degree $\leq k$ and assumes each value at the corresponding node, $$L(x_j) = y_j.$$

Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler.

Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration, Shamir's secret sharing scheme in cryptography, and Reed–Solomon error correction in coding theory.

For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.

Definition
Given a set of $k + 1$ nodes $$\{x_0, x_1, \ldots, x_k\}$$, which must all be distinct, $$x_j \neq x_m$$ for indices $$j \neq m$$, the Lagrange basis for polynomials of degree $\leq k$  for those nodes is the set of polynomials $\{\ell_0(x), \ell_1(x), \ldots, \ell_k(x)\}$  each of degree $k$  which take values $\ell_j(x_m) = 0$  if $m \neq j$  and $\ell_j(x_j) = 1$. Using the Kronecker delta this can be written $\ell_j(x_m) = \delta_{jm}.$ Each basis polynomial can be explicitly described by the product:

$$\begin{aligned} \ell_j(x) &= \frac{(x-x_0)}{(x_j-x_0)} \cdots \frac{(x-x_{j-1})}{(x_j-x_{j - 1})} \frac{(x-x_{j+1})}{(x_j-x_{j+1})} \cdots \frac{(x-x_k)}{(x_j-x_k)} \\[8mu] &= \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m} \vphantom\Bigg|. \end{aligned}$$

Notice that the numerator $\prod_{m \neq j}(x - x_m)$ has $k$  roots at the nodes $\{x_m\}_{m \neq j}$  while the denominator $\prod_{m \neq j}(x_j - x_m)$  scales the resulting polynomial so that $\ell_j(x_j) = 1.$

The Lagrange interpolating polynomial for those nodes through the corresponding values $$\{y_0, y_1, \ldots, y_k\}$$ is the linear combination:

$$L(x) = \sum_{j=0}^{k} y_j \ell_j(x).$$

Each basis polynomial has degree $k$, so the sum $L(x)$ has degree $\leq k$ , and it interpolates the data because $L(x_m) = \sum_{j=0}^{k} y_j \ell_j(x_m) = \sum_{j=0}^{k} y_j \delta_{mj} = y_m.$
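The definition translates directly into code. The following sketch (the function name `lagrange_eval` is my own) evaluates $L(x)$ by computing each basis polynomial from its product formula, at a cost of $\mathcal O(k^2)$ per evaluation point:

```python
def lagrange_eval(x, xs, ys):
    """Evaluate L(x) = sum_j y_j * l_j(x) directly from the product
    formula for the basis polynomials; O(k^2) per evaluation point."""
    total = 0.0
    for j, xj in enumerate(xs):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)   # factor of l_j(x)
        total += ys[j] * basis
    return total
```

With the nodes $\{1, 2, 3\}$ and values $\{1, 4, 9\}$ from the example below, this reproduces $x^2$ at any evaluation point.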

The interpolating polynomial is unique. Proof: assume the polynomial $M(x)$ of degree $\leq k$  interpolates the data. Then the difference $M(x) - L(x)$ is zero at $k + 1$  distinct nodes $\{x_0, x_1, \ldots, x_k\}.$  But the only polynomial of degree $\leq k$  with more than $k$  roots is the constant zero function, so $M(x) - L(x) = 0,$  or $M(x) = L(x).$

Barycentric form
Each Lagrange basis polynomial $\ell_j(x)$ can be rewritten as the product of three parts, a function $\ell(x) = \prod_m (x - x_m)$  common to every basis polynomial, a node-specific constant $w_j = \prod_{m\neq j}(x_j - x_m)^{-1}$  (called the barycentric weight), and a part representing the displacement from $x_j$  to $x$ :

$$\ell_j(x) = \ell(x) \dfrac{w_j}{x - x_j}$$

By factoring $\ell(x)$ out from the sum, we can write the Lagrange polynomial in the so-called first barycentric form:


 * $$L(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}y_j.$$

If the weights $$w_j$$ have been pre-computed, this requires only $$\mathcal O(k)$$ operations compared to $$\mathcal O(k^2)$$ for evaluating each Lagrange basis polynomial $$\ell_j(x)$$ individually.

The barycentric interpolation formula can also easily be updated to incorporate a new node $$x_{k+1}$$ by dividing each of the $$w_j$$, $$j=0 \dots k$$ by $$(x_j - x_{k+1})$$ and constructing the new $$w_{k+1}$$ as above.
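The weight computation and this incremental update can be sketched as follows (the names `barycentric_weights` and `add_node` are illustrative, not from any library):

```python
def barycentric_weights(xs):
    """w_j = 1 / prod_{m != j} (x_j - x_m)."""
    ws = []
    for j, xj in enumerate(xs):
        prod = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                prod *= xj - xm
        ws.append(1.0 / prod)
    return ws

def add_node(xs, ws, x_new):
    """Incorporate a new node: divide each old w_j by (x_j - x_new),
    then compute w_new from the product formula."""
    ws = [w / (xj - x_new) for w, xj in zip(ws, xs)]
    prod = 1.0
    for xj in xs:
        prod *= x_new - xj
    return xs + [x_new], ws + [1.0 / prod]
```

The update costs $\mathcal O(k)$ per new node, versus $\mathcal O(k^2)$ for recomputing every weight from scratch.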

For any $x,$ $\sum_{j=0}^k \ell_j(x) = 1$  because the constant function $g(x) = 1$  is the unique polynomial of degree $$\leq k$$ interpolating the data $\{(x_0, 1), (x_1, 1), \ldots, (x_k, 1) \}.$  We can thus further simplify the barycentric formula by dividing by $g(x),$ writing $$L(x) = L(x) / g(x)\colon$$


 * $$\begin{aligned} L(x) &= \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}y_j \Bigg/ \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j} \\[10mu] &= \sum_{j=0}^k \frac{w_j}{x-x_j}y_j \Bigg/ \sum_{j=0}^k \frac{w_j}{x-x_j}. \end{aligned}$$

This is called the second form or true form of the barycentric interpolation formula.

This second form has advantages in computational cost and accuracy. It avoids evaluation of $$\ell(x)$$, and the work to compute each term $$w_j/(x-x_j)$$ in the denominator has already been done in computing $$\bigl(w_j/(x-x_j)\bigr)y_j$$, so computing the sum in the denominator costs only $k$ addition operations. For evaluation points $x$  close to one of the nodes $x_j$, catastrophic cancellation would ordinarily be a problem for the value $(x-x_j)$; however, this quantity appears in both numerator and denominator and the two cancel, leaving good relative accuracy in the final result.

Using this formula to evaluate $$L(x)$$ at one of the nodes $$x_j$$ will result in the indeterminate $$\infty y_j/\infty$$; computer implementations must replace such results by $$L(x_j) = y_j.$$
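A minimal implementation of the second barycentric form, including the node check that the paragraph above calls for (the function name is my own):

```python
def barycentric_eval(x, xs, ws, ys):
    """Second (true) barycentric form.  At a node the formula is the
    indeterminate inf*y_j/inf, so return the stored value y_j instead."""
    num = den = 0.0
    for xj, wj, yj in zip(xs, ws, ys):
        if x == xj:
            return yj
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den
```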

Each Lagrange basis polynomial can also be written in barycentric form:



 * $$\ell_j(x) = \frac{w_j}{x-x_j} \Bigg/ \sum_{m=0}^k \frac{w_m}{x-x_m}.$$

A perspective from linear algebra
Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial $L(x) = \sum_{j=0}^k x^j m_j$, we must invert the Vandermonde matrix $$(x_i)^j$$ to solve $$L(x_i) = y_i$$ for the coefficients $$m_j$$ of $$L(x)$$. By choosing a better basis, the Lagrange basis, $L(x) = \sum_{j=0}^k l_j(x) y_j$, we merely get the identity matrix, $\delta_{ij}$, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.
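This observation can be checked numerically: evaluating each Lagrange basis polynomial at each node produces the identity matrix, so no system needs to be solved (a small sketch with illustrative names):

```python
def lagrange_basis(j, x, xs):
    """Evaluate the j-th Lagrange basis polynomial l_j at x."""
    p = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            p *= (x - xm) / (xs[j] - xm)
    return p

xs = [1.0, 2.0, 3.0]
# Entry (i, j) of the interpolation matrix in the Lagrange basis is
# l_j(x_i) = delta_ij, so the matrix is the identity -- its own inverse.
matrix = [[lagrange_basis(j, xi, xs) for j in range(len(xs))] for xi in xs]
```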

This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear polynomials.

Furthermore, when the interpolation order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.

Example
We wish to interpolate $$f(x) = x^2$$ over the domain $$1 \leq x \leq 3$$ at the three nodes $\{1,\, 2,\, 3\}$:



 * $$\begin{align} x_0 & = 1, & & & y_0 = f(x_0) & = 1, \\[3mu] x_1 & = 2, & & & y_1 = f(x_1) & = 4, \\[3mu] x_2 & = 3, & & & y_2 = f(x_2) & = 9. \end{align}$$

The node polynomial $$\ell$$ is
 * $$\ell(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6.$$

The barycentric weights are
 * $$\begin{align} w_0 &= (1-2)^{-1}(1-3)^{-1} = \tfrac12, \\[3mu] w_1 &= (2-1)^{-1}(2-3)^{-1} = -1, \\[3mu] w_2 &= (3-1)^{-1}(3-2)^{-1} = \tfrac12. \end{align}$$

The Lagrange basis polynomials are


 * $$\begin{align} \ell_0(x) &= \frac{x - 2}{1 - 2}\cdot\frac{x - 3}{1 - 3} = \tfrac12x^2 - \tfrac52x + 3, \\[5mu] \ell_1(x) &= \frac{x - 1}{2 - 1}\cdot\frac{x - 3}{2 - 3} = -x^2 + 4x - 3, \\[5mu] \ell_2(x) &= \frac{x - 1}{3 - 1}\cdot\frac{x - 2}{3 - 2} = \tfrac12x^2 - \tfrac32x + 1. \end{align}$$

The Lagrange interpolating polynomial is:
 * $$\begin{align} L(x) &= 1\cdot\frac{x - 2}{1 - 2}\cdot\frac{x - 3}{1 - 3} + 4\cdot\frac{x - 1}{2 - 1}\cdot\frac{x - 3}{2 - 3} + 9\cdot\frac{x - 1}{3 - 1}\cdot\frac{x - 2}{3 - 2} \\[6mu] &= x^2. \end{align}$$

In (second) barycentric form,



 * $$L(x) = \frac {\displaystyle \sum_{j=0}^2 \frac{w_j}{x-x_j}y_j} {\displaystyle \sum_{j=0}^2 \frac{w_j}{x-x_j}} = \frac {\displaystyle \frac{\tfrac12}{x - 1} + \frac{-4}{x - 2} + \frac{\tfrac92}{x - 3}} {\displaystyle \frac{\tfrac12}{x - 1} + \frac{-1}{x - 2} + \frac{\tfrac12}{x - 3}}.$$
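A quick numerical check of this worked example, using the barycentric weights computed above (all names are illustrative):

```python
xs = [1.0, 2.0, 3.0]
ws = [0.5, -1.0, 0.5]   # barycentric weights from the example
ys = [1.0, 4.0, 9.0]

def L(x):
    """Second barycentric form for the example data; should equal x**2."""
    if x in xs:                      # formula is indeterminate at a node
        return ys[xs.index(x)]
    terms = [w / (x - xj) for w, xj in zip(ws, xs)]
    return sum(t * y for t, y in zip(terms, ys)) / sum(terms)
```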

Remainder in Lagrange interpolation formula
When interpolating a given function $f$ by a polynomial of degree at most $k$ at the nodes $$x_0, \ldots, x_k,$$ we get the remainder $$R(x) = f(x) - L(x),$$ which can be expressed as


 * $$ R(x) = f[x_0,\ldots,x_k,x] \ell(x) = \ell(x) \frac{f^{(k+1)}(\xi)}{(k+1)!}, \quad \quad x_0 < \xi < x_k,$$

where $$f[x_0,\ldots,x_k,x]$$ is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in the complex domain as


 * $$R(x) = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t-x)(t-x_0) \cdots (t-x_k)} dt = \frac{\ell(x)}{2\pi i} \int_C \frac{f(t)}{(t-x)\ell(t)} dt.$$

The remainder can be bounded as


 * $$|R(x)| \leq \frac{(x_k-x_0)^{k+1}}{(k+1)!}\max_{x_0 \leq \xi \leq x_k} |f^{(k+1)}(\xi)|. $$
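A sketch checking this bound empirically for $f(x) = \sin x$ on three nodes (the choice of $f$ and nodes is my own illustration; here $k = 2$, $f'''(x) = -\cos x$, so $\max|f'''| \leq 1$):

```python
import math

def lagrange_eval(x, xs, ys):
    """Direct Lagrange evaluation (O(k^2) per point)."""
    total = 0.0
    for j, xj in enumerate(xs):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)
        total += ys[j] * basis
    return total

# Interpolate sin at three nodes; k = 2 and max|f'''| = max|cos| <= 1,
# so |R(x)| <= (x_k - x_0)^3 / 3!  everywhere on [x_0, x_k].
xs = [0.0, math.pi / 4, math.pi / 2]
ys = [math.sin(v) for v in xs]
bound = (xs[-1] - xs[0]) ** 3 / math.factorial(3)
worst = max(abs(math.sin(x) - lagrange_eval(x, xs, ys))
            for x in [i * (math.pi / 2) / 100 for i in range(101)])
```

The observed worst error on a grid of sample points should be well under the bound, which is quite loose.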

Derivation
Clearly, $$R(x)$$ is zero at the nodes. To find $$R(x)$$ at a point $$x_p,$$ define a new function $$F(x)=R(x)-\tilde{R}(x)=f(x)-L(x)-\tilde{R}(x)$$ and choose $\tilde{R}(x)=C\cdot\prod_{i=0}^k(x-x_i),$ where $$C$$ is the constant we are required to determine for a given $$x_p$$. We choose $$C$$ so that $$F(x)$$ has $$k+2$$ zeroes (at all nodes and at $$x_p$$) between $$x_0$$ and $$x_k$$ (including endpoints). Assuming that $$f(x)$$ is $$(k+1)$$-times differentiable, and since $$L(x)$$ and $$\tilde{R}(x)$$ are polynomials and therefore infinitely differentiable, $$F(x)$$ is $$(k+1)$$-times differentiable. By Rolle's theorem, $$F^{(1)}(x)$$ has $$k+1$$ zeroes, $$F^{(2)}(x)$$ has $$k$$ zeroes, and so on, until $$F^{(k+1)}$$ has one zero, say $$\xi,$$ with $$x_0<\xi<x_k$$. Explicitly writing out $$F^{(k+1)}(\xi)$$:


 * $$F^{(k+1)}(\xi)=f^{(k+1)}(\xi)-L^{(k+1)}(\xi)-\tilde{R}^{(k+1)}(\xi)$$


 * $$L^{(k+1)}=0,\tilde{R}^{(k+1)}=C\cdot(k+1)!$$ (Because the highest power of $$x$$ in $$\tilde{R}(x)$$ is $$k+1$$)


 * $$0=f^{(k+1)}(\xi)-C\cdot(k+1)!$$

The equation can be rearranged as


 * $$C=\frac{f^{(k+1)}(\xi)}{(k+1)!}$$

Since $$F(x_p) = 0,$$ we have $$R(x_p)=\tilde{R}(x_p) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\prod_{i=0}^k(x_p-x_i).$$

Derivatives
The $d$th derivative of a Lagrange interpolating polynomial can be written in terms of the derivatives of the basis polynomials,


 * $$L^{(d)}(x) := \sum_{j=0}^{k} y_j \ell_j^{(d)}(x).$$

Recall (see above) that each Lagrange basis polynomial is

$$\begin{aligned} \ell_j(x) &= \prod_{\begin{smallmatrix}m = 0\\ m\neq j\end{smallmatrix}}^k \frac{x-x_m}{x_j-x_m}. \end{aligned}$$

The first derivative can be found using the product rule:


 * $$\begin{align} \ell_j'(x) &= \sum_{\begin{smallmatrix}i=0 \\ i\not=j\end{smallmatrix}}^k \Biggl[ \frac{1}{x_j-x_i}\prod_{\begin{smallmatrix}m=0 \\ m\not = (i, j)\end{smallmatrix}}^k \frac{x-x_m}{x_j-x_m} \Biggr] \\[5mu] &= \ell_j(x)\sum_{\begin{smallmatrix}i=0 \\i\not=j\end{smallmatrix}}^k \frac{1}{x-x_i}. \end{align}$$

The second derivative is


 * $$\begin{align} \ell_j''(x) &= \sum_{\begin{smallmatrix}i=0 \\ i\ne j\end{smallmatrix}}^{k} \frac{1}{x_j-x_i} \Biggl[ \sum_{\begin{smallmatrix}m=0 \\ m\ne(i,j)\end{smallmatrix}}^{k} \Biggl( \frac{1}{x_j-x_m}\prod_{\begin{smallmatrix}n=0 \\ n\ne(i,j,m)\end{smallmatrix}}^{k} \frac{x-x_n}{x_j-x_n} \Biggr) \Biggr] \\[10mu] &= \ell_j(x) \sum_{\begin{smallmatrix}0 \leq i < m \leq k \\ i, m \ne j\end{smallmatrix}} \frac{2}{(x-x_i)(x - x_m)} \\[10mu] &= \ell_j(x)\Biggl[\Biggl(\sum_{\begin{smallmatrix}i=0 \\i\not=j\end{smallmatrix}}^k \frac{1}{x-x_i}\Biggr)^2-\sum_{\begin{smallmatrix}i=0 \\i\not=j\end{smallmatrix}}^k \frac{1}{(x-x_i)^2}\Biggr]. \end{align}$$

The third derivative is


 * $$\begin{align} \ell_j'''(x) &= \ell_j(x) \sum_{\begin{smallmatrix}0 \leq i < m < n \leq k \\ i, m, n \ne j\end{smallmatrix}} \frac{3!}{(x-x_i)(x - x_m)(x - x_n)} \end{align}$$

and likewise for higher derivatives.

Note that all of these formulas for derivatives are invalid at or near a node. One method of efficiently evaluating all orders of derivatives of a Lagrange polynomial at all points of the domain, including the nodes, is to convert the Lagrange polynomial to power-basis form and then evaluate the derivatives there.
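One possible sketch of that power-basis approach (all names are my own; expanding the products this way is fine for small $k$ but numerically poor for large $k$):

```python
def polymul_linear(poly, root):
    """Multiply a coefficient list (index = power of x) by (x - root)."""
    out = [0.0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i + 1] += c
        out[i] -= root * c
    return out

def lagrange_coeffs(xs, ys):
    """Power-basis coefficients of the Lagrange interpolant,
    found by expanding each basis polynomial in turn."""
    n = len(xs)
    coeffs = [0.0] * n
    for j in range(n):
        poly, denom = [1.0], 1.0
        for m in range(n):
            if m != j:
                poly = polymul_linear(poly, xs[m])
                denom *= xs[j] - xs[m]
        for i, c in enumerate(poly):
            coeffs[i] += ys[j] * c / denom
    return coeffs

def poly_derivative(coeffs):
    """Differentiate once in the power basis; valid everywhere, nodes included."""
    return [i * c for i, c in enumerate(coeffs)][1:]
```

For the running example (nodes $\{1,2,3\}$, values $\{1,4,9\}$) this recovers the coefficients of $x^2$, whose derivative $2x$ can then be evaluated even at the nodes.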

Finite fields
The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
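A sketch of interpolation at $x = 0$ over a prime field, as used to recover the secret (the constant term of the polynomial) in Shamir's scheme; the modulus and names are illustrative:

```python
P = 2**61 - 1  # a Mersenne prime modulus (illustrative choice)

def lagrange_at_zero(points, p=P):
    """Evaluate the interpolant at x = 0 over GF(p).  In Shamir's scheme
    the shares are the pairs (x_j, y_j) and L(0) is the secret."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % p          # factor (0 - x_m)
                den = den * (xj - xm) % p
        # pow(den, -1, p) is the modular inverse (Python 3.8+)
        total = (total + yj * num * pow(den, -1, p)) % p
    return total
```

Division is replaced by multiplication with a modular inverse, which exists because the modulus is prime and all nodes are distinct mod $p$.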