Radius of convergence

In mathematics, the radius of convergence of a power series is the radius of the largest disk, centered at the center of the series, in which the series converges. It is either a non-negative real number or $$\infty$$. When it is positive, the power series converges absolutely and uniformly on compact sets inside the open disk of radius equal to the radius of convergence, and it is the Taylor series of the analytic function to which it converges. When a function has several singularities (values of the argument at which the function fails to be defined or analytic), the radius of convergence is the minimum of the distances from the center of the disk of convergence to those singularities.

Definition
For a power series f defined as:


 * $$f(z) = \sum_{n=0}^\infty c_n (z-a)^n, $$

where


 * a is a complex constant, the center of the disk of convergence,
 * $$c_n$$ is the n-th complex coefficient, and
 * z is a complex variable.

The radius of convergence r is a nonnegative real number or $$\infty$$ such that the series converges if


 * $$|z-a| < r$$

and diverges if


 * $$|z-a| > r.$$

An equivalent definition, from which the existence of r is immediate, is:


 * $$r=\sup \left\{ |z-a|\ \left|\ \sum_{n=0}^\infty c_n(z-a)^n\ \text{ converges } \right.\right\} $$

On the boundary, that is, where |z − a| = r, the behavior of the power series may be complicated, and the series may converge for some values of z and diverge for others. The radius of convergence is infinite if the series converges for all complex numbers z.

Finding the radius of convergence
Two cases arise:


 * The first case is theoretical: when all the coefficients $$c_n$$ are known, certain limits yield the precise radius of convergence.
 * The second case is practical: a power series solution of a difficult problem typically supplies only a finite number of terms, anywhere from a couple to a hundred. In this case, extrapolating a plot estimates the radius of convergence.

Theoretical radius
The radius of convergence can be found by applying the root test to the terms of the series. The root test uses the number


 * $$C = \limsup_{n\to\infty}\sqrt[n]{|c_n(z-a)^n|} = \limsup_{n\to\infty} \left(\sqrt[n]{|c_n|}\right) |z-a|$$

where "lim sup" denotes the limit superior. The root test states that the series converges if C < 1 and diverges if C > 1. It follows that the power series converges if the distance from z to the center a is less than


 * $$r = \frac{1}{\limsup_{n\to\infty}\sqrt[n]{|c_n|}}$$

and diverges if the distance exceeds that number; this statement is the Cauchy–Hadamard theorem. Note that r = 1/0 is interpreted as an infinite radius, meaning that f is an entire function.
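As a numerical sketch (an illustration, not part of the original text), the Cauchy–Hadamard formula can be approximated from finitely many coefficients. The series $$\sum_k z^{2k}$$ is chosen here because its coefficients $$c_n$$ vanish for odd $$n$$, so consecutive-coefficient ratios are undefined, but the root test still applies:

```python
# Illustrative sketch: estimate r = 1 / limsup |c_n|^(1/n) from finitely many
# coefficients.  For sum_k z^(2k) we have c_n = 1 for even n, 0 for odd n;
# the ratio c_{n+1}/c_n is undefined here, but the root test works.

def cauchy_hadamard_estimate(coeffs):
    """Approximate 1 / limsup |c_n|^(1/n) by the maximum of |c_n|^(1/n)
    over the second half of the known coefficients (a crude stand-in
    for the limit superior)."""
    tail = len(coeffs) // 2
    roots = [abs(c) ** (1.0 / n) for n, c in enumerate(coeffs)
             if n >= tail and c != 0]
    return 1.0 / max(roots)

coeffs = [1 if n % 2 == 0 else 0 for n in range(200)]  # c_n for sum z^(2k)
print(cauchy_hadamard_estimate(coeffs))  # 1.0, the true radius
```

With only finitely many coefficients this is an estimate, not the true limit superior; for coefficients that settle into regular behavior it converges quickly.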

The limit involved in the ratio test is usually easier to compute, and when it exists, it gives the radius of convergence:


 * $$r = \lim_{n\to\infty} \left| \frac{c_{n}}{c_{n+1}} \right|.$$

This is shown as follows. The ratio test says the series converges if


 * $$ \lim_{n\to\infty} \frac{|c_{n+1}(z-a)^{n+1}|}{|c_n(z-a)^n|} < 1. $$

That is equivalent to


 * $$ |z - a| < \frac{1}{\lim_{n\to\infty} \frac{|c_{n+1}|}{|c_n|}} = \lim_{n\to\infty} \left|\frac{c_n}{c_{n+1}}\right|. $$
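As an illustration (a sketch not drawn from the article), the ratios $$|c_n/c_{n+1}|$$ can be tabulated for two standard series; exact arithmetic with `fractions.Fraction` avoids rounding:

```python
from fractions import Fraction
from math import factorial

def ratio_estimates(coeffs):
    """Successive values |c_n / c_{n+1}|; their limit (if it exists) is r."""
    return [abs(coeffs[n] / coeffs[n + 1]) for n in range(len(coeffs) - 1)]

# Geometric-type series: c_n = 3^n, so |c_n/c_{n+1}| = 1/3 for every n,
# giving r = 1/3.
geom = ratio_estimates([Fraction(3) ** n for n in range(20)])
print(float(geom[-1]))  # 0.3333333333333333

# Exponential series: c_n = 1/n!, so |c_n/c_{n+1}| = n + 1 grows without
# bound, giving r = infinity (an entire function).
expo = ratio_estimates([Fraction(1, factorial(n)) for n in range(20)])
print(float(expo[-1]))  # 19.0 -- increasing without bound
```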

Practical estimation of radius in the case of real coefficients
Usually, in scientific applications, only a finite number of coefficients $$c_n$$ are known. Typically, as $$n$$ increases, these coefficients settle into a regular behavior determined by the nearest radius-limiting singularity. In this case, two main techniques have been developed, based on the fact that the coefficients of a Taylor series are roughly exponential with ratio $$1/r$$ where r is the radius of convergence.


 * The basic case is when the coefficients ultimately share a common sign or alternate in sign. As pointed out earlier in the article, in many cases the limit $\lim_{n\to \infty} {c_n / c_{n-1}}$ exists, and in this case $1/r = \lim_{n \to \infty} {c_n / c_{n-1}}$. A negative value of this limit means the convergence-limiting singularity is on the negative axis. Estimate the limit by plotting $$c_n/c_{n-1}$$ versus $$1/n$$ and graphically extrapolating to $$1/n=0$$ (effectively $$n=\infty$$) via a linear fit.  The intercept with $$1/n=0$$ estimates the reciprocal of the radius of convergence, $$1/r$$.  This plot is called a Domb–Sykes plot.
 * The more complicated case is when the signs of the coefficients have a more complex pattern. Mercer and Roberts proposed the following procedure. Define the associated sequence $$b_n^2=\frac{c_{n+1}c_{n-1} - c_n^2}{c_n c_{n-2} - c_{n-1}^2} \quad n=3,4,5,\ldots.$$ Plot the finitely many known $$b_n$$ versus $$1/n$$, and graphically extrapolate to $$1/n=0$$ via a linear fit.  The intercept with $$1/n=0$$ estimates the reciprocal of the radius of convergence, $$1/r$$. This procedure also estimates two other characteristics of the convergence-limiting singularity.  Suppose the nearest singularity is of degree $$p$$ and has angle $$\pm\theta$$ to the real axis.  Then the slope of the linear fit given above is $$-(p+1)/r$$.  Further, plotting $\frac{1}{2} \left(\frac{c_{n-1}b_n}{c_n} + \frac{c_{n+1}}{c_n b_n}\right)$ versus $1/n^2$, a linear fit extrapolated to $1/n^2=0$ has intercept $$\cos\theta$$.
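The Domb–Sykes extrapolation can be sketched numerically. The example below (an illustration, not from the original text) uses the Taylor coefficients of $(1-x)^{-1/2}$, whose only singularity, at $x = 1$, has $r = 1$ and exponent $p = -1/2$, so the ratios lie exactly on the Domb–Sykes line; `linear_fit` is a small helper defined here, not a library routine:

```python
from math import comb

def linear_fit(xs, ys):
    """Ordinary least-squares line y = a + b*x; returns (intercept a, slope b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Taylor coefficients of (1 - x)**(-1/2): c_n = C(2n, n) / 4**n.  The ratios
# satisfy c_n/c_{n-1} = 1 - (1/2)(1/n) exactly, a perfect Domb-Sykes line.
c = [comb(2 * n, n) / 4 ** n for n in range(31)]
ratios = [c[n] / c[n - 1] for n in range(1, 31)]
intercept, slope = linear_fit([1 / n for n in range(1, 31)], ratios)
print(intercept, slope)  # ~1.0 (= 1/r) and ~-0.5 (= -(p+1)/r, with p = -1/2)
```

In realistic applications the ratios only approach a straight line as $n$ grows, so one fits the largest available $n$ and reads off the intercept.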

Radius of convergence in complex analysis
A power series with a positive radius of convergence can be made into a holomorphic function by taking its argument to be a complex variable. The radius of convergence can be characterized by the following theorem:


 * The radius of convergence of a power series f centered on a point a is equal to the distance from a to the nearest point where f cannot be defined in a way that makes it holomorphic.

The set of all points whose distance to a is strictly less than the radius of convergence is called the disk of convergence.



The nearest point means the nearest point in the complex plane, not necessarily on the real line, even if the center and all coefficients are real. For example, the function


 * $$f(z)=\frac 1 {1+z^2}$$

has no singularities on the real line, since $$1+z^2$$ has no real roots. Its Taylor series about 0 is given by


 * $$\sum_{n=0}^\infty (-1)^n z^{2n}.$$

The root test shows that its radius of convergence is 1. In accordance with this, the function f(z) has singularities at ±i, which are at a distance 1 from 0.

For a proof of this theorem, see analyticity of holomorphic functions.

A simple example
The arctangent function of trigonometry can be expanded in a power series:


 * $$\arctan(z)=z-\frac{z^3} 3 + \frac{z^5} 5 -\frac{z^7} 7 +\cdots .$$

It is easy to apply the root test in this case to find that the radius of convergence is 1.
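The computation behind this claim can be written out (a short derivation, not part of the original text): the nonzero coefficients are $c_{2k+1} = (-1)^k/(2k+1)$, so

```latex
\limsup_{n\to\infty} \sqrt[n]{|c_n|}
  = \lim_{k\to\infty} \left(\frac{1}{2k+1}\right)^{\frac{1}{2k+1}}
  = \lim_{k\to\infty} e^{-\frac{\ln(2k+1)}{2k+1}}
  = e^{0} = 1,
```

and hence $r = 1/1 = 1$ by the Cauchy–Hadamard theorem.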

A more complicated example
Consider this power series:


 * $$\frac z {e^z-1}=\sum_{n=0}^\infty \frac{B_n}{n!} z^n $$

where the rational numbers Bn are the Bernoulli numbers. It may be cumbersome to try to apply the ratio test to find the radius of convergence of this series. But the theorem of complex analysis stated above quickly solves the problem. At z = 0, there is in effect no singularity since the singularity is removable. The only non-removable singularities are therefore located at the other points where the denominator is zero. We solve


 * $$e^z - 1 = 0$$

by recalling Euler's formula $e^{iy} = \cos(y) + i\sin(y)$: if $z = x + iy$, then


 * $$e^z = e^x e^{iy} = e^x(\cos(y)+i\sin(y)),$$

and then take x and y to be real. Since y is real, the absolute value of $e^{iy} = \cos(y) + i\sin(y)$ is necessarily 1, so the absolute value of $e^z$ is $e^x$. The equation $e^z = 1$ therefore forces $e^x = 1$; since x is real, that happens only if x = 0. Therefore z is purely imaginary and $\cos(y) + i\sin(y) = 1$. Since y is real, that happens only if cos(y) = 1 and sin(y) = 0, so that y is an integer multiple of 2$\pi$. Consequently the singular points of this function occur at


 * z = a nonzero integer multiple of 2πi.

The singularities nearest 0, which is the center of the power series expansion, are at ±2πi. The distance from the center to either of those points is 2π, so the radius of convergence is 2π.
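As a numerical cross-check (a sketch, not part of the article), the Bernoulli numbers can be generated exactly from the standard recurrence $\sum_{k=0}^{n} \binom{n+1}{k} B_k = 0$ (the convention with $B_1 = -1/2$, matching this series), and the root-test quantity $|B_n/n!|^{1/n}$ compared with $1/(2\pi) \approx 0.159$:

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli_numbers(N):
    """B_0..B_N via the recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0,
    which gives the convention B_1 = -1/2 used in z/(e^z - 1)."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

# Root-test estimate of 1/r for sum B_n z^n / n!: |c_n|^(1/n) should approach
# 1/(2*pi), consistent with the nearest singularities at +-2*pi*i.
B = bernoulli_numbers(60)
estimate = float(abs(B[60]) / factorial(60)) ** (1 / 60)
print(estimate, 1 / (2 * pi))  # ~0.161 vs ~0.159
```

The agreement is only approximate at $n = 60$; the estimate tends to $1/(2\pi)$ as $n$ grows.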

Convergence on the boundary
If the power series is expanded around the point a and the radius of convergence is $r$, then the set of all points $z$ such that $|z - a| = r$ is a circle called the boundary of the disk of convergence. A power series may diverge at every point on the boundary, or diverge at some points and converge at others, or converge at all points of the boundary. Furthermore, even if the series converges everywhere on the boundary (even uniformly), it does not necessarily converge absolutely.

Example 1: The power series for the function $f(z) = 1/(1 - z)$, expanded around $z = 0$, which is simply
 * $$ \sum_{n=0}^\infty z^n,$$

has radius of convergence 1 and diverges at every point on the boundary.

Example 2: The power series for $g(z) = -\ln(1 - z)$, expanded around $z = 0$, which is
 * $$ \sum_{n=1}^\infty \frac{1}{n} z^n,$$

has radius of convergence 1, and diverges for $z = 1$ but converges for all other points on the boundary. The function $f(z)$ of Example 1 is the derivative of $g(z)$.
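A quick numerical illustration (not in the original text) of this boundary behavior: at $z = -1$ the series of Example 2 becomes the alternating harmonic series, which converges to $-\ln(1-z)$ evaluated at $z = -1$, i.e. $-\ln 2$, while at $z = 1$ the same series is the divergent harmonic series:

```python
from math import log

# Partial sum of sum_{n>=1} z^n / n at the boundary point z = -1.
# The alternating-series bound says the error after N terms is at most
# 1/(N+1), so 100000 terms give about 5 correct decimal places.
partial = sum((-1) ** n / n for n in range(1, 100001))
print(partial, -log(2))  # agree to roughly 4-5 decimal places
```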

Example 3: The power series
 * $$ \sum_{n=1}^\infty \frac 1 {n^2} z^n $$

has radius of convergence 1 and converges absolutely everywhere on the boundary. If $h$ is the function represented by this series on the unit disk, then the derivative of $h(z)$ is equal to $g(z)/z$ with $g$ of Example 2. It turns out that $h(z)$ is the dilogarithm function.

Example 4: The power series
 * $$\sum_{i=1}^\infty a_i z^i \text{ where } a_i = \frac{(-1)^{n-1}}{2^nn}\text{ for } n = \lfloor\log_2(i)\rfloor+1\text{, the unique integer with }2^{n-1}\le i < 2^n,$$

has radius of convergence 1 and converges uniformly on the entire boundary $|z| = 1$, but does not converge absolutely on the boundary.

Rate of convergence
If we expand the function


 * $$\sin x = \sum^{\infty}_{n=0} \frac{(-1)^n}{(2n+1)!} x^{2n+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\text{ for all } x$$

around the point x = 0, we find that the radius of convergence of this series is $$\infty$$, meaning that the series converges for all complex numbers. However, in applications, one is often interested in the precision of a numerical answer. Both the number of terms and the value at which the series is evaluated affect the accuracy of the answer. For example, to calculate $\sin(0.1)$ accurately to five decimal places, only the first two terms of the series are needed. For the same precision at $x = 1$, the first five terms must be evaluated and summed. For $\sin(10)$, one requires the first 18 terms of the series, and for $\sin(100)$ one needs the first 141 terms.
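These term counts can be reproduced with a short script (an illustrative sketch; the exact counts depend on the tolerance convention, so one or two of them may differ by a term from the figures quoted above). The alternating-series bound justifies stopping once the magnitude of the next term drops below the tolerance:

```python
def terms_needed(x, tol=0.5e-5):
    """Number of leading terms of the sin series whose next (alternating)
    term has magnitude below tol; by the alternating-series bound the
    truncation error is then below tol.  Counts depend on the chosen tol."""
    term, k = abs(x), 0  # term = |x|^(2k+1) / (2k+1)!
    while term >= tol:
        k += 1
        term *= x * x / ((2 * k) * (2 * k + 1))  # advance to next odd power
    return k

for x in (0.1, 1, 10, 100):
    print(x, terms_needed(x))
```

The counts grow rapidly with $|x|$, reflecting the slower convergence far from the center even though the radius of convergence is infinite.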

So for these particular values the fastest convergence of a power series expansion is at the center; as one moves away from the center, the rate of convergence slows until the boundary (if it exists) is reached and crossed, beyond which the series diverges.

Abscissa of convergence of a Dirichlet series
An analogous concept is the abscissa of convergence of a Dirichlet series


 * $$\sum_{n=1}^\infty \frac{a_n}{n^s}.$$

Such a series converges if the real part of s is greater than a particular number depending on the coefficients $a_n$: the abscissa of convergence.