
Table of nascent delta functions
One often imposes symmetry or positivity on the nascent delta functions. Positivity is important because, if a function has integral 1 and is non-negative (i.e., is a probability distribution), then convolving with it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function.
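
To see this numerically, here is a minimal NumPy sketch (the step input, kernel widths, and grid are illustrative choices, not part of the original text) contrasting convolution with the non-negative rectangular kernel against the sign-changing sinc kernel from the table below:

```python
import numpy as np

# Step input: jumps from 0 to 1 at x = 0.
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
step = (x >= 0).astype(float)

eps = 0.5

# Non-negative kernel: the rectangular entry rect(x/eps)/eps (a probability density).
box = np.where(np.abs(x) < eps / 2, 1.0 / eps, 0.0)

# Sign-changing kernel: the sinc entry sin(x/eps)/(pi x); integral 1 but not non-negative.
sinc = np.sinc(x / (np.pi * eps)) / (np.pi * eps)  # np.sinc(t) = sin(pi t)/(pi t)

smooth_box = np.convolve(step, box, mode="same") * dx
smooth_sinc = np.convolve(step, sinc, mode="same") * dx

# The box output is a convex combination of input values, so it stays within [0, 1];
# the sinc output overshoots and undershoots near the jump (Gibbs-like ringing).
print(smooth_box.min(), smooth_box.max())    # ~0.0 ... ~1.0
print(smooth_sinc.min(), smooth_sinc.max())  # < 0 and > 1
```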

Some nascent delta functions are:
 * Limit of a normal distribution: $$\eta_\epsilon(x) = \frac{1}{\epsilon \sqrt{\pi}} \mathrm{e}^{-x^2/\epsilon^2}$$
 * Limit of a Cauchy distribution: $$\eta_\epsilon(x) = \frac{1}{\pi} \frac{\epsilon}{\epsilon^2 + x^2} =\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i} k x-|\epsilon k|}\;dk$$
 * Cauchy φ (see note below): $$\eta_\epsilon(x)=\frac{e^{-|x/\epsilon|}}{2\epsilon} =\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{ikx}}{1+\epsilon^2k^2}\,dk$$
 * Limit of a rectangular function: $$\eta_\epsilon(x)= \frac{\textrm{rect}(x/\epsilon)}{\epsilon} =\begin{cases}\frac{1}{\epsilon},&-\frac{\epsilon}{2}<x<\frac{\epsilon}{2}\\ 0,&\mbox{otherwise}\end{cases} =\frac{1}{2\pi}\int_{-\infty}^\infty \textrm{sinc} \left( \frac{\epsilon k}{2 \pi} \right) e^{ikx}\,dk$$
 * Limit of the sinc function (or Fourier transform of the rectangular function; see note below): $$\eta_\epsilon(x)=\frac{1}{\pi x}\sin\left(\frac{x}{\epsilon}\right) =\frac{1}{2\pi}\int_{-1/\epsilon}^{1/\epsilon} \cos (k x)\;dk$$
 * Derivative of the sigmoid (or Fermi–Dirac) function: $$\eta_\epsilon(x)=\partial_x \frac{1}{1+\mathrm{e}^{-x/\epsilon}} =-\partial_x \frac{1}{1+\mathrm{e}^{x/\epsilon}}$$
 * Limit of the sinc-squared function: $$\eta_\epsilon(x)=\frac{\epsilon}{\pi x^2}\sin^2\left(\frac{x}{\epsilon}\right)$$
 * Limit of the Airy function: $$\eta_\epsilon(x) = \frac{1}{\epsilon}\mathrm{Ai}\left(\frac{x}{\epsilon}\right)$$
 * Limit of a Bessel function: $$\eta_\epsilon(x) = \frac{1}{\epsilon}J_{1/\epsilon} \left(\frac{x+1}{\epsilon}\right)$$
 * Limit of the Wigner semicircle distribution: $$\eta_\epsilon(x)= \begin{cases}\frac{2}{\pi \epsilon^2}\sqrt{\epsilon^2 - x^2},&-\epsilon<x<\epsilon\\ 0,&\mbox{otherwise}\end{cases}$$ (This nascent delta function has the advantage that, for all nonzero $$\epsilon$$, it has compact support and is continuous. It is not smooth, however, and thus not a mollifier.)
 * A mollifier: $$\eta_\epsilon(x) = \frac{\Psi(x/\epsilon)}{\int_{-\infty}^\infty \Psi(x/\epsilon)\,dx}$$ where Ψ is a bump function (smooth, compactly supported); the nascent delta function simply rescales Ψ and normalizes it to have integral 1.
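
As a quick numerical sanity check (a sketch assuming NumPy; the test function $$f(x)=\cos(x)e^{-x^2}$$ and the grid are arbitrary illustrative choices), the first two entries can be seen to have unit integral and to reproduce $$f(0)$$ as $$\epsilon\to 0$$:

```python
import numpy as np

# As eps -> 0, the smeared average  integral of eta_eps(x) f(x) dx  should tend to f(0) = 1.
f = lambda x: np.cos(x) * np.exp(-x**2)
x = np.linspace(-50, 50, 400001)
dx = x[1] - x[0]

def gaussian(x, eps):
    # Limit of a normal distribution (first table entry).
    return np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))

def cauchy(x, eps):
    # Limit of a Cauchy distribution (second table entry).
    return (1.0 / np.pi) * eps / (eps**2 + x**2)

for eta in (gaussian, cauchy):
    for eps in (1.0, 0.1, 0.01):
        total = np.sum(eta(x, eps)) * dx            # ~1 (up to truncated tails)
        smeared = np.sum(eta(x, eps) * f(x)) * dx   # -> f(0) = 1 as eps -> 0
        print(f"{eta.__name__:8s} eps={eps:5.2f} integral={total:.4f} smeared={smeared:.4f}")
```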

Note: If $$\eta(\epsilon,x)$$ is a nascent delta function which is a probability distribution over the whole real line (i.e., is non-negative everywhere and integrates to 1), then another nascent delta function $$\eta_\varphi(\epsilon,x)$$ can be built from its characteristic function as follows:


 * $$\eta_\varphi(\epsilon,x)=\frac{1}{2\pi}~\frac{\varphi(1/\epsilon,x)}{\eta(1/\epsilon,0)}$$

where


 * $$\varphi(\epsilon,k)=\int_{-\infty}^\infty \eta(\epsilon,x)e^{-ikx}\,dx$$

is the characteristic function of the nascent delta function $$\eta(\epsilon,x)$$. This result is related to the localization property of the continuous Fourier transform.
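
As a concrete check of this construction, take $$\eta(\epsilon,x)=\frac{1}{\pi}\frac{\epsilon}{\epsilon^2+x^2}$$, the Cauchy kernel from the table. Its characteristic function is $$\varphi(\epsilon,k)=e^{-\epsilon|k|}$$, and $$\eta(1/\epsilon,0)=\epsilon/\pi$$, so

 * $$\eta_\varphi(\epsilon,x)=\frac{1}{2\pi}~\frac{e^{-|x|/\epsilon}}{\epsilon/\pi}=\frac{e^{-|x/\epsilon|}}{2\epsilon},$$

which is exactly the "Cauchy φ" entry in the table above.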

There are also series and integral representations of the Dirac delta function in terms of special functions, such as integrals of products of Airy functions, of Bessel functions, of Coulomb wave functions and of parabolic cylinder functions, and also series of products of orthogonal polynomials.

Extensions for L = 1
As seen in the previous example, the ratio test may be inconclusive when the limit of the ratio is 1. Extensions to the ratio test, however, sometimes allow one to deal with this case.

In all the tests below we assume that $$\sum a_n$$ is a sum with positive $$a_n$$. These tests may also be applied to any series with a finite number of negative terms. Any such series may be written as:


 * $$\sum_{n=1}^\infty a_n = \sum_{n=1}^N a_n+\sum_{n=N+1}^\infty a_n$$

where $$a_N$$ is the highest-indexed negative term. The first expression on the right is a partial sum, which is finite, and so the convergence of the entire series will be determined by the convergence properties of the second expression on the right, which may be re-indexed to form a series of all positive terms beginning at $$n=1$$.

Each test defines a test parameter ($$\rho_n$$) which specifies the behavior of that parameter needed to establish convergence or divergence. For each test, a weaker form exists which instead places restrictions upon $$\lim_{n\to\infty}\rho_n$$.

All of the tests have regions in which they fail to describe the convergence properties of $$\sum a_n$$. In fact, no convergence test can fully describe the convergence properties of the series. This is because if $$\sum a_n$$ is convergent, a second convergent series $$\sum b_n$$ can be found which converges more slowly: i.e., it has the property that $$\lim_{n\to\infty} b_n/a_n = \infty$$. Furthermore, if $$\sum a_n$$ is divergent, a second divergent series $$\sum b_n$$ can be found which diverges more slowly: i.e., it has the property that $$\lim_{n\to\infty} b_n/a_n = 0$$. Convergence tests essentially use the comparison test on some particular family of $$a_n$$, and fail for sequences which converge or diverge more slowly.

The De Morgan Hierarchy
Augustus De Morgan proposed a hierarchy of ratio-type tests.

The ratio test parameters ($$\rho_n$$) below all generally involve terms of the form $$D_n a_n/a_{n+1}-D_{n+1}$$. This term may be multiplied by $$a_{n+1}/a_n$$ to yield $$D_n-D_{n+1}a_{n+1}/a_n$$. This term can replace the former term in the definition of the test parameters and the conclusions drawn will remain the same. Accordingly, there will be no distinction drawn between references which use one or the other form of the test parameter.

1. d’Alembert’s Ratio test
The first test in the De Morgan hierarchy is the ratio test as described above.

2. Raabe's test
This extension is due to Joseph Ludwig Raabe. Define:


 * $$\rho_n \equiv n\left(\frac{a_n}{a_{n+1}}-1\right)$$

The series will:


 * Converge when there exists a c > 1 such that $$\rho_n \ge c$$ for all n > N.
 * Diverge when $$\rho_n \le 1$$ for all n > N.
 * Otherwise, the test is inconclusive.

Defining $$\rho=\lim_{n\to\infty}\rho_n$$, the limit version states that the series will:


 * Converge if &rho; > 1 (this includes the case &rho; = ∞)
 * Diverge if &rho; < 1.
 * If &rho;=1, the test is inconclusive.

When the above limit does not exist, it may be possible to use limits superior and inferior. The series will:


 * Converge if $$\liminf_{n \to \infty} \rho_n > 1$$
 * Diverge if $$\limsup_{n \rightarrow \infty} \rho_n < 1$$
 * Otherwise, the test is inconclusive.
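
As a numerical illustration of Raabe's test (a sketch, not from the original text; exact rational arithmetic via Python's fractions module avoids round-off in the ratios):

```python
from fractions import Fraction

def raabe_rho(a, n):
    # Test parameter rho_n = n * (a_n / a_{n+1} - 1).
    return n * (a(n) / a(n + 1) - 1)

# a_n = 1/n^2: rho_n = 2 + 1/n -> 2 > 1, so the series converges.
sq = lambda n: Fraction(1, n * n)

# a_n = 1/n: rho_n = 1 for every n, so the limit version is inconclusive
# (the non-limit form, rho_n <= 1 for all n, correctly gives divergence).
harm = lambda n: Fraction(1, n)

for n in (10, 100, 1000):
    print(n, float(raabe_rho(sq, n)), float(raabe_rho(harm, n)))
```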

Proof of Raabe's test
Defining $$\rho_n \equiv n\left(\frac{a_n}{a_{n+1}}-1\right)$$, we need not assume the limit exists; if $$\limsup\rho_n<1$$, then $$\sum a_n$$ diverges, while if $$\liminf \rho_n>1$$ the sum converges.

The proof proceeds essentially by comparison with $$\sum1/n^R$$. Suppose first that $$\limsup\rho_n<1$$. Of course if $$\limsup\rho_n<0$$ then $$a_{n+1}\ge a_n$$ for large $$n$$, so the sum diverges; assume then that $$0\le\limsup\rho_n<1$$. There exists $$R<1$$ such that $$\rho_n\le R$$ for all $$n\ge N$$, which is to say that $$a_{n}/a_{n+1}\le \left(1+\frac Rn\right)\le e^{R/n}$$. Thus $$a_{n+1}\ge a_ne^{-R/n}$$, which implies that $$a_{n+1}\ge a_Ne^{-R(1/N+\dots+1/n)}\ge ca_Ne^{-R\log(n)}=ca_N/n^R$$ for $$n\ge N$$; since $$R<1$$ this shows that $$\sum a_n$$ diverges.

The proof of the other half is entirely analogous, with most of the inequalities simply reversed. We need a preliminary inequality to use in place of the simple $$1+t<e^t$$ that was used above: for $$t\ge0$$ one has $$\log(1+t)\ge t-t^2/2$$, hence $$1+t\ge e^{t-t^2/2}$$, and since $$\sum 1/n^2$$ converges this yields $$\prod_{k=N}^{n}\left(1+\frac{R}{k}\right)\ge c\,e^{R(1/N+\cdots+1/n)}$$ for some constant $$c>0$$. Arguing as in the first paragraph, using this inequality, we see that there exists $$R>1$$ such that $$a_{n+1}\le ca_Nn^{-R}$$ for $$n\ge N$$; since $$R>1$$ this shows that $$\sum a_n$$ converges.

3. Bertrand’s test
This extension is due to Joseph Bertrand and Augustus De Morgan.

Defining:


 * $$\rho_n \equiv n \ln n\left(\frac{a_n}{a_{n+1}}-1\right)-\ln n$$

Bertrand's test asserts that the series will:


 * Converge when there exists a c > 1 such that $$\rho_n \ge c$$ for all n > N.
 * Diverge when $$\rho_n \le 1$$ for all n > N.
 * Otherwise, the test is inconclusive.

Defining $$\rho=\lim_{n\to\infty}\rho_n$$, the limit version states that the series will:


 * Converge if &rho; > 1 (this includes the case &rho; = ∞)
 * Diverge if &rho; < 1.
 * If &rho;=1, the test is inconclusive.

When the above limit does not exist, it may be possible to use limits superior and inferior. The series will:


 * Converge if $$\liminf \rho_n > 1$$
 * Diverge if $$\limsup \rho_n < 1$$
 * Otherwise, the test is inconclusive.
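
A numerical sketch (illustrative, not from the original text; $$a_n=1/(n\ln^2 n)$$ is a standard example) of a case that Raabe's limit version cannot decide but Bertrand's test can:

```python
import math

def bertrand_rho(a, n):
    # Test parameter rho_n = n ln(n) * (a_n/a_{n+1} - 1) - ln(n).
    return n * math.log(n) * (a(n) / a(n + 1) - 1) - math.log(n)

# a_n = 1/(n ln^2 n): Raabe's parameter tends to 1 (inconclusive),
# but Bertrand's parameter tends to 2 > 1, so the series converges.
a = lambda n: 1.0 / (n * math.log(n) ** 2)

for n in (10, 1000, 10**6):
    print(n, bertrand_rho(a, n))
```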

4. Gauss’s test
This extension is due to Carl Friedrich Gauss.

Assuming $$a_n > 0$$ and $$r > 1$$, if a bounded sequence $$B_n$$ can be found such that for all $$n$$:


 * $$\frac{a_n}{a_{n+1}}=1+\frac{\rho}{n}+\frac{B_n}{n^r}$$

then the series will:


 * Converge if $$\rho>1$$
 * Diverge if $$\rho \le 1$$
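
For example (a standard worked case, added here for illustration), with $$a_n=\left(\frac{(2n-1)!!}{(2n)!!}\right)^p$$ one has $$\frac{a_n}{a_{n+1}}=\left(\frac{2n+2}{2n+1}\right)^p=1+\frac{p/2}{n}+\frac{B_n}{n^2}$$ with $$B_n$$ bounded, so the series converges exactly when $$p>2$$. In particular, Gauss's test settles the borderline case $$\rho=1$$ (divergence at $$p=2$$), where the limit forms of the ratio and Raabe tests are inconclusive.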

5. Kummer’s test
This extension is due to Ernst Kummer.

Let $$\zeta_n$$ be an auxiliary sequence of positive constants. Define:


 * $$\rho_n \equiv \left(\zeta_n \frac{a_n}{a_{n+1}} - \zeta_{n+1}\right)$$

Kummer's test states that the series will:


 * Converge if there exists a c > 0 such that $$\rho_n \ge c$$ for all n > N.
 * Diverge if $$\rho_n \le 0$$ for all n > N and $$\sum_{n=1}^\infty 1/\zeta_n$$ diverges.
 * Otherwise, the test is inconclusive.

Defining $$\rho=\lim_{n\to\infty}\rho_n$$, the limit version states that the series will:


 * Converge if &rho; > 0
 * Diverge if &rho; < 0 and $$\sum_{n=1}^\infty 1/\zeta_n$$ diverges.
 * If &rho;=0, the test is inconclusive.

When the above limit does not exist, it may be possible to use limits superior and inferior. The series will:


 * Converge if $$\liminf_{n \to \infty} \rho_n >0$$
 * Diverge if $$\limsup_{n \to \infty} \rho_n <0$$ and $$\sum 1/\zeta_n$$ diverges.
 * Otherwise, the test is inconclusive.

Special Cases
All of the tests in De Morgan's hierarchy except Gauss's test can easily be seen as special cases of Kummer's test:


 * For the ratio test, let $$\zeta_n=1$$. Then:
 * $$\rho_{Kummer} = \left(\frac{a_n}{a_{n+1}}-1\right) = 1/\rho_{Ratio}-1$$


 * For Raabe's test, let $$\zeta_n=n$$. Then:
 * $$\rho_{Kummer} = \left(n\frac{a_n}{a_{n+1}}-(n+1)\right) = \rho_{Raabe}-1$$


 * For Bertrand's test, let $$\zeta_n=n\ln(n)$$. Then:
 * $$\rho_{Kummer} = n \ln(n)\left(\frac{a_n}{a_{n+1}}-1\right)-(n+1)\ln(n+1)$$
 * Using $$\ln(n+1)=\ln(n)+\ln(1+1/n)$$ and the large-$$n$$ approximation $$\ln(1+1/n)\approx 1/n$$, so that $$(n+1)\ln(n+1)\approx n\ln(n)+\ln(n)+1$$ with the remaining terms vanishing as $$n\rightarrow\infty$$, $$\rho_{Kummer}$$ may be written:
 * $$\rho_{Kummer} = n \ln(n)\left(\frac{a_n}{a_{n+1}}-1\right)-\ln(n)-1 = \rho_{Bertrand}-1$$

Note that for these three tests, the higher they are in the De Morgan hierarchy, the more slowly the $$\sum 1/\zeta_n$$ series diverges.
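
These reductions can be checked numerically; the following sketch (illustrative only, using the convergent series $$\sum 1/n^2$$) evaluates Kummer's parameter for the three choices of $$\zeta_n$$:

```python
import math

def kummer_rho(a, zeta, n):
    # Test parameter rho_n = zeta_n * a_n / a_{n+1} - zeta_{n+1}.
    return zeta(n) * a(n) / a(n + 1) - zeta(n + 1)

a = lambda n: 1.0 / n**2   # test series: sum of 1/n^2 (convergent)

cases = {
    "ratio    (zeta_n = 1)":      lambda n: 1.0,
    "Raabe    (zeta_n = n)":      lambda n: float(n),
    "Bertrand (zeta_n = n ln n)": lambda n: n * math.log(n),
}
for name, zeta in cases.items():
    print(name, [round(kummer_rho(a, zeta, n), 3) for n in (10, 100, 1000)])

# zeta_n = 1 gives rho_n -> 0: the ratio test cannot decide 1/n^2.
# zeta_n = n gives rho_n -> 1 > 0, and zeta_n = n ln n gives rho_n -> infinity:
# the tests higher in the hierarchy detect the convergence.
```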

Proof of Kummer's test
If $$\liminf_{n\to\infty}\rho_n>0$$, then fix a positive number $$\delta$$ with $$0<\delta<\liminf_{n\to\infty}\rho_n$$. There exists a natural number $$N$$ such that for every $$n>N,$$
 * $$\delta\leq\zeta_{n}\frac{a_{n}}{a_{n+1}}-\zeta_{n+1}.$$

Since $$a_{n+1}>0$$, for every $$n> N,$$
 * $$0\leq \delta a_{n+1}\leq \zeta_{n}a_{n}-\zeta_{n+1}a_{n+1}.$$

In particular, $$\zeta_{n+1}a_{n+1}\leq \zeta_{n}a_{n}$$ for all $$n\geq N$$, which means that, starting from the index $$N$$, the sequence $$\zeta_{n}a_{n}$$ is positive and monotonically decreasing; in particular, it is bounded below by 0. Therefore the limit
 * $$\lim_{n\to\infty}\zeta_{n}a_{n}=L$$ exists.

This implies that the telescoping series
 * $$\sum_{n=1}^{\infty}\left(\zeta_{n}a_{n}-\zeta_{n+1}a_{n+1}\right)$$ is convergent (its partial sums equal $$\zeta_{1}a_{1}-\zeta_{n+1}a_{n+1}\rightarrow\zeta_{1}a_{1}-L$$),

and since for all $$n>N,$$
 * $$\delta a_{n+1}\leq \zeta_{n}a_{n}-\zeta_{n+1}a_{n+1}$$

by the direct comparison test for positive series, the series $$\sum_{n=1}^{\infty}\delta a_{n+1}$$ is convergent, and hence $$\sum_{n=1}^{\infty} a_n$$ converges as well.

On the other hand, if $$\rho<0$$, then there is an $$N$$ such that $$\zeta_n a_n$$ is increasing for $$n>N$$. In particular, there exists an $$\epsilon>0$$ for which $$\zeta_n a_n>\epsilon$$ for all $$n>N$$, and so $$\sum_n a_n=\sum_n \frac{a_n\zeta_n}{\zeta_n}$$ diverges by comparison with the divergent series $$\sum_n \frac{\epsilon}{\zeta_n}$$ (recall that $$\sum 1/\zeta_n$$ is assumed to diverge in this case).

The Second Ratio Test
A more refined ratio test is the second ratio test: For $$a_n>0$$ define:


 * $$L_0\equiv\lim_{n\rightarrow\infty}\frac{a_{2n}}{a_n}\qquad L_1\equiv\lim_{n\rightarrow\infty}\frac{a_{2n+1}}{a_n}\qquad L\equiv\max(L_0,L_1)$$

By the second ratio test, the series will:


 * Converge if $$L<\frac{1}{2}$$
 * Diverge if $$L>\frac{1}{2}$$
 * If $$L=\frac{1}{2}$$ then the test is inconclusive.

If the above limits do not exist, it may be possible to use the limits superior and inferior. Define:


 * $$L_0\equiv\limsup_{n\rightarrow\infty}\frac{a_{2n}}{a_n}\qquad L_1\equiv\limsup_{n\rightarrow\infty}\frac{a_{2n+1}}{a_n}\qquad L\equiv\max(L_0,L_1)$$
 * $$\ell_0\equiv\liminf_{n\rightarrow\infty}\frac{a_{2n}}{a_n}\qquad \ell_1\equiv\liminf_{n\rightarrow\infty}\frac{a_{2n+1}}{a_n}\qquad \ell\equiv\min(\ell_0,\ell_1)$$

Then the series will:


 * Converge if $$L<\frac{1}{2}$$
 * Diverge if $$\ell>\frac{1}{2}$$
 * If $$\ell \le \frac{1}{2} \le L$$ then the test is inconclusive.
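
For example, for the p-series $$a_n=1/n^p$$ the ordinary ratio test gives $$\lim_{n\rightarrow\infty}a_{n+1}/a_n=1$$ for every $$p$$, whereas $$\frac{a_{2n}}{a_n}=2^{-p}$$ and $$\frac{a_{2n+1}}{a_n}=\left(\frac{n}{2n+1}\right)^p\rightarrow 2^{-p}$$, so $$L=\ell=2^{-p}$$; the second ratio test therefore gives convergence precisely when $$2^{-p}<\frac{1}{2}$$, i.e., $$p>1$$.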

The second ratio test can be generalized to an m-th ratio test, but higher orders are not found to be as useful.