Asymptotic analysis



In mathematical analysis, asymptotic analysis, also known as asymptotics, is the development and application of methods that generate an approximate analytical solution to a mathematical problem when a variable or parameter assumes a value that is large, small or near a specified value.

An example of an asymptotic approximation is the function $\widetilde{y}(x)=x$ that accurately approximates the function $y(x)=x+e^{-x}$  for large positive $x$  values (Figure 1). For any desired accuracy, there is a corresponding range of $x$ values where this accuracy occurs. In this case, a chosen accuracy with a relative error of less than 1% occurs when the $x$ values are greater than 3.4.
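The 1% threshold can be checked numerically; a minimal sketch in Python (function names are illustrative), using the fact that the relative error equals $e^{-x}/(x+e^{-x})$:

```python
import math

def y(x):
    """Exact function y(x) = x + e^(-x)."""
    return x + math.exp(-x)

def y_approx(x):
    """Asymptotic approximation y~(x) = x, accurate for large positive x."""
    return x

def rel_error(x):
    """Relative error |y - y~| / |y| = e^(-x) / (x + e^(-x))."""
    return abs(y(x) - y_approx(x)) / abs(y(x))

# The relative error drops below 1% once x exceeds roughly 3.4.
print(rel_error(3.4))  # just below 0.01
print(rel_error(3.0))  # above 0.01
```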



History
Henri Poincaré and Thomas Joannes Stieltjes independently developed the foundations of asymptotic analysis in 1886 (Figures 2-3). Poincaré's focus was the "formal, analytic properties of those series" while Stieltjes's focus was to find "practical approximations for various functions and integrals".{{sfn|Boven|Wesselink|Wepster|2012}} Poincaré later applied this approach in his work on celestial mechanics, developing techniques of continuing importance. Beginning in the early 20th century, asymptotic analysis became especially important in singular perturbation theory and the nonlinear equations of fluid mechanics. Subsequent developments have led to applications in many areas, including computer science and the analysis of algorithms, differential equations, integrals, functions, series, partial sums, and difference equations.

Asymptotic relations
The continuous functions $f(z), g(z), h(z)$ and $k(z)$ of parameter or independent variable $z$ are defined on domain $D$ with element $L$ within the closure of $D$.

Big-O notation
The function $f(z)$ is of order $g(z)$ as $z$ approaches a finite number $L$, written with big-O notation as $f(z)=O(g(z)) \ (z \to L)$, if there exist a positive constant $M$, independent of $z$, and a neighborhood $V$ of $L$ meeting this condition. $$|f(z)| \le M |g(z)| \ \text{for} \ z \ \text{in} \ V$$ For $z$ approaching infinity, the big-O notation indicates there exist positive numbers $M$ and $N$ meeting this condition. $$ |f(z)| \le M |g(z)| \ \text{for} \ z > N$$ The big-O notation may apply to all elements $z$ in a set $S$. $$f(z)= O(g(z)) \ (z \in S) $$ If $g(z)$ is nonzero for $z$ near $L$, except possibly at $L$, then $f(z)=O(g(z)) \ (z \to L)$ indicates that the quotient $|f(z)/g(z)|$ is bounded.
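As an illustration not drawn from the text above, $\sin z = O(z) \ (z \to 0)$, since $|\sin z / z|$ is bounded by the constant $M=1$ in a punctured neighborhood of the origin; a quick numeric check:

```python
import math

# sin(z) = O(z) as z -> 0: the quotient |sin(z)/z| stays bounded (here by M = 1)
# for all sampled z in a punctured neighborhood of 0.
ratios = [abs(math.sin(z) / z) for z in (1e-1, 1e-3, 1e-6, -1e-4)]
assert all(r <= 1.0 for r in ratios)
print(max(ratios))  # close to 1, never exceeding the bound M = 1
```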

Little-o notation
The function $f(z)$ is much less than $g(z)$ as $z$ approaches $L$, written as $f(z) \ll g(z) \ (z \to L)$, if for any positive number $c$ there is a neighborhood $V$ of $L$ meeting this condition. $$ |f(z)| \le c |g(z)| \ \text{for} \ z \ \text{in} \ V$$ The statement that $f(z)$ is of lower order than $g(z)$ as $z$ approaches $L$, written with little-o notation as $f(z)=o(g(z)) \ (z \to L)$, is identical to the much less than relation $f(z) \ll g(z) \ (z \to L)$.

If $g(z)$ is nonzero for $z$ near $L$, except possibly at $L$, then $f(z)$ much less than $g(z) \ (z \to L)$ indicates that the quotient $|f(z)/g(z)|$ has limit 0 as $z$ approaches $L$. $$ \lim_{z \to L} \frac{f(z)}{g(z)} = 0$$

Asymptotic equivalence
The function $f(z)$ is asymptotically equivalent to $g(z)$ as $z$ approaches $L$, written as $f(z) \sim g(z) \ (z \to L)$, if this condition holds. $$ f(z)=g(z)(1+o(1)) \ (z \to L)$$ If $g(z)$ is nonzero for $z$ near $L$, except possibly at $L$, then $f(z) \sim g(z) \ (z \to L)$ indicates that the quotient $f(z)/g(z)$ has limit 1 as $z$ approaches $L$. $$\lim_{z \to L} \frac{f(z)}{g(z)} = 1$$ For these asymptotic relations, the function $g(z)$ is called the gauge function.
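These limit characterizations can be illustrated with a hypothetical example not taken from the text: $f(z) = z + z^{2}$ and $g(z) = z$ as $z \to 0$, so that $f \sim g$ while the difference $f - g = z^{2}$ is $o(g)$:

```python
# As z -> 0: f(z) = z + z**2 is asymptotically equivalent to g(z) = z,
# so f/g -> 1, while the difference f - g = z**2 is o(g), so (f-g)/g -> 0.
for z in (1e-1, 1e-3, 1e-6):
    f, g = z + z**2, z
    print(z, f / g, (f - g) / g)
# At z = 1e-6 the quotient f/g is within 1e-5 of 1, and (f-g)/g within 1e-5 of 0.
```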

Relation properties
The zero function, $f(z)=0$, can never be asymptotically equivalent to any other function.

The much less than ($\ll$) relation has the partial ordering property: if $f(z) \ll g(z)$ and $g(z) \ll h(z)$, then $f(z) \ll h(z)$.

Asymptotic equivalence has reflexive, symmetric and transitive properties. Additional properties are:
 * $f(z) \sim g(z) \ (z \to L)$ and $r$ a real number implies $(f(z))^{r} \sim (g(z))^{r} \ (z \to L)$
 * $f(z) \sim g(z) \ (z \to L)$ and $h(z) \sim k(z) \ (z \to L)$ implies $\frac{f(z)}{h(z)} \sim \frac{g(z)}{k(z)} \ (z \to L)$

Asymptotically equivalent functions remain asymptotically equivalent under integration if requirements related to convergence are met. There are more specific requirements for asymptotically equivalent functions to remain asymptotically equivalent under differentiation.



Asymptotic expansion
A sequence of functions, $\{ g_{0}(z), g_{1}(z), \ldots \}$, defined on domain $D$ is an asymptotic sequence (scale) as $z$ approaches $L$ if each function is much less than (of lower order than) the preceding function of the sequence, $g_{m+1}(z) \ll g_{m}(z) \ (z \to L)$.

Given an asymptotic sequence, $\{ g_{0}(z), g_{1}(z), \ldots \}$, an asymptotic expansion (series) to $N$ terms of function $f(z)$ is defined as this series. $$f(z)= \sum^{N}_{m=0} a_{m} g_{m}(z)+o(g_{N}(z)) \ (z \to L)$$ An asymptotic representation is a one-term asymptotic expansion.

A truncated asymptotic expansion is an asymptotic expansion containing a finite number of terms.

An asymptotic expansion of any number of terms, possibly infinite, has this form. $$f(z) \sim \sum^{\infty}_{m=0} a_{m} g_{m}(z) \ (z \to L)$$ Some authors use a more restricted definition, defining an asymptotic expansion as a series whose terms first decrease in magnitude, reach a minimum and then increase for large variable values.

Asymptotic expansion properties
Divergence, optimal truncation and approximation error are 3 important properties that may occur for an asymptotic expansion. The function $F(z)$, a sum of an $N$-term asymptotic expansion $S_{N}(z)$ and error (remainder) term $E_{N}(z)$, demonstrates these properties. $$ \begin{align} F(z)&=\int^{\infty}_{0} \frac{e^{-t}}{1+zt} \ dt \\ F(z)&=S_{N}(z)+E_{N}(z) \\ S_{N}(z)&=\int^{\infty}_{0} e^{-t} \Bigl \{ 1-zt+\ldots+(-z)^{N} t^{N} \Bigr \} \ dt \\ S_{N}(z)&=\int^{\infty}_{0} e^{-t} dt+ (-z) \int^{\infty}_{0} t e^{-t} dt+\ldots+(-z)^{N}\int^{\infty}_{0} t^{N} e^{-t} dt \\ S_{N}(z)&=1-1! \ z+ \dots +N! \ (-z)^{N} \\ E_{N}(z)&=(-z)^{N+1} \int^{\infty}_{0} \frac{t^{N+1} \ e^{-t}}{1+zt} \ dt \\ |E_{N}(z)|&<(N+1)! \ z^{N+1} \\ F(z)&\sim S_{N}(z) \ (z \to 0) \\ \end{align} $$

Divergence
The ratio test indicates that the full asymptotic series is divergent for all nonzero values of $z$. The series in curly brackets, $ \{1-zt+\ldots \}$, is a convergent series, providing an increasingly more accurate approximation of the integrand's denominator as the number of terms increases, if $|zt|<1$ or equivalently for the finite range $|t| < 1/|z|$. If the integral determining the expansion coefficients occurred over the range $t=0$ to $t <1/|z|$, the corresponding series expansion would converge. However, the integral occurs over a larger range, $t=0$ to $t=\infty$, leading to coefficients too large for series convergence. This leads to a divergent asymptotic expansion and the need to truncate the expansion after a finite number of terms. The limited range of convergence for a series used to construct the asymptotic expansion is a common cause of divergent asymptotic expansions.

Optimal truncation rule
The bound for the asymptotic expansion error $E_{N}(z)$ is minimized if the number of leading retained terms in the asymptotic expansion is the integer closest to $1/z-1$. For many optimally truncated expansions, the number of retained terms is proportional to $1/z$. This corresponds to the optimal truncation rule: find the expansion's smallest term and truncate the expansion just before the smallest term. This rule commonly generalizes to other expansion types. A similar optimal truncation rule is to retain all terms up to and including the smallest term and exclude the rest.
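The optimal truncation rule can be demonstrated on the example above, $F(z)=\int^{\infty}_{0} e^{-t}/(1+zt) \ dt$ with partial sums $S_{N}(z)=\sum_{n=0}^{N} n!(-z)^{n}$; a sketch that evaluates the integral with a simple composite Simpson rule (an assumed numerical scheme, not part of the original analysis):

```python
import math

def F(z, t_max=60.0, steps=60000):
    """Composite Simpson quadrature of e^(-t)/(1+z*t) over [0, t_max];
    the tail beyond t_max is negligible since e^(-60) ~ 1e-26."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * math.exp(-t) / (1 + z * t)
    return total * h / 3

def S(z, N):
    """Truncated asymptotic expansion S_N(z) = sum_{n=0}^{N} n! (-z)^n."""
    return sum(math.factorial(n) * (-z) ** n for n in range(N + 1))

z = 0.1
# The term magnitudes n! z^n shrink until n is near 1/z - 1 = 9, then grow;
# the optimal truncation rule says to stop there.
terms = [math.factorial(n) * z ** n for n in range(15)]
n_min = terms.index(min(terms))
print(n_min)                # smallest term near n = 9 or 10
print(abs(F(z) - S(z, 9)))  # error within the bound 10! * z**10
```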

Beyond all orders feature
An optimally truncated expansion commonly has an error term with a factor $e^{-q/z}$, with $q$ positive. This makes the approximation error a non-analytic function which a power series expansion cannot represent. This factor, absent in a power series expansion, is described as a beyond all orders feature.

Ordinary asymptotic approximation
The ordinary (Poincaré) asymptotic approximation is a function's asymptotic expansion truncated at a fixed number of terms unrelated to the function's parameter or variable $z$. This approximation may not contain the optimum number of terms to accurately approximate the function when the variable or parameter $z$ is in a specified range.

Superasymptotic approximation
The superasymptotic approximation is a function's optimally truncated asymptotic expansion. Superasymptotic approximations have an error on the order of $O(e^{-q/z})$ with $q$  a positive constant and a number of terms proportional to $1/z$.

Hyperasymptotic approximation
A hyperasymptotic approximation is an optimally truncated asymptotic approximation (superasymptotic approximation) with additional terms to correct the superasymptotic's error. This may require different "scaling assumptions" and leads to improved accuracy. Darboux's theorem states that the late expansion terms will have a common form, a form closely approximated by an expansion arising from a single singularity, the function's singularity closest to the expansion's origin.

Regularisation
Regularization is "the removal of the infinity in the remainder of a divergent series; regularised values can be evaluated for elementary series outside their circles of absolute convergence." Instead of truncating a series and ignoring the terminal divergent part of the series, this terminal divergent series is assigned a regularised value, the terminant. This approach identifies an integral that would generate this same divergent series, evaluates this integral and assigns this value to the terminant. This is feasible because the integral is assigned a finite value using methods like the residue theorem. One approach relies on Borel summation and a second approach relies on the Mellin inversion theorem (Mellin-Barnes regularisation).

Asymptotic expansions from differential equations
For homogeneous linear differential equations, solutions may arise as Taylor series and Frobenius series; asymptotic solutions may arise from dominant balance, phase integral (Wentzel–Kramers–Brillouin, Liouville–Green) and multiple-scale analysis methods. Asymptotic series also arise as perturbation series solutions. Using the Mellin transform, slowly converging series may be converted to accurate asymptotic series containing a small number of terms.

Asymptotic expansions from integrals
Asymptotic expansions approximating integrals are generated by these methods:
 * Taylor series
 * Integration by parts
 * Laplace's method
 * Watson's lemma
 * Stationary phase approximation
 * Method of steepest descent

Asymptotic expansions from sums
The Euler–Maclaurin formula generates an asymptotic expansion that approximates a sum.
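As a standard illustration (not derived in the text), the Euler–Maclaurin formula gives the asymptotic expansion of the harmonic sum, $H_{n} \approx \ln n + \gamma + \frac{1}{2n} - \frac{1}{12n^{2}}$; a sketch in Python, with the Euler–Mascheroni constant $\gamma$ hard-coded:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def harmonic_exact(n):
    """Direct summation of H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_em(n):
    """Euler–Maclaurin asymptotic approximation of H_n."""
    return math.log(n) + EULER_GAMMA + 1 / (2 * n) - 1 / (12 * n ** 2)

# At n = 100 the truncated expansion already matches the exact sum to
# roughly the size of the next term, 1/(120 n^4) ~ 1e-10.
n = 100
print(abs(harmonic_exact(n) - harmonic_em(n)))
```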

Summation of asymptotic expansions
There are methods that may accelerate the summation of slowly converging asymptotic expansions:
 * Shanks transformation
 * Richardson extrapolation
 * Euler summation
 * Borel summation
 * Padé approximant
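For instance, the Shanks transformation applied to the slowly converging alternating series for $\ln 2$ (a sketch; one nonlinear transformation step applied to consecutive partial sums):

```python
import math

def partial_sums(n):
    """Partial sums of the alternating harmonic series, which converges to ln 2."""
    s, out = 0.0, []
    for k in range(1, n + 1):
        s += (-1) ** (k + 1) / k
        out.append(s)
    return out

def shanks(s):
    """One Shanks step: S'_n = (S_{n+1} S_{n-1} - S_n^2) / (S_{n+1} + S_{n-1} - 2 S_n)."""
    return [(s[i + 1] * s[i - 1] - s[i] ** 2) / (s[i + 1] + s[i - 1] - 2 * s[i])
            for i in range(1, len(s) - 1)]

s = partial_sums(12)
t = shanks(s)
print(abs(s[-1] - math.log(2)))  # raw partial sum: error ~ 0.04
print(abs(t[-1] - math.log(2)))  # after one Shanks step: error below 1e-3
```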

Converting a series to an integral
The sub-representation method may generate an integral representation from the function's asymptotic expansion. It may then be possible to use methods such as Laplace's method, the stationary phase method or the method of steepest descent to accurately evaluate this integral.

The function's asymptotic expansion is known
 * $$f(z)=\sum^{\infty}_{n=0} \ a_{n} z^{n}$$.

From a table of function series, a function with similar terms, called the kernel is selected
 * $$k(z)=\sum^{\infty}_{n=0} \ b_{n} z^{n}$$.

From another table, an appropriate sub-representation with functions $g(t)$ and $h(t)$ is selected that satisfies
 * $$\frac{a_{n}}{b_{n}}=\int \ (g(t))^{n} \ h(t) \ dt$$.

The integral representation is obtained by means of an h-transform
 * $$f(z)= \int \ k(zg(t)) \ h(t) \ dt $$.

Differential equations
Asymptotic analysis is a key tool for exploring the ordinary and partial differential equations which arise in the mathematical modelling of real-world phenomena.

An illustrative example is the derivation of the boundary layer equations from the full Navier–Stokes equations governing fluid flow. In many cases, the asymptotic expansion is in powers of a small parameter, $\varepsilon$: in the boundary layer case, this is the non-dimensional ratio of the boundary layer thickness to a typical length scale of the problem. Applications of asymptotic analysis in mathematical modelling often center around a non-dimensional parameter which has been shown, or assumed, to be small through a consideration of the scales of the problem at hand.

Statistics and probability theory
In mathematical statistics and probability theory, asymptotics are used in analysis of long-run or large-sample behavior of random variables and estimators.

Asymptotic theory provides limiting approximations of the probability distribution of sample statistics, such as the likelihood ratio statistic and the expected value of the deviance. Asymptotic theory does not provide a method of evaluating the finite-sample distributions of sample statistics. However, non-asymptotic bounds are provided by methods of approximation theory.

In mathematical statistics, an asymptotic distribution is a hypothetical distribution that is in a sense the "limiting" distribution of a sequence of distributions. A distribution is an ordered set of random variables $Z_{i}$, for $i = 1, \ldots, n$, for some positive integer $n$. An asymptotic distribution allows $i$ to range without bound, that is, $n$ is infinite.

A special case of an asymptotic distribution is when the late entries go to zero, that is, the $Z_{i}$ go to 0 as $i$ goes to infinity. Some instances of "asymptotic distribution" refer only to this special case.

This is based on the notion of an asymptotic function which cleanly approaches a constant value (the asymptote) as the independent variable goes to infinity; "clean" in this sense meaning that for any desired closeness epsilon there is some value of the independent variable after which the function never differs from the constant by more than epsilon.

The Edgeworth series provides asymptotic approximations of probability distributions.

Geometry
An asymptote is a straight line that a curve approaches but never meets or crosses. Informally, one may speak of the curve meeting the asymptote "at infinity" although this is not a precise definition. In the equation $$y = \frac{1}{x},$$ $y$ becomes arbitrarily small in magnitude as $x$ increases, so the curve approaches its asymptote, the line $y=0$.

Applied mathematics
In applied mathematics, asymptotic analysis is used to build numerical methods to approximate equation solutions.

Computer science
In computer science, asymptotic analysis is used in the analysis of algorithms when considering the performance of algorithms on large inputs.

Models of physical systems
Asymptotic analysis describes the behavior of physical systems, an example being statistical mechanics. Feynman graphs are an important tool in quantum field theory and the corresponding asymptotic expansions often do not converge.

Asymptotic analysis also applies to accident analysis when identifying the causation of crashes through count modeling with a large number of crash counts in a given time and space.

Asymptotic versus numerical analysis
De Bruijn illustrates the use of asymptotics in the following dialog between Miss N.A., a Numerical Analyst, and Dr. A.A., an Asymptotic Analyst: N.A.: I want to evaluate my function $f(x)$ for large values of $x$, with a relative error of at most 1%.

A.A.: $$f(x)=x^{-1}+\mathrm O(x^{-2}) \qquad (x\to\infty)$$.

N.A.: I am sorry, I don't understand.

A.A.: $$|f(x)-x^{-1}|<8x^{-2} \qquad (x>10^4).$$

N.A.: But my value of $x$ is only 100.

A.A.: Why did you not say so? My evaluations give $$|f(x)-x^{-1}|<57000x^{-2} \qquad (x>100).$$

N.A.: This is no news to me. I know already that $$0<f(100)<1$$.

A.A.: I can gain a little on some of my estimates. Now I find that $$|f(x)-x^{-1}|<20x^{-2} \qquad (x>100).$$

N.A.: I asked for 1%, not for 20%.

A.A.: It is almost the best thing I possibly can get. Why don't you take larger values of $x$?

N.A.: !!! I think it's better to ask my electronic computing machine.

Machine: f(100) = 0.01137 42259 34008 67153

A.A.: Haven't I told you so? My estimate of 20% was not far off from the 14% of the real error.

N.A.: !!! . . . !

Some days later, Miss N.A. wants to know the value of f(1000), but her machine would take a month of computation to give the answer. She returns to her Asymptotic Colleague, and gets a fully satisfactory reply.