Analytic number theory



In mathematics, analytic number theory is a branch of number theory that uses methods from mathematical analysis to solve problems about the integers. It is often said to have begun with Peter Gustav Lejeune Dirichlet's 1837 introduction of Dirichlet L-functions to give the first proof of Dirichlet's theorem on arithmetic progressions. It is well known for its results on prime numbers (involving the Prime Number Theorem and Riemann zeta function) and additive number theory (such as the Goldbach conjecture and Waring's problem).

Branches of analytic number theory
Analytic number theory can be split up into two major parts, divided more by the type of problems they attempt to solve than fundamental differences in technique.
 * Multiplicative number theory deals with the distribution of the prime numbers, such as estimating the number of primes in an interval, and includes the prime number theorem and Dirichlet's theorem on primes in arithmetic progressions.
 * Additive number theory is concerned with the additive structure of the integers, such as Goldbach's conjecture that every even number greater than 2 is the sum of two primes. One of the main results in additive number theory is the solution to Waring's problem.

Precursors
Much of analytic number theory was inspired by the prime number theorem. Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x / ln(x) is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x / ln(x) as x approaches infinity is 1:


 * $$\lim_{x\to\infty}\frac{\pi(x)}{x/\ln(x)}=1,$$

known as the asymptotic law of distribution of prime numbers.
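As a numerical illustration (a sketch, not part of any proof), one can tabulate the ratio π(x) / (x / ln x) with a simple sieve and watch it drift toward 1:

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes: list of all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, flag in enumerate(sieve) if flag]

def prime_pi(x, primes):
    """The prime-counting function pi(x)."""
    return sum(1 for p in primes if p <= x)

primes = primes_up_to(10 ** 6)
for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    print(x, prime_pi(x, primes) / (x / log(x)))
```

The ratio decreases toward 1 quite slowly (about 1.16 at x = 10³ and still about 1.08 at 10⁶), which hints at why the theorem resisted proof for a century.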

Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a/(A ln(a) + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B ≈ −1.08366. Carl Friedrich Gauss considered the same question: "Im Jahr 1792 oder 1793" ('in the year 1792 or 1793'), according to his own recollection nearly sixty years later in a letter to Encke (1849), he wrote in his logarithm table (he was then 15 or 16) the short note "Primzahlen unter $$a(=\infty) \frac a{\ln a}$$" ('prime numbers under $$a(=\infty) \frac a{\ln a}$$'). But Gauss never published this conjecture. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x / ln(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients.
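The superiority of Dirichlet's approximation in differences is easy to see numerically (a sketch; the Simpson-rule helper below is purely illustrative, not any historical method). It computes the offset integral Li(x) = ∫₂ˣ dt/ln t and compares both errors at x = 10⁶, using the known value π(10⁶) = 78498:

```python
from math import exp, log

def li_offset(x, steps=10000):
    """Offset logarithmic integral Li(x) = integral from 2 to x of dt/ln t,
    computed by Simpson's rule after the substitution u = ln t (dt = e^u du)."""
    a, b = log(2.0), log(x)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        u = a + i * h
        weight = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += weight * exp(u) / u
    return total * h / 3

x, pi_x = 10 ** 6, 78498          # pi(10^6) = 78498 is a known value
print(abs(pi_x - li_offset(x)))   # error of Dirichlet's approximation
print(abs(pi_x - x / log(x)))     # error of Legendre-style x / ln x
```

At x = 10⁶ the first difference is roughly 130, while the second exceeds 6000.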

Dirichlet
Johann Peter Gustav Lejeune Dirichlet is credited with the creation of analytic number theory, a field in which he found several deep results and in proving them introduced some fundamental tools, many of which were later named after him. In 1837 he published Dirichlet's theorem on arithmetic progressions, using mathematical analysis concepts to tackle an algebraic problem and thus creating the branch of analytic number theory. In proving the theorem, he introduced the Dirichlet characters and L-functions. In 1841 he generalized his arithmetic progressions theorem from integers to the ring of Gaussian integers $$\mathbb{Z}[i]$$.

Chebyshev
In two papers from 1848 and 1850, the Russian mathematician Pafnuty L'vovich Chebyshev attempted to prove the asymptotic law of distribution of prime numbers. His work is notable for the use of the zeta function ζ(s) (for real values of the argument "s", as in works of Leonhard Euler as early as 1737), predating Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law: if the limit of π(x)/(x/ln(x)) as x goes to infinity exists at all, then it is necessarily equal to one. He was also able to prove unconditionally that this ratio is bounded above and below by two explicitly given constants close to 1 for all sufficiently large x. Although Chebyshev's paper did not prove the Prime Number Theorem, his estimates for π(x) were strong enough for him to prove Bertrand's postulate that there exists a prime number between n and 2n for any integer n ≥ 2.
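Bertrand's postulate is easy to check for small n by brute force (a sketch for illustration only, bearing no resemblance to Chebyshev's analytic argument):

```python
from math import isqrt

def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def bertrand_prime(n):
    """Least prime p with n < p < 2n; Bertrand's postulate guarantees one
    exists for every integer n >= 2. Returns None if the search fails."""
    return next((p for p in range(n + 1, 2 * n) if is_prime(p)), None)

print([bertrand_prime(n) for n in range(2, 12)])
```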

Riemann
Bernhard Riemann made some famous contributions to modern analytic number theory. In a single short paper (the only one he published on the subject of number theory), he investigated the Riemann zeta function and established its importance for understanding the distribution of prime numbers. He made a series of conjectures about properties of the zeta function, one of which is the well-known Riemann hypothesis.

Hadamard and de la Vallée-Poussin
Extending the ideas of Riemann, two proofs of the prime number theorem were obtained independently by Jacques Hadamard and Charles Jean de la Vallée-Poussin and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function ζ(s) is non-zero for all complex values of the variable s that have the form s = 1 + it with t > 0.

Modern times
The biggest technical change after 1950 has been the development of sieve methods, particularly in multiplicative problems. These are combinatorial in nature, and quite varied. The extremal branch of combinatorial theory has in return been greatly influenced by the value placed in analytic number theory on quantitative upper and lower bounds. Another recent development is probabilistic number theory, which uses methods from probability theory to estimate the distribution of number theoretic functions, such as how many prime divisors a number has.

Specifically, breakthroughs on gaps between primes by Yitang Zhang, James Maynard, Terence Tao and Ben Green have all built on the Goldston–Pintz–Yıldırım method, which Goldston, Pintz and Yıldırım originally used to prove that

$$\liminf_{n\to\infty}\frac{p_{n+1}-p_n}{\log p_n}=0.$$

Developments within analytic number theory are often refinements of earlier techniques, which reduce the error terms and widen their applicability. For example, the circle method of Hardy and Littlewood was conceived as applying to power series near the unit circle in the complex plane; it is now thought of in terms of finite exponential sums (that is, on the unit circle, but with the power series truncated). The needs of Diophantine approximation are for auxiliary functions that are not generating functions—their coefficients are constructed by use of a pigeonhole principle—and involve several complex variables. The fields of Diophantine approximation and transcendence theory have expanded, to the point that the techniques have been applied to the Mordell conjecture.

Problems and results
Theorems and results within analytic number theory tend not to be exact structural results about the integers, for which algebraic and geometrical tools are more appropriate. Instead, they give approximate bounds and estimates for various number theoretical functions, as the following examples illustrate.

Multiplicative number theory
Euclid showed that there are infinitely many prime numbers. An important question is to determine the asymptotic distribution of the prime numbers; that is, a rough description of how many primes are smaller than a given number. Gauss, amongst others, after computing a large list of primes, conjectured that the number of primes less than or equal to a large number N is close to the value of the integral


 * $$\int^N_2 \frac{1}{\log t} \, dt.$$

In 1859 Bernhard Riemann used complex analysis and a special meromorphic function now known as the Riemann zeta function to derive an analytic expression for the number of primes less than or equal to a real number x. Remarkably, the main term in Riemann's formula was exactly the above integral, lending substantial weight to Gauss's conjecture. Riemann found that the error terms in this expression, and hence the manner in which the primes are distributed, are closely related to the complex zeros of the zeta function. Using Riemann's ideas and by getting more information on the zeros of the zeta function, Jacques Hadamard and Charles Jean de la Vallée-Poussin managed to complete the proof of Gauss's conjecture. In particular, they proved that if
 * $$\pi(x) = (\text{number of primes }\leq x),$$

then


 * $$\lim_{x \to \infty} \frac{\pi(x)}{x/\log x} = 1.$$

This remarkable result is what is now known as the prime number theorem. It is a central result in analytic number theory. Loosely speaking, it states that given a large number N, the number of primes less than or equal to N is about N/log(N).

More generally, the same question can be asked about the number of primes in any arithmetic progression a + nq (n = 0, 1, 2, ...). In one of the first applications of analytic techniques to number theory, Dirichlet proved that any arithmetic progression with a and q coprime contains infinitely many primes. The prime number theorem can be generalised to this problem; letting
 * $$\pi(x, a, q) = (\text {number of primes } p \leq x \text{ such that } p \text{ is in the arithmetic progression } a + nq, n \in \mathbf Z), $$

then if a and q are coprime,


 * $$\lim_{x \to \infty} \frac{\pi(x,a,q)\phi(q)}{x/\log x} = 1.$$
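This can be checked empirically; the sketch below counts primes in the residue classes 1 and 3 modulo 4 (an illustrative choice of progression) and forms the ratio above:

```python
from math import gcd, isqrt, log

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def euler_phi(q):
    """Euler's totient: number of 1 <= a <= q coprime to q."""
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

def pi_ap(x, a, q):
    """Number of primes p <= x with p congruent to a mod q."""
    return sum(1 for p in range(2, x + 1) if p % q == a % q and is_prime(p))

x, q = 10 ** 5, 4
for a in (1, 3):
    print(a, pi_ap(x, a, q) * euler_phi(q) / (x / log(x)))
```

Both ratios are near 1.1 at x = 10⁵ (the same slow convergence seen in the prime number theorem itself), and each coprime class receives roughly a 1/φ(q) share of the primes.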

There are also many deep and wide-ranging conjectures in number theory whose proofs seem too difficult for current techniques, such as the twin prime conjecture which asks whether there are infinitely many primes p such that p + 2 is prime. On the assumption of the Elliott–Halberstam conjecture it has been proven recently that there are infinitely many primes p such that p + k is prime for some positive even k at most 12. Also, it has been proven unconditionally (i.e. not depending on unproven conjectures) that there are infinitely many primes p such that p + k is prime for some positive even k at most 246.
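Though the conjecture remains open, counting twin primes up to a bound is straightforward (a brute-force sketch for illustration only):

```python
from math import isqrt

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def twin_pairs(x):
    """Number of primes p <= x such that p + 2 is also prime."""
    return sum(1 for p in range(2, x + 1) if is_prime(p) and is_prime(p + 2))

print([twin_pairs(10 ** k) for k in (2, 3, 4)])
```

The counts grow steadily but much more slowly than π(x), consistent with the expectation that twin primes thin out roughly like x/(log x)².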

Additive number theory
One of the most important problems in additive number theory is Waring's problem, which asks whether it is possible, for any k ≥ 2, to write any positive integer as the sum of a bounded number of kth powers,


 * $$n=x_1^k+\cdots+x_\ell^k.$$

The case for squares, k = 2, was answered by Lagrange in 1770, who proved that every positive integer is the sum of at most four squares. The general case was proved by Hilbert in 1909, using algebraic techniques which gave no explicit bounds. An important breakthrough was the application of analytic tools to the problem by Hardy and Littlewood. These techniques are known as the circle method, and give explicit upper bounds for the function G(k), the smallest number of kth powers needed to represent every sufficiently large integer, such as Vinogradov's bound


 * $$G(k)\leq k(3\log k+11).$$
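Lagrange's four-square theorem (the k = 2 case) can be verified by direct search (a brute-force sketch, nothing like Hilbert's algebraic or Hardy–Littlewood's analytic methods):

```python
from math import isqrt

def four_squares(n):
    """Find (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n by descending search;
    Lagrange's theorem guarantees a solution for every positive integer n."""
    for a in range(isqrt(n), -1, -1):
        r1 = n - a * a
        for b in range(isqrt(r1), -1, -1):
            r2 = r1 - b * b
            for c in range(isqrt(r2), -1, -1):
                d = isqrt(r2 - c * c)
                if d * d == r2 - c * c:
                    return (a, b, c, d)

print(four_squares(7))   # -> (2, 1, 1, 1)
```

Numbers such as 7 genuinely require four squares, so the bound "at most four" is sharp.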

Diophantine problems
Diophantine problems are concerned with integer solutions to polynomial equations: one may study the distribution of solutions, that is, counting solutions according to some measure of "size" or height.

An important example is the Gauss circle problem, which asks for integer points (x, y) which satisfy
 * $$x^2+y^2\leq r^2.$$

In geometrical terms, given a circle centered about the origin in the plane with radius r, the problem asks how many integer lattice points lie on or inside the circle. It is not hard to prove that the answer is $$\pi r^2 + E(r)$$, where $$E(r)/r^2 \to 0$$ as $$r \to \infty$$. Again, the difficult part and a great achievement of analytic number theory is obtaining specific upper bounds on the error term E(r).
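The count itself is easy to carry out directly (a sketch), which makes the slow growth of E(r) visible:

```python
from math import isqrt, pi

def lattice_points(r):
    """Number of integer points (x, y) with x^2 + y^2 <= r^2: for each x,
    y ranges over -floor(sqrt(r^2 - x^2)) .. +floor(sqrt(r^2 - x^2))."""
    r2 = r * r
    return sum(2 * isqrt(r2 - x * x) + 1 for x in range(-r, r + 1))

for r in (10, 100, 1000):
    count = lattice_points(r)
    print(r, count, count - pi * r * r)   # last column is E(r)
```

The error column grows far more slowly than r², in line with the bounds discussed next.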

It was shown by Gauss that $$ E(r) = O(r)$$. In general, an O(r) error term would be possible with the unit circle (or, more properly, the closed unit disk) replaced by the dilates of any bounded planar region with piecewise smooth boundary. Furthermore, replacing the unit circle by the unit square, the error term for the general problem can be as large as a linear function of r. Therefore, getting an error bound of the form $$O(r^{\delta})$$ for some $$\delta < 1$$ in the case of the circle is a significant improvement. The first to attain this was Sierpiński in 1906, who showed $$ E(r) = O(r^{2/3})$$. In 1915, Hardy and Landau each showed that one does not have $$E(r) = O(r^{1/2})$$. Since then the goal has been to show that for each fixed $$\epsilon > 0$$ there exists a real number $$C(\epsilon)$$ such that $$E(r) \leq C(\epsilon) r^{1/2 + \epsilon}$$.

In 2000 Huxley showed that $$E(r) = O(r^{131/208})$$, which is the best published result.

Dirichlet series
One of the most useful tools in multiplicative number theory is the Dirichlet series, a function of a complex variable defined by an infinite series of the form


 * $$f(s)=\sum_{n=1}^\infty a_nn^{-s}.$$

Depending on the choice of coefficients $$a_n$$, this series may converge everywhere, nowhere, or on some half plane. In many cases, even where the series does not converge everywhere, the holomorphic function it defines may be analytically continued to a meromorphic function on the entire complex plane. The utility of functions like this in multiplicative problems can be seen in the formal identity


 * $$\left(\sum_{n=1}^\infty a_nn^{-s}\right)\left(\sum_{n=1}^\infty b_nn^{-s}\right)=\sum_{n=1}^\infty\left(\sum_{k\ell=n}a_kb_\ell\right)n^{-s};$$

hence the coefficients of the product of two Dirichlet series are the multiplicative convolutions of the original coefficients. Furthermore, techniques such as partial summation and Tauberian theorems can be used to get information about the coefficients from analytic information about the Dirichlet series. Thus a common method for estimating a multiplicative function is to express it as a Dirichlet series (or a product of simpler Dirichlet series using convolution identities), examine this series as a complex function and then convert this analytic information back into information about the original function.
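The convolution identity can be checked coefficientwise. For instance (a sketch), convolving the coefficients of ζ(s) with themselves yields the divisor-counting function d(n), reflecting the identity ζ(s)² = Σ d(n) n⁻ˢ:

```python
def dirichlet_convolve(a, b, N):
    """Coefficients c_n = sum over k*l = n of a_k * b_l of the product of
    two Dirichlet series, for 1 <= n <= N (index 0 is unused)."""
    c = [0] * (N + 1)
    for k in range(1, N + 1):
        for l in range(1, N // k + 1):
            c[k * l] += a[k] * b[l]
    return c

N = 20
ones = [0] + [1] * N                   # coefficients of zeta(s): a_n = 1
d = dirichlet_convolve(ones, ones, N)  # zeta(s)^2 has coefficients d(n)
divisors = [0] + [sum(1 for k in range(1, n + 1) if n % k == 0)
                  for n in range(1, N + 1)]
print(d == divisors)   # -> True
```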

Riemann zeta function
Euler showed that the fundamental theorem of arithmetic implies (at least formally) the Euler product
 * $$ \sum_{n=1}^\infty \frac {1}{n^s} = \prod_p \frac {1}{1-p^{-s}}\text{ for }s > 1$$

where the product is taken over all prime numbers p.
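Truncating both sides gives a quick numerical check of the identity at s = 2, where ζ(2) = π²/6 (a sketch; the truncation bounds are ad hoc):

```python
from math import isqrt, pi

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

s = 2.0
zeta_sum = sum(n ** -s for n in range(1, 100000))   # truncated Dirichlet series
euler_prod = 1.0
for p in (n for n in range(2, 1000) if is_prime(n)):
    euler_prod *= 1.0 / (1.0 - p ** -s)             # truncated Euler product

print(zeta_sum, euler_prod, pi ** 2 / 6)   # all three agree to ~3 decimals
```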

Euler's proof of the infinitude of prime numbers makes use of the divergence of the left-hand side at s = 1 (the so-called harmonic series), a purely analytic result. Euler was also the first to use analytical arguments to study properties of integers, specifically by constructing generating power series. This was the beginning of analytic number theory.

Later, Riemann considered this function for complex values of s and showed that it can be extended to a meromorphic function on the entire plane with a simple pole at s = 1. This function is now known as the Riemann zeta function and is denoted by ζ(s). There is a plethora of literature on this function, which is a special case of the more general Dirichlet L-functions.

Analytic number theorists are often interested in the error of approximations such as the prime number theorem. In this case, the error is of smaller order than x/log x. Riemann's formula for π(x) shows that the error term in this approximation can be expressed in terms of the zeros of the zeta function. In his 1859 paper, Riemann conjectured that all the "non-trivial" zeros of ζ lie on the line $$ \Re(s) = 1/2 $$ but never provided a proof of this statement. This famous and long-standing conjecture is known as the Riemann Hypothesis and has many deep implications in number theory; in fact, many important theorems have been proved under the assumption that the hypothesis is true. For example, under the assumption of the Riemann Hypothesis, the error term in the prime number theorem is $$O(x^{1/2+\varepsilon})$$.

In the early 20th century G. H. Hardy and J. E. Littlewood proved many results about the zeta function in an attempt to prove the Riemann Hypothesis. In fact, in 1914, Hardy proved that there are infinitely many zeros of the zeta function on the critical line
 * $$ \Re(z) = 1/2. $$

This led to several theorems describing the density of the zeros on the critical line.