Chebyshev function



In mathematics, the Chebyshev function is either a scalarising function (Tchebycheff function) or one of two related functions. The first Chebyshev function $\vartheta(x)$ (also written $\theta(x)$) is given by


 * $$\vartheta(x) = \sum_{p \le x} \log p$$

where $$\log$$ denotes the natural logarithm, with the sum extending over all prime numbers $p$ that are less than or equal to $x$.
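As an illustrative sketch (the helper names `primes_up_to` and `theta` are ours, not standard-library functions), $\vartheta(x)$ can be evaluated directly from this definition:

```python
from math import log

def primes_up_to(x):
    """All primes p <= x by a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p in range(2, x + 1) if sieve[p]]

def theta(x):
    """First Chebyshev function: sum of log p over primes p <= x."""
    return sum(log(p) for p in primes_up_to(int(x)))
```

For example, `theta(10)` sums $\log 2 + \log 3 + \log 5 + \log 7 = \log 210$.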

The second Chebyshev function $\psi(x)$ is defined similarly, with the sum extending over all prime powers not exceeding $x$:


 * $$\psi(x) = \sum_{k \in \mathbb{N}}\sum_{p^k \le x}\log p = \sum_{n \leq x} \Lambda(n) = \sum_{p \le x}\left\lfloor\log_p x\right\rfloor\log p,$$

where $\Lambda$ is the von Mangoldt function. The Chebyshev functions, especially the second one $\psi(x)$, are often used in proofs related to prime numbers, because it is typically simpler to work with them than with the prime-counting function $\pi(x)$ (see the exact formula below). Both Chebyshev functions are asymptotic to $x$, a statement equivalent to the prime number theorem.
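The equality of the prime-power sum and the $\Lambda$-sum can be checked numerically; the following is a sketch with our own helper names (`von_mangoldt`, `psi`), not a library API:

```python
from math import log

def von_mangoldt(n):
    """Λ(n) = log p if n = p^k for a prime p and k >= 1, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return log(n)  # no factor <= sqrt(n), so n is prime

def psi(x):
    """Second Chebyshev function via the von Mangoldt sum."""
    return sum(von_mangoldt(n) for n in range(2, int(x) + 1))
```

For instance, $\psi(10)$ counts $\log 2$ three times (from $2, 4, 8$) and $\log 3$ twice (from $3, 9$), matching the $\lfloor\log_p x\rfloor$ form.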

The Tchebycheff function, Chebyshev utility function, or weighted Tchebycheff scalarizing function is used when one has several functions to be minimized and wants to "scalarize" them into a single function:


 * $$f_{Tchb}(x,w) = \max_i w_i f_i(x).$$

By minimizing this function for different values of $$w$$, one obtains every point on a Pareto front, even in the nonconvex parts. Often the functions to be minimized are not $$f_i$$ but $$|f_i-z_i^*|$$ for some scalars $$z_i^*$$. Then $$f_{Tchb}(x,w) = \max_i w_i |f_i(x)-z_i^*|.$$
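A minimal sketch of this scalarization, using two hypothetical objectives $f_1(x) = x^2$ and $f_2(x) = (x-2)^2$ (these objectives and the name `tchebycheff` are our own illustration):

```python
def tchebycheff(x, w):
    """Weighted Tchebycheff scalarization of f1(x) = x^2 and
    f2(x) = (x - 2)^2, with reference point z* = (0, 0)."""
    return max(w[0] * x ** 2, w[1] * (x - 2) ** 2)

# Minimizing over a grid: with equal weights the minimizer is x = 1,
# where the two weighted objectives balance; other weight vectors
# trace out other points of the Pareto front.
grid = [i / 1000 for i in range(2001)]
best = min(grid, key=lambda x: tchebycheff(x, (0.5, 0.5)))
```

Sweeping `w` over the simplex and re-minimizing recovers the Pareto front, including its nonconvex parts.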

All three functions are named in honour of Pafnuty Chebyshev.

Relationships
The second Chebyshev function can be seen to be related to the first by writing it as


 * $$\psi(x) = \sum_{p \le x}k \log p$$

where $k$ is the unique integer such that $p^k \le x$ and $x < p^{\,k+1}$. The values of $k$ are given in the OEIS. A more direct relationship is given by


 * $$\psi(x) = \sum_{n=1}^\infty \vartheta\big(x^{\frac{1}{n}}\big).$$

This last sum has only a finite number of non-vanishing terms, as


 * $$\vartheta\big(x^{\frac{1}{n}}\big) = 0\quad \text{for}\quad n>\log_2 x = \frac{\log x}{\log 2}.$$

The second Chebyshev function is the logarithm of the least common multiple of the integers from 1 to $n$:


 * $$\operatorname{lcm}(1,2,\dots,n) = e^{\psi(n)}.$$

Values of $\operatorname{lcm}(1, 2, \dots, n)$ for the integer variable $n$ are given in the OEIS.
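This identity is easy to test numerically; a sketch with our own helper names (`lcm_up_to`, `psi_via_powers`):

```python
from math import gcd, log

def lcm_up_to(n):
    """lcm(1, 2, ..., n), built up one factor at a time."""
    result = 1
    for k in range(2, n + 1):
        result = result * k // gcd(result, k)
    return result

def psi_via_powers(n):
    """ψ(n) as Σ k·log p over primes p <= n, where p^k <= n < p^(k+1)."""
    total = 0.0
    for p in range(2, n + 1):
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):  # p prime
            k = 1
            while p ** (k + 1) <= n:
                k += 1
            total += k * log(p)
    return total
```

For example, $\operatorname{lcm}(1,\dots,10) = 2520$ and $\psi(10) = \log 2520$.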

Relationships between ψ(x)/x and ϑ(x)/x
The following theorem relates the two quotients $$\frac{\psi(x)}{x}$$ and $$\frac{\vartheta(x)}{x}$$.

Theorem: For $$x>0$$, we have


 * $$0 \leq \frac{\psi(x)}{x}-\frac{\vartheta(x)}{x}\leq \frac{(\log x)^2}{2\sqrt{x}\log 2}.$$

This inequality implies that


 * $$\lim_{x\to\infty}\!\left(\frac{\psi(x)}{x}-\frac{\vartheta(x)}{x}\right)\! = 0.$$

In other words, if one of $\psi(x)/x$ and $\vartheta(x)/x$ tends to a limit, then so does the other, and the two limits are equal.

Proof: Since $$\psi(x)=\sum_{n \leq \log_2 x}\vartheta(x^{1/n})$$, we find that


 * $$0 \leq \psi(x)-\vartheta(x)=\sum_{2\leq n \leq \log_2 x}\vartheta(x^{1/n}).$$

But from the definition of $$\vartheta(x)$$ we have the trivial inequality


 * $$\vartheta(x)\leq \sum_{p\leq x}\log x\leq x\log x$$

so


 * $$\begin{align}

0\leq\psi(x)-\vartheta(x)&\leq \sum_{2\leq n\leq \log_2 x}x^{1/n}\log(x^{1/n})\\ &\leq(\log_2 x)\sqrt{x}\log\sqrt{x}\\ &=\frac{\log x}{\log 2}\frac{\sqrt{x}}{2}\log x\\ &=\frac{\sqrt{x}\,(\log x)^2}{2\log 2}. \end{align}$$

Lastly, divide by $$x$$ to obtain the inequality in the theorem.
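The theorem's bound can be spot-checked numerically; a sketch (the helper name `theta_and_psi` is ours):

```python
from math import log, sqrt

def theta_and_psi(x):
    """Compute ϑ(x) and ψ(x) together from a simple sieve."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    theta = psi = 0.0
    for p in range(2, x + 1):
        if sieve[p]:
            theta += log(p)
            k = 1                      # multiplicity: p^k <= x < p^(k+1)
            while p ** (k + 1) <= x:
                k += 1
            psi += k * log(p)
    return theta, psi

# Verify 0 <= ψ(x)/x − ϑ(x)/x <= (log x)^2 / (2 √x log 2) at a few points.
for x in (121, 1000, 10000):
    th, ps = theta_and_psi(x)
    diff = ps / x - th / x
    bound = log(x) ** 2 / (2 * sqrt(x) * log(2))
    assert 0 <= diff <= bound
```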

Asymptotics and bounds
The following bounds are known for the Chebyshev functions (in these formulas $p_k$ is the $k$th prime number; $p_1 = 2$, $p_2 = 3$, etc.):


 * $$\begin{align}
\vartheta(p_k) &\ge k\left( \log k+\log\log k-1+\frac{\log\log k-2.050735}{\log k}\right)&& \text{for }k\ge10^{11}, \\[8px]
\vartheta(p_k) &\le k\left( \log k+\log\log k-1+\frac{\log\log k-2}{\log k}\right)&& \text{for }k \ge 198, \\[8px]
|\vartheta(x)-x| &\le 0.006788\,\frac{x}{\log x}&& \text{for }x \ge 10\,544\,111, \\[8px]
|\psi(x)-x| &\le0.006409\,\frac{x}{\log x}&& \text{for } x \ge e^{22},\\[8px]
0.9999\sqrt{x} &< \psi(x)-\vartheta(x)<1.00007\sqrt{x}+1.78\sqrt[3]{x}&& \text{for }x\ge121.
\end{align}$$

Furthermore, under the Riemann hypothesis,


 * $$\begin{align}
|\vartheta(x)-x| &= O\Big(x^{\frac12+\varepsilon}\Big), \\
|\psi(x)-x| &= O\Big(x^{\frac12+\varepsilon}\Big)
\end{align}$$

for any $\varepsilon > 0$.

Upper bounds exist for both $\vartheta(x)$ and $\psi(x)$ such that


 * $$\begin{align} \vartheta(x)&<1.000028x \\ \psi(x)&<1.03883x \end{align}$$

for any $x > 0$.

An explanation of the constant 1.03883 is given in the OEIS.

The exact formula
In 1895, Hans Carl Friedrich von Mangoldt proved an explicit expression for $\psi(x)$ as a sum over the nontrivial zeros of the Riemann zeta function:


 * $$\psi_0(x) = x - \sum_{\rho} \frac{x^{\rho}}{\rho} - \frac{\zeta'(0)}{\zeta(0)} - \tfrac{1}{2} \log (1-x^{-2}).$$

(The numerical value of $\zeta'(0)/\zeta(0)$ is $\log(2\pi)$.) Here $\rho$ runs over the nontrivial zeros of the zeta function, and $\psi_0$ is the same as $\psi$, except that at its jump discontinuities (the prime powers) it takes the value halfway between the values to the left and the right:


 * $$\psi_0(x)

= \frac{1}{2}\!\left( \sum_{n \leq x} \Lambda(n)+\sum_{n < x} \Lambda(n)\right) =\begin{cases} \psi(x) - \tfrac{1}{2} \Lambda(x) & x = 2,3,4,5,7,8,9,11,13,16,\dots \\ [5px] \psi(x) & \mbox{otherwise.} \end{cases}$$

From the Taylor series for the logarithm, the last term in the explicit formula can be understood as a summation of $x^{\omega}/\omega$ over the trivial zeros of the zeta function, $\omega = -2, -4, -6, \dots$, i.e.


 * $$\sum_{k=1}^{\infty} \frac{x^{-2k}}{-2k} = \tfrac{1}{2} \log \left( 1 - x^{-2} \right).$$

Similarly, the first term, $x = x^{1}/1$, corresponds to the simple pole of the zeta function at 1. Being a pole rather than a zero, it accounts for the opposite sign of the term.

Properties
A theorem due to Erhard Schmidt states that, for some explicit positive constant $K$, there are infinitely many natural numbers $x$ such that


 * $$\psi(x)-x < -K\sqrt{x}$$

and infinitely many natural numbers $x$ such that


 * $$\psi(x)-x > K\sqrt{x}.$$

In little-$o$ notation, one may write the above as


 * $$\psi(x)-x \ne o\left(\sqrt{x}\,\right).$$

Hardy and Littlewood proved the stronger result that


 * $$\psi(x)-x \ne o\left(\sqrt{x}\,\log\log\log x\right).$$

Relation to primorials
The first Chebyshev function is the logarithm of the primorial of $x$, denoted $x\#$:


 * $$\vartheta(x) = \sum_{p \le x} \log p = \log \prod_{p\le x} p = \log\left(x\#\right).$$

This proves that the primorial $x\#$ is asymptotically equal to $e^{(1+o(1))x}$, where "$o$" is the little-$o$ notation (see big $O$ notation), and together with the prime number theorem establishes the asymptotic behavior of $x\#$.

Relation to the prime-counting function
The Chebyshev function can be related to the prime-counting function as follows. Define


 * $$\Pi(x) = \sum_{n \leq x} \frac{\Lambda(n)}{\log n}.$$

Then


 * $$\Pi(x) = \sum_{n \leq x} \Lambda(n) \int_n^x \frac{dt}{t \log^2 t} + \frac{1}{\log x} \sum_{n \leq x} \Lambda(n) = \int_2^x \frac{\psi(t)\, dt}{t \log^2 t} + \frac{\psi(x)}{\log x}.$$

The transition from $\Pi$ to the prime-counting function, $\pi$, is made through the equation


 * $$\Pi(x) = \pi(x) + \tfrac{1}{2} \pi\left(\sqrt{x}\,\right) + \tfrac{1}{3} \pi\left(\sqrt[3]{x}\,\right) + \cdots$$

Certainly $\pi(x) \le x$, so for the sake of approximation, this last relation can be recast in the form


 * $$\pi(x) = \Pi(x) + O\left(\sqrt{x}\,\right).$$
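Both the definition $\Pi(x) = \sum_{n \le x} \Lambda(n)/\log n$ (equivalently $\sum_{p^k \le x} 1/k$) and the $\pi$-expansion can be compared directly; a sketch with our own helper names (`Pi`, `pi_count`, `integer_root`):

```python
def primes_up_to(x):
    """All primes p <= x by a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p in range(2, x + 1) if sieve[p]]

def Pi(x):
    """Π(x) = Σ_{p^k <= x} 1/k, i.e. Σ_{n <= x} Λ(n)/log n."""
    total = 0.0
    for p in primes_up_to(x):
        k = 1
        while p ** k <= x:
            total += 1 / k
            k += 1
    return total

def integer_root(x, n):
    """Largest integer r with r**n <= x (avoids float rounding issues)."""
    r = round(x ** (1 / n))
    while r ** n > x:
        r -= 1
    while (r + 1) ** n <= x:
        r += 1
    return r

def pi_count(x):
    """Prime-counting function π(x)."""
    return len(primes_up_to(x)) if x >= 2 else 0

# Π(x) = π(x) + (1/2) π(x^{1/2}) + (1/3) π(x^{1/3}) + ...
x = 1000
rhs = sum(pi_count(integer_root(x, n)) / n for n in range(1, 11))
```

The series on the right terminates once $x^{1/n} < 2$, so ten terms suffice for $x = 1000$.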

The Riemann hypothesis
The Riemann hypothesis states that all nontrivial zeros of the zeta function have real part $\tfrac{1}{2}$. In this case, $|x^{\rho}| = \sqrt{x}$, and it can be shown that


 * $$\sum_{\rho} \frac{x^{\rho}}{\rho} = O\!\left(\sqrt{x}\, \log^2 x\right).$$

By the above, this implies


 * $$\pi(x) = \operatorname{li}(x) + O\!\left(\sqrt{x}\, \log x\right).$$

Smoothing function


The smoothing function is defined as


 * $$\psi_1(x) = \int_0^x \psi(t)\,dt.$$

Obviously $$\psi_1(x) \sim \frac{x^2}{2}.$$
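Since $\psi$ is a step function with jump $\Lambda(n)$ at each integer $n$, integrating gives $\psi_1(x) = \sum_{n \le x} \Lambda(n)(x - n)$; the following sketch (the name `psi1` is ours) evaluates this exactly and checks the asymptotic numerically:

```python
from math import log

def psi1(x):
    """psi_1(x) = ∫_0^x ψ(t) dt = Σ_{n <= x} Λ(n)·(x − n)."""
    N = int(x)
    spf = list(range(N + 1))  # smallest-prime-factor sieve
    for i in range(2, int(N ** 0.5) + 1):
        if spf[i] == i:
            for j in range(i * i, N + 1, i):
                if spf[j] == j:
                    spf[j] = i
    total = 0.0
    for n in range(2, N + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:  # n is a prime power p^k, so Λ(n) = log p
            total += log(p) * (x - n)
    return total

# The ratio psi1(x) / (x^2 / 2) tends to 1 as x grows:
ratio = psi1(100_000) / (100_000 ** 2 / 2)
```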