
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments. Each experiment either succeeds with probability $$p$$ or fails with probability $$q=1-p$$. For a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.

Probability mass function
In general, if the random variable X follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0,1], we write X ~ B(n, p). The probability of getting exactly k successes in n trials is given by the probability mass function:


 * $$f(k,n,p) = \Pr(k;n,p) = \Pr(X = k) = {n\choose k}p^k(1-p)^{n-k}$$

for k = 0, 1, 2, ..., n, where


 * $$\binom n k =\frac{n!}{k!(n-k)!}$$

is the binomial coefficient, hence the name of the distribution.

The formula can be understood as follows: Because the trials are independent, k successes occur with probability $$p^k$$ and n − k failures occur with probability $$(1-p)^{n-k}$$. However, the k successes can occur anywhere among the n trials, and there are $${n\choose k}$$ different ways of distributing k successes in a sequence of n trials.
 * Note that
 * $$f(k,n,p)=f(n-k,n,1-p) $$

Intuitively, this is true because the event of "k successes in n trials" is the same as the event "n − k failures in n trials". Mathematically, $$f(k, n, p)= {n\choose k}p^k(1-p)^{n-k} = {n\choose n-k} (1-p)^{n-k}p^{k}= f(n-k, n, 1-p)$$.
 * The probability mass function satisfies the following recurrence relation, for every $$n,p$$:
 * $$\left\{\begin{array}{l}
p (n-k) f(k,n,p) = (k+1) (1-p) f(k+1,n,p), \\[10pt]
f(0,n,p)=(1-p)^n
\end{array}\right\}$$
 * because

$$ \frac{f(k+1,n,p)}{f(k,n,p)}=\frac{{n\choose k+1}p^{k+1}(1-p)^{n-k-1}}{{n \choose k}p^{k}(1-p)^{n-k}}=\frac{(n-k)p}{(k+1)(1-p)}, $$

which rearranges to

$$ p (n-k) f(k,n,p) = (k+1) (1-p) f(k+1,n,p). $$
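The recurrence gives a simple way to tabulate the whole probability mass function without computing a factorial for each term. Below is a minimal Python sketch (the function name and the test values are ours, chosen only for illustration; it assumes 0 ≤ p < 1):

```python
from math import comb

def binomial_pmf_table(n, p):
    """Return [f(0,n,p), ..., f(n,n,p)] using the recurrence above (assumes 0 <= p < 1)."""
    f = [0.0] * (n + 1)
    f[0] = (1 - p) ** n                      # f(0,n,p) = (1-p)^n
    for k in range(n):
        # p (n-k) f(k) = (k+1) (1-p) f(k+1)  =>  f(k+1) = f(k) (n-k) p / ((k+1)(1-p))
        f[k + 1] = f[k] * (n - k) * p / ((k + 1) * (1 - p))
    return f

# Quick check against the closed-form pmf for n = 6, p = 0.3
table = binomial_pmf_table(6, 0.3)
direct = [comb(6, k) * 0.3**k * 0.7**(6 - k) for k in range(7)]
assert all(abs(a - b) < 1e-12 for a, b in zip(table, direct))
```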

Cumulative distribution function
The cumulative distribution function can be expressed as:


 * $$F(k;n,p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} {n\choose i}p^i(1-p)^{n-i}$$

where $$\scriptstyle \lfloor k\rfloor\,$$ is the "floor" under k, i.e. the greatest integer less than or equal to k.

Example
Suppose a biased coin comes up heads with probability 0.3 when tossed.

1) What is the probability of achieving 0, 1, or 2 heads after six tosses?

2) What is the probability of achieving at most 2 heads?

Answer: 1)

The distribution of this process is a binomial distribution with n=6 and p=0.3.


 * $$\Pr(0\text{ heads})=f(0)= \Pr(X = 0) = {6\choose 0}0.3^0 (1-0.3)^{6-0}= 0.117649$$
 * $$\Pr(1\text{ heads}) = f(1) = \Pr(X = 1) = {6\choose 1}0.3^1 (1-0.3)^{6-1}= 0.302526$$
 * $$\Pr(2\text{ heads}) = f(2) = \Pr(X = 2) = {6\choose 2}0.3^2 (1-0.3)^{6-2}= 0.324135$$
 * 2)
 * $$\Pr(\text{at most }2 \text{ heads}) = F(2) = \Pr(X \leq 2) = {6\choose 0}0.3^0 (1-0.3)^{6-0}+{6\choose 1}0.3^1 (1-0.3)^{6-1}+{6\choose 2}0.3^2(1-0.3)^{6-2}=0.74431$$
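The same numbers can be reproduced with the pmf and cdf functions of SciPy's binom distribution; this is just a sketch assuming SciPy is available:

```python
from scipy.stats import binom

n, p = 6, 0.3
print(binom.pmf(0, n, p))   # 0.117649
print(binom.pmf(1, n, p))   # 0.302526
print(binom.pmf(2, n, p))   # 0.324135
print(binom.cdf(2, n, p))   # 0.744310  (probability of at most 2 heads)
```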

Mean
If X ~ B (n, p), that is, X is a binomially distributed random variable with n being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of X is:


 * $$ \operatorname{E}[X] = np.$$

For example, if n = 100, and p =1/4, then the average number of successful results will be 25.

Proof: We calculate the mean, μ, directly from its definition


 * $$\mu=\sum_{i=0}^n x_i p_i,$$

and the binomial theorem:


 * $$\begin{align}
\mu &= \sum_{k=0}^n k\binom nk p^k (1-p)^{n-k}\\
&= np\sum_{k=0}^n k\frac{(n-1)!}{(n-k)!k!}p^{k-1} (1-p)^{(n-1)-(k-1)}\\
&= np\sum_{k=1}^n \frac{(n-1)!}{((n-1)-(k-1))!(k-1)!}p^{k-1} (1-p)^{(n-1)-(k-1)}\\
&= np\sum_{k=1}^n \binom{n-1}{k-1} p^{k-1} (1-p)^{(n-1)-(k-1)}\\
&= np\sum_{\ell=0}^{n-1} \binom{n-1}\ell p^\ell (1-p)^{(n-1)-\ell} && \text{with } \ell:=k-1\\
&= np\sum_{\ell=0}^m \binom m\ell p^\ell (1-p)^{m-\ell} && \text{with } m:=n-1\\
&= np(p+(1-p))^m \\
&= np
\end{align}$$

It is also possible to deduce the mean from the equation $$X=X_1+\cdots+X_n$$. $$X_i$$ equals 1 if the i-th trial succeeds, and 0 if the trial fails. $$X_i$$ 's are Bernoulli distributed random variables with $$E[X_i]=p$$. We get


 * $$E[X] = E[X_1+\cdots+X_n] = E[X_1]+\cdots+E[X_n] = \underbrace{p+\cdots+p}_{n\text{ times}} = np$$

Variance
The variance is:
 * $$ \operatorname{Var}(X) = np(1 - p).$$

Proof: Let $$X=X_1+\cdots+X_n$$ where all $$X_i$$ are independently Bernoulli distributed random variables. Since $$\operatorname{Var}(X_i) = p(1-p)$$, we get:


 * $$\operatorname{Var}(X)=\operatorname{Var}(X_1+\cdots+X_n)$$
 * $$=\operatorname{Var}(X_1)+\cdots+\operatorname{Var}(X_n) \qquad \text{because } X_i \text{'s are independent}$$
 * $$=n \operatorname{Var}(X_1)=np(1-p).$$
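A quick simulation of the Bernoulli decomposition illustrates both the mean and the variance formulas; the sketch below assumes NumPy is available, and the sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 100, 0.25, 200_000

# Each row is one experiment: n Bernoulli(p) variables X_1, ..., X_n; X is their sum.
X = (rng.random((trials, n)) < p).sum(axis=1)

print(X.mean())   # close to np = 25
print(X.var())    # close to np(1-p) = 18.75
```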

Mode
Usually the mode of a binomial B(n, p) distribution is equal to $$\lfloor (n+1)p\rfloor$$, where $$\lfloor\cdot\rfloor$$ is the floor function. However, when (n + 1)p is an integer and p is neither 0 nor 1, then the distribution has two modes: (n + 1)p and (n + 1)p − 1. When p is equal to 0 or 1, the mode will be 0 or n, respectively. These cases can be summarized as follows:
 * $$\text{mode} =
\begin{cases}
\lfloor (n+1)\,p\rfloor & \text{if }(n+1)p\text{ is 0 or a noninteger}, \\
(n+1)\,p\ \text{ and }\ (n+1)\,p - 1 &\text{if }(n+1)p\in\{1,\dots,n\}, \\
n & \text{if }(n+1)p = n + 1.
\end{cases}$$

Proof: Let


 * $$f(k)=\binom nk p^k q^{n-k}.$$

For $$p=0$$ only $$f(0)$$ has a nonzero value with $$f(0)=1$$. For $$p=1$$ we find $$f(n)=1$$ and $$f(k)=0$$ for $$k\neq n$$. This proves that the mode is 0 for $$p=0$$ and $$n$$ for $$p=1$$.

Let $$0 < p < 1$$. We find


 * $$\frac{f(k+1)}{f(k)} = \frac{(n-k)p}{(k+1)(1-p)}$$.

From this it follows that


 * $$\begin{align}
k > (n+1)p-1 &\Rightarrow f(k+1) < f(k) \\
k = (n+1)p-1 &\Rightarrow f(k+1) = f(k) \\
k < (n+1)p-1 &\Rightarrow f(k+1) > f(k)
\end{align}$$

So when $$(n+1)p-1$$ is an integer, then $$(n+1)p-1$$ and $$(n+1)p$$ are both modes. In the case that $$(n+1)p-1\notin \Z$$, only $$\lfloor (n+1)p-1\rfloor+1=\lfloor (n+1)p\rfloor$$ is a mode.
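The case analysis above translates directly into a small helper; the following sketch (the function names are ours, and p is taken to be a float) checks two cases against the pmf:

```python
from math import comb, floor

def binomial_modes(n, p):
    """Return the list of modes of B(n, p), following the case analysis above."""
    if p == 0:
        return [0]
    if p == 1:
        return [n]                        # also covers the case (n+1)p = n+1
    m = (n + 1) * p
    if m.is_integer() and 1 <= m <= n:
        return [int(m) - 1, int(m)]       # two modes when (n+1)p is an integer in {1,...,n}
    return [floor(m)]

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

assert binomial_modes(10, 0.5) == [5]       # (n+1)p = 5.5, single mode
assert binomial_modes(9, 0.5) == [4, 5]     # (n+1)p = 5, two modes
assert abs(pmf(4, 9, 0.5) - pmf(5, 9, 0.5)) < 1e-15
```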

Median
In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However several special results have been established:
 * If np is an integer, then the mean, median, and mode coincide and equal np.

 * Any median m must lie within the interval ⌊np⌋ ≤ m ≤ ⌈np⌉.
 * A median m cannot lie too far away from the mean: |m − np| ≤ min{ ln 2, max{p, 1 − p} }.
 * The median is unique and equal to m = round(np) in cases when either p ≤ 1 − ln 2 or p ≥ ln 2 or |m − np| ≤ min{p, 1 − p} (except for the case when p = ½ and n is odd).
 * When p = 1/2 and n is odd, any number m in the interval ½(n − 1) ≤ m ≤ ½(n + 1) is a median of the binomial distribution. If p = 1/2 and n is even, then m = n/2 is the unique median.

Covariance between two binomials
If two binomially distributed random variables X and Y are observed together, estimating their covariance can be useful. The covariance is


 * $$\operatorname{Cov}(X, Y) = \operatorname{E}(XY) - \mu_X \mu_Y.$$

In the case n = 1 (the case of Bernoulli trials) XY is non-zero only when both X and Y are one, and $$\mu_X$$ and $$\mu_Y$$ are equal to the two probabilities. Defining $$p_B$$ as the probability of both happening at the same time, this gives


 * $$\operatorname{Cov}(X, Y) = p_B - p_X p_Y,$$

and for n independent pairwise trials


 * $$\operatorname{Cov}(X, Y)_n = n ( p_B - p_X p_Y ).$$

If X and Y are the same variable, this reduces to the variance formula given above.

Probability generating function
The probability generating function of the binomial distribution can be derived as follows:


 * $$G_X \left({s}\right) = \sum_{k \mathop \ge 0} p_X \left({k}\right) s^k$$

From the probability mass function of the binomial distribution:
 * $$p_X(k)=\binom n k p^k \left({1 - p}\right)^{n-k}$$

So:

$$ G_X \left({s}\right) =\sum_{k \mathop = 0}^n \binom n k p^k \left({1 - p}\right)^{n - k} s^k =\sum_{k \mathop = 0}^n \binom n k \left({p s}\right)^k \left({1 - p}\right)^{n - k} =\left({\left({p s}\right) + \left({1 - p}\right)}\right)^n $$

The last step is a result of the binomial theorem.

Sums of binomials
If X ~ B(n, p) and Y ~ B(m, p) are independent binomial variables with the same probability p, then X + Y is again a binomial variable; its distribution is Z=X+Y ~ B(n+m, p):


 * $$\begin{align}
\operatorname P(Z=k) &= \sum_{i=0}^k\left[\binom{n}i p^i (1-p)^{n-i}\right]\left[\binom{m}{k-i} p^{k-i} (1-p)^{m-k+i}\right]\\
&= \binom{n+m}k p^k (1-p)^{n+m-k},
\end{align}$$

where the second equality follows from Vandermonde's identity $$\sum_{i=0}^k \binom{n}{i}\binom{m}{k-i} = \binom{n+m}{k}$$.

However, if X and Y do not have the same probability p, then the variance of the sum will be smaller than the variance of a binomial variable distributed as $$B(n+m, \bar{p}).\,$$
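A direct numerical check of the convolution identity above, computing the distribution of X + Y by brute force and comparing it with B(n+m, p) (the values of n, m, p are arbitrary):

```python
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, m, p = 5, 7, 0.3
for k in range(n + m + 1):
    # Convolution of the two pmfs, restricted to indices where both factors are defined
    conv = sum(pmf(i, n, p) * pmf(k - i, m, p)
               for i in range(max(0, k - m), min(n, k) + 1))
    assert abs(conv - pmf(k, n + m, p)) < 1e-12
```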

Ratio of two binomial distributions
This result was first derived by Katz et al. in 1978.

Let X ~ B(n, p1) and Y ~ B(m, p2) be independent, where p1 and p2 are the respective probabilities of success. Let T = (X/n)/(Y/m).

Then log(T) is approximately normally distributed with mean log(p1/p2) and variance (1/x) − (1/n) + (1/y) − (1/m), where x and y denote the observed numbers of successes.

Reciprocal of binomial distribution
No closed form for this distribution is known. An asymptotic approximation for the mean is known.

$$ E[ ( 1 + X )^a ] = O( ( np )^{ -a } ) + o( n^{ -a } ) $$

where E[] is the expectation operator, X is a random variable, O and o are the big and little o order functions, n is the sample size, p is the probability of success and a is a variable that may be positive or negative, integer or fractional.

Conditional binomials
If X ~ B(n, p) and, conditional on X, Y ~ B(X, q), then Y is a simple binomial variable with distribution Y ~ B(n, pq).

For example, imagine throwing n balls to a basket UX and taking the balls that hit and throwing them to another basket UY. If p is the probability to hit UX then X ~ B(n, p) is the number of balls that hit UX. If q is the probability to hit UY then the number of balls that hit UY is Y ~ B(X, q) and therefore Y ~ B(n, pq).

[Proof] Since $$ X \sim B(n, p) $$ and $$ Y \sim B(X, q) $$, by the law of total probability,

 * $$\begin{align}
\Pr[Y = m] &= \sum_{k = m}^{n} \Pr[Y = m \mid X = k] \Pr[X = k] \\[2pt]
&= \sum_{k=m}^n \binom{n}{k} \binom{k}{m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}
\end{align}$$

Since $$\scriptstyle \binom{n}{k} \binom{k}{m} = \binom{n}{m} \binom{n-m}{k-m} $$, the equation above can be expressed as
 * $$ \Pr[Y = m] = \sum_{k=m}^{n} \binom{n}{m} \binom{n-m}{k-m} p^k q^m (1-p)^{n-k} (1-q)^{k-m} $$

Factoring $$ p^k = p^m p^{k-m} $$ and pulling all the terms that don't depend on $$ k $$ out of the sum now yields

 * $$\begin{align}
\Pr[Y = m] &= \binom{n}{m} p^m q^m \left( \sum_{k=m}^{n} \binom{n-m}{k-m} p^{k-m} (1-p)^{n-k} (1-q)^{k-m} \right) \\[2pt]
&= \binom{n}{m} (pq)^m \left( \sum_{k=m}^{n} \binom{n-m}{k-m} \left(p(1-q)\right)^{k-m} (1-p)^{n-k}  \right)
\end{align}$$

After substituting $$ i = k - m $$ in the expression above, we get
 * $$ \Pr[Y = m] = \binom{n}{m} (pq)^m \left( \sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^i (1-p)^{n-m - i} \right) $$

Notice that the sum (in the parentheses) above equals $$ (p - pq + 1 - p)^{n-m} $$ by the binomial theorem. Substituting this in finally yields

 * $$\begin{align}
\Pr[Y=m] &= \binom{n}{m} (pq)^m (p - pq + 1 - p)^{n-m}\\[4pt]
&= \binom{n}{m} (pq)^m (1-pq)^{n-m}
\end{align}$$

and thus $$ Y \sim B(n, pq) $$ as desired.
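The balls-and-baskets picture is easy to simulate; the sketch below (assuming NumPy; the parameter values are arbitrary) compares the empirical mean and variance of Y with those of B(n, pq):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, trials = 50, 0.6, 0.4, 100_000

X = rng.binomial(n, p, size=trials)   # balls that hit the first basket
Y = rng.binomial(X, q)                # of those, balls that also hit the second basket

print(Y.mean())   # close to n*p*q = 12.0
print(Y.var())    # close to n*p*q*(1 - p*q) = 9.12
```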

Bernoulli distribution
The Bernoulli distribution is a special case of the binomial distribution, where n = 1. Symbolically, X ~ B(1, p) has the same meaning as X ~ B(p). Conversely, any binomial distribution, B(n, p), is the distribution of the sum of n Bernoulli trials, B(p), each with the same probability p.

Poisson binomial distribution
The binomial distribution is a special case of the Poisson binomial distribution, or general binomial distribution, which is the distribution of a sum of n independent non-identical Bernoulli trials B(pi).

Normal approximation


If n is large enough, the skew of the distribution is not too great. In this case a reasonable approximation to B(n, p) is given by the normal distribution


 * $$ \mathcal{N}(np,\,\sqrt{np(1-p)}),$$

and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as n increases (at least 20) and is better when p is not near to 0 or 1. Various rules of thumb may be used to decide whether n is large enough, and p is far enough from the extremes of zero or one:


 * One rule is that for n > 5 the normal approximation is adequate if the absolute value of the skewness is strictly less than 1/3; that is, if
 * $$\frac{|1-2p|}{\sqrt{np(1-p)}}=\frac1{\sqrt{n}}\left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<\frac13.$$


 * A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if
 * $$\mu\pm3\sigma=np\pm3\sqrt{np(1-p)}\in(0,n).$$
 * This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above.
 * $$n>9\,\frac{1-p}p\quad\hbox{and}\quad n>9\,\frac{p}{1-p}.$$

[Proof] The rule $$ np\pm3\sqrt{np(1-p)}\in(0,n)$$ is totally equivalent to requiring that
 * $$np-3\sqrt{np(1-p)}>0\quad\hbox{and}\quad np+3\sqrt{np(1-p)}<n.$$

Moving terms around yields:
 * $$np>3\sqrt{np(1-p)}\quad\hbox{and}\quad n(1-p)>3\sqrt{np(1-p)}.$$

Since $$0<p<1$$, we can apply the square power and divide by the respective factors $$np^2$$ and $$n(1-p)^2$$, to obtain the desired conditions:
 * $$n>9\,\frac{1-p}p\quad\hbox{and}\quad n>9\,\frac{p}{1-p}.$$

Notice that these conditions automatically imply that $$n>9$$. On the other hand, apply again the square root and divide by 3,
 * $$\frac{\sqrt{n}}3>\sqrt{\frac{1-p}p}>0\quad\hbox{and}\quad\frac{\sqrt{n}}3>\sqrt{\frac{p}{1-p}}>0.$$

Subtracting the second set of inequalities from the first one yields:
 * $$\frac{\sqrt{n}}3>\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}>-\frac{\sqrt{n}}3;$$

and so, the desired first rule is satisfied,
 * $$\left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<\frac{\sqrt{n}}3.$$

 * Another commonly used rule is that both values $$np$$ and $$n(1-p)$$ must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs.

[Proof] Assume that both values $$np$$ and $$n(1-p)$$ are greater than 9. Since $$0< p<1$$, we easily have that
 * $$np\geq9>9(1-p)\quad\hbox{and}\quad n(1-p)\geq9>9p.$$

We only have to divide now by the respective factors $$p$$ and $$1-p$$, to deduce the alternative form of the 3-standard-deviation rule:
 * $$n>9\,\frac{1-p}p\quad\hbox{and}\quad n>9\,\frac{p}{1-p}.$$

The following is an example of applying a continuity correction. Suppose one wishes to calculate Pr(X ≤ 8) for a binomial random variable X. If Y has a distribution given by the normal approximation, then Pr(X ≤ 8) is approximated by Pr(Y ≤ 8.5). The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.
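As a numerical illustration of the continuity correction (the particular values n = 20, p = 0.5 are ours, chosen only for the example), using nothing beyond the standard library:

```python
from math import comb, erf, sqrt

def binom_cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

n, p, k = 20, 0.5, 8
mu, sigma = n * p, sqrt(n * p * (1 - p))

print(binom_cdf(k, n, p))              # exact Pr(X <= 8)          ~ 0.2517
print(normal_cdf(k, mu, sigma))        # uncorrected approximation ~ 0.1855
print(normal_cdf(k + 0.5, mu, sigma))  # with continuity correction ~ 0.2512
```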

This approximation, known as the de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since B(n, p) is a sum of n independent, identically distributed Bernoulli variables with parameter p. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of p using x/n, the sample proportion and estimator of p, in a common test statistic.

For example, suppose one randomly samples n people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation $$\sigma = \sqrt{\frac{p(1-p)}{n}}$$.

Poisson approximation
The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np remains fixed or at least p tends to zero. Therefore, the Poisson distribution with parameter λ = np can be used as an approximation to B(n, p) of the binomial distribution if n is sufficiently large and p is sufficiently small. According to two rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05, or if n ≥ 100 and np ≤ 10.

Concerning the accuracy of Poisson approximation, see Novak, ch. 4, and references therein.
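A quick numerical comparison under one of these rules of thumb (n = 100, p = 0.02, so λ = np = 2; the values are only illustrative):

```python
from math import comb, exp, factorial

n, p = 100, 0.02
lam = n * p

for k in range(6):
    binom_pk = comb(n, k) * p**k * (1 - p)**(n - k)   # exact binomial probability
    poisson_pk = exp(-lam) * lam**k / factorial(k)    # Poisson(λ = np) approximation
    print(k, round(binom_pk, 4), round(poisson_pk, 4))
```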

Limiting distributions

 * Poisson limit theorem: As n approaches ∞ and p approaches 0, then the Binomial(n, p) distribution approaches the Poisson distribution with expected value λ = np.
 * de Moivre–Laplace theorem: As n approaches ∞ while p remains fixed, the distribution of


 * $$\frac{X-np}{\sqrt{np(1-p)}}$$


 * approaches the normal distribution with expected value 0 and variance 1. This result is sometimes loosely stated by saying that the distribution of X is asymptotically normal with expected value np and variance np(1 − p). This result is a specific case of the central limit theorem.

Beta distribution
Beta distributions provide a family of prior probability distributions for binomial distributions in Bayesian inference:
 * $$P(p;\alpha,\beta) = \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\mathrm{B}(\alpha,\beta)}$$.

Confidence intervals
Even for quite large values of n, the actual distribution of the mean is significantly nonnormal. Because of this problem several methods to estimate confidence intervals have been proposed.

In the equations for confidence intervals below, the variables have the following meaning:


 * n1 is the number of successes out of n, the total number of trials
 * $$ \hat{p} = \frac{n_1}{n}$$ is the proportion of successes
 * $$z$$ is the $$1 - \tfrac{1}{2}\alpha$$ quantile of a standard normal distribution (i.e., probit) corresponding to the target error rate $$\alpha$$. For example, for a 95% confidence level the error $$\alpha$$ = 0.05, so $$1 - \tfrac{1}{2}\alpha$$ = 0.975 and $$z$$ = 1.96.

Wald method

 * $$ \hat{p} \pm z \sqrt{ \frac{ \hat{p} ( 1 -\hat{p} )}{ n } } .$$


 * A continuity correction of 0.5/n may be added.

Agresti-Coull method


 * $$ \tilde{p} \pm z \sqrt{ \frac{ \tilde{p} ( 1 - \tilde{p} )}{ n + z^2 } } .$$


 * Here the estimate of p is modified to


 * $$ \tilde{p}= \frac{ n_1 + \frac{1}{2} z^2}{ n + z^2 } $$

Arcsine method


 * $$\sin^2 \left (\arcsin \left ( \sqrt{ \hat{p} } \right ) \pm \frac{ z }{ 2 \sqrt{ n } } \right ) $$

Wilson (score) method

The notation in the formula below differs from that in the previous formulas in two respects:


 * Firstly, $$z_x$$ has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the xth quantile of the standard normal distribution', rather than being a shorthand for 'the (1 − x)-th quantile'.


 * Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use $$z = z_{\alpha / 2}$$ to get the lower bound, or use $$z = z_{1 - \alpha/2}$$ to get the upper bound. For example: for a 95% confidence level the error $$\alpha$$ = 0.05, so one gets the lower bound by using $$z = z_{\alpha/2} = z_{0.025} = - 1.96$$, and one gets the upper bound by using $$z = z_{1 - \alpha/2} = z_{0.975} = 1.96$$.

$$\frac{ \hat{p} + \frac{z^2}{2n} + z   \sqrt{ \frac{\hat{p}(1 - \hat{p})}{n} + \frac{z^2}{4 n^2} } }{   1 + \frac{z^2}{n} }$$
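For concreteness, the four intervals can be computed side by side; the sketch below hard-codes z = 1.96 for a 95% level, and the observed counts n1 = 30 out of n = 100 are arbitrary:

```python
from math import sqrt, asin, sin

n1, n = 30, 100
z = 1.96                      # 0.975 quantile of the standard normal (95% level)
p_hat = n1 / n

# Wald
half = z * sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - half, p_hat + half)

# Agresti-Coull
p_tilde = (n1 + 0.5 * z**2) / (n + z**2)
half = z * sqrt(p_tilde * (1 - p_tilde) / (n + z**2))
agresti_coull = (p_tilde - half, p_tilde + half)

# Arcsine
arcsine = (sin(asin(sqrt(p_hat)) - z / (2 * sqrt(n)))**2,
           sin(asin(sqrt(p_hat)) + z / (2 * sqrt(n)))**2)

# Wilson (score): lower bound uses zq = -1.96, upper bound uses zq = +1.96
def wilson(zq):
    return ((p_hat + zq**2 / (2 * n)
             + zq * sqrt(p_hat * (1 - p_hat) / n + zq**2 / (4 * n**2)))
            / (1 + zq**2 / n))

print(wald, agresti_coull, arcsine, (wilson(-z), wilson(z)))
```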

Comparison
The exact (Clopper-Pearson) method is the most conservative.

The Wald method, although commonly recommended in textbooks, is the most biased.

Generating binomial random variates
Methods for random number generation where the marginal distribution is a binomial distribution are well-established.

One way to generate random samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability P(X = k) for all values k from 0 through n. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then, using a pseudorandom number generator to produce samples uniformly between 0 and 1, one can transform these uniform samples into discrete outcomes by using the cumulative probabilities calculated in the first step.
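A minimal sketch of the inversion algorithm just described, using only the standard library (the function name and parameters are ours):

```python
import random
from math import comb

def sample_binomial(n, p, rng=random):
    """Draw one variate from B(n, p) by inverting the cumulative distribution."""
    u = rng.random()                       # uniform sample on [0, 1)
    cumulative = 0.0
    for k in range(n + 1):
        cumulative += comb(n, k) * p**k * (1 - p)**(n - k)
        if u < cumulative:
            return k
    return n                               # guard against floating-point round-off

samples = [sample_binomial(6, 0.3) for _ in range(100_000)]
print(sum(samples) / len(samples))         # close to np = 1.8
```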

Tail bounds
For k ≤ np, upper bounds for the lower tail of the distribution function can be derived. Recall that $$F(k;n,p) = \Pr(X \le k)$$, the probability that there are at most k successes.

Hoeffding's inequality yields the bound


 * $$ F(k;n,p) \leq \exp\left(-2 \frac{(np-k)^2}{n}\right), \!$$

and Chernoff's inequality can be used to derive the bound


 * $$ F(k;n,p) \leq \exp\left(-\frac{1}{2\,p} \frac{(np-k)^2}{n}\right). \!$$

Moreover, these bounds are reasonably tight when p = 1/2, since the following expression holds for all k ≥ 3n/8


 * $$ F(k;n,\tfrac{1}{2}) \leq \frac{14}{15} \exp\left(- \frac{16 (\frac{n}{2} - k)^2}{n}\right). \!$$

However, the bounds do not work well for extreme values of p. In particular, as p $$\rightarrow$$ 1, the value F(k;n,p) goes to zero (for fixed k, n with k < n) while the upper bound above goes to a positive constant. In this case a better bound is given by


 * $$ F(k;n,p) \leq \exp\left(-nD\left(\frac{k}{n}\left|\right|p\right)\right) \quad\quad\mbox{if }0<\frac{k}{n}<p\!$$

where D(a || p) is the relative entropy between an a-coin and a p-coin (i.e. between the Bernoulli(a) and Bernoulli(p) distribution):


 * $$ D(a||p)=(a)\log\frac{a}{p}+(1-a)\log\frac{1-a}{1-p}. \!$$

Asymptotically, this bound is reasonably tight; see the references for details. An equivalent formulation of the bound is


 * $$ \Pr(X \ge k) =F(n-k;n,1-p)\leq \exp\left(-nD\left(\frac{k}{n}\left|\right|p\right)\right) \quad\quad\mbox{if }p<\frac{k}{n}<1.\!$$

Both these bounds are derived directly from the Chernoff bound. It can also be shown that,


 * $$ \Pr(X \ge k) =F(n-k;n,1-p)\geq \frac{1}{(n+1)^2} \exp\left(-nD\left(\frac{k}{n}\left|\right|p\right)\right) \quad\quad\mbox{if }p<\frac{k}{n}<1.\!$$

This is proved using the method of types (see, for example, chapter 12 of Elements of Information Theory by Cover and Thomas).

We can also change the $$(n+1)^2$$ in the denominator to $$\sqrt{2n}$$, by approximating the binomial coefficient with Stirling's formula.
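A small numerical comparison of the exact lower-tail probability with the three bounds above (the values n, p, k are arbitrary, with 0 < k/n < p as required):

```python
from math import comb, exp, log

def F(k, n, p):
    """Exact lower tail Pr(X <= k) for X ~ B(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def D(a, p):
    """Relative entropy between Bernoulli(a) and Bernoulli(p), for 0 < a, p < 1."""
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

n, p, k = 100, 0.5, 40
print(F(k, n, p))                           # exact tail probability
print(exp(-2 * (n * p - k)**2 / n))         # Hoeffding bound
print(exp(-(n * p - k)**2 / (2 * p * n)))   # Chernoff bound
print(exp(-n * D(k / n, p)))                # relative-entropy bound
```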

History
This distribution was derived by James Bernoulli in 1713. He considered the case where p = r/(r+s) where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2.