Coupon collector's problem

In probability theory, the coupon collector's problem refers to the mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given product (e.g., breakfast cereals) contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: given n coupons, how many coupons do you expect to need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as $$\Theta(n\log(n))$$. For example, when n = 50 it takes about 225 trials on average to collect all 50 coupons.
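As a quick sanity check of the n = 50 figure, the following short simulation (an illustrative sketch added here; the function name and trial count are arbitrary choices, not part of the original problem statement) estimates the expected number of draws by repeated sampling:

```python
import random

def draws_to_collect_all(n):
    """Draw coupons uniformly with replacement until all n types have been seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

# Average over many runs; for n = 50 this typically lands near 50 * H_50 ≈ 225.
n, trials = 50, 10_000
print(sum(draws_to_collect_all(n) for _ in range(trials)) / trials)
```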

Via generating functions
By the definition of Stirling numbers of the second kind, the probability that exactly T draws are needed is $$\frac{S(T-1, n-1)\,n!}{n^T},$$ since the first T − 1 draws must cover exactly n − 1 of the n coupon types and the T-th draw must produce the one remaining type. By manipulating the generating function of the Stirling numbers, $$f_k(x) := \sum_T S(T, k) x^T = \prod_{r=1}^k \frac{x}{1-rx},$$ we can explicitly calculate all moments of T. In general, the k-th moment is $$(n-1)! \left((D_x x)^k f_{n-1}(x)\right) \Big|_{x=1/n}$$, where $$D_x$$ is the derivative operator $$d/dx$$. For example, the 0th moment is $$\sum_T \frac{S(T-1, n-1)\,n!}{n^T} = (n-1)!\, f_{n-1}(1/n) = (n-1)! \times \prod_{r=1}^{n-1} \frac{1/n}{1-r/n} = 1,$$ as it must be, and the 1st moment is $$(n-1)! \left(D_x x f_{n-1}(x)\right) \Big|_{x=1/n}$$, which can be explicitly evaluated to $$nH_n$$; higher moments follow in the same way.
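The probability mass function and the first two of these moment identities can be checked numerically. The sketch below is illustrative only, with a hand-rolled Stirling-number recurrence and an arbitrary truncation point; it verifies that the probabilities sum to 1 and that the mean is n·H_n for a small n:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling numbers of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

n = 5

def pmf(T):
    """P(exactly T draws are needed) = S(T-1, n-1) * n! / n^T."""
    return S2(T - 1, n - 1) * factorial(n) / n**T

# Truncate the infinite sums at T = 400; the neglected tail is negligible for n = 5.
total = sum(pmf(T) for T in range(n, 400))
mean = sum(T * pmf(T) for T in range(n, 400))
print(total)                                           # ≈ 1 (0th moment)
print(mean, n * sum(1 / i for i in range(1, n + 1)))   # ≈ n * H_n (1st moment)
```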

Calculating the expectation
Let time T be the number of draws needed to collect all n coupons, and let ti be the time to collect the i-th new coupon after i − 1 coupons have been collected. Then $$T=t_1 + \cdots + t_n$$. Think of T and ti as random variables. Observe that the probability of drawing a new coupon, given that i − 1 distinct coupons have already been collected, is $$p_i = \frac{n - (i - 1)}{n} = \frac{n - i + 1}{n}$$. Therefore, $$t_i$$ has a geometric distribution with expectation $$\frac{1}{p_i} = \frac{n}{n - i + 1}$$. By the linearity of expectations we have:



$$\begin{align} \operatorname{E}(T) & {}= \operatorname{E}(t_1 + t_2 + \cdots + t_n) \\ & {}= \operatorname{E}(t_1) + \operatorname{E}(t_2) + \cdots + \operatorname{E}(t_n) \\ & {}= \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_n} \\ & {}= \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} \\ & {}= n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n}\right) \\ & {}= n \cdot H_n. \end{align}$$

Here Hn is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:



$$\operatorname{E}(T) = n \cdot H_n = n \log n + \gamma n + \frac{1}{2} + O(1/n),$$ where $$\gamma \approx 0.5772156649$$ is the Euler–Mascheroni constant.
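To see how tight the expansion is, one can compare the exact value n·H_n with the asymptotic approximation for a few values of n (an illustrative sketch; the sample values of n are arbitrary):

```python
from math import log

gamma = 0.5772156649  # Euler–Mascheroni constant

for n in (10, 100, 1000):
    exact = n * sum(1 / i for i in range(1, n + 1))   # n * H_n
    approx = n * log(n) + gamma * n + 0.5             # dropping the O(1/n) term
    print(n, exact, approx)
```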

Now we can use the Markov inequality to bound the desired probability:


 * $$\operatorname{P}(T \geq cn H_n) \le \frac{1}{c}.$$

The above can be modified slightly to handle the case when we've already collected some of the coupons. Let k be the number of coupons already collected, then:



$$\begin{align} \operatorname{E}(T_k) & {}= \operatorname{E}(t_{k+1} + t_{k+2} + \cdots + t_n) \\ & {}= \frac{n}{n-k} + \frac{n}{n-k-1} + \cdots + \frac{n}{1} \\ & {}= n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n-k}\right) \\ & {}= n \cdot H_{n-k} \end{align}$$

When $$k=0$$ we recover the original result.
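A small helper makes the partial-collection formula concrete (sketch only; the function and the example numbers are illustrative assumptions):

```python
def expected_remaining(n, k):
    """Expected further draws to complete a set of n coupons when k are already collected: n * H_{n-k}."""
    return n * sum(1 / i for i in range(1, n - k + 1))

print(expected_remaining(50, 0))   # ≈ 224.96, the full collection time from scratch
print(expected_remaining(50, 40))  # ≈ 146.45 expected draws just for the last 10 coupons
```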

Calculating the variance
Using the independence of random variables ti, we obtain:



$$\begin{align} \operatorname{Var}(T)& {}= \operatorname{Var}(t_1 + \cdots + t_n) \\ & {} = \operatorname{Var}(t_1) + \operatorname{Var}(t_2) + \cdots + \operatorname{Var}(t_n) \\ & {} = \frac{1-p_1}{p_1^2} + \frac{1-p_2}{p_2^2} + \cdots + \frac{1-p_n}{p_n^2} \\ & {} < \left(\frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} + \cdots + \frac{n^2}{1^2}\right) \\ & {} = n^2 \cdot \left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{n^2} \right) \\ & {} < \frac{\pi^2}{6} n^2 \end{align}$$

since $$\frac{\pi^2}6=\frac{1}{1^2}+\frac{1}{2^2}+\cdots+\frac{1}{n^2}+\cdots$$ (see Basel problem).

Now the Chebyshev inequality can be used to bound the desired probability:


 * $$\operatorname{P}\left(|T- n H_n| \geq cn\right) \le \frac{\pi^2}{6c^2}.$$
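The exact variance and the resulting Chebyshev bound are easy to tabulate. The following sketch (with illustrative parameters, not from the source) compares Var(T) with the π²n²/6 bound and prints the tail bound for a chosen c:

```python
from math import pi

def variance_T(n):
    """Var(T) = sum of (1 - p_i) / p_i^2 over i, with p_i = (n - i + 1) / n."""
    return sum((1 - p) / p**2 for p in ((n - i + 1) / n for i in range(1, n + 1)))

n, c = 50, 3
print(variance_T(n), pi**2 / 6 * n**2)   # exact variance vs. the pi^2 n^2 / 6 upper bound
print(pi**2 / (6 * c**2))                # Chebyshev: P(|T - n*H_n| >= c*n) <= ≈ 0.183 for c = 3
```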

Tail estimates
A stronger estimate for the upper tail can be obtained as follows. Let $${Z}_i^r$$ denote the event that the $$i$$-th coupon was not picked in the first $$r$$ trials. Then

$$P\left[ {Z}_i^r \right] = \left(1-\frac{1}{n}\right)^r \le e^{-r / n}.$$

Thus, for $$r = \beta n \log n$$, we have $$P\left [ {Z}_i^r \right ] \le e^{(-\beta n \log n ) / n} = n^{-\beta}$$. Via a union bound over the $$n$$ coupons, we obtain

$$P\left[ T > \beta n \log n \right] = P\left[ \bigcup_i {Z}_i^{\beta n \log n} \right] \le n \cdot P\left[ {Z}_1^{\beta n \log n} \right] \le n^{-\beta + 1}.$$
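The union-bound estimate is non-asymptotic, so it can be checked directly by simulation (a sketch with assumed values of n and β, not from the source):

```python
import random
from math import log

def draws_to_collect_all(n):
    """Draw uniformly with replacement until all n coupon types have appeared."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n, beta, trials = 50, 1.5, 20_000
threshold = beta * n * log(n)
tail = sum(draws_to_collect_all(n) > threshold for _ in range(trials)) / trials
print(tail, n ** (1 - beta))   # empirical P(T > beta*n*log n) vs. the bound n^(-beta+1)
```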

Extensions and generalizations

 * Pierre-Simon Laplace, and later Paul Erdős and Alfréd Rényi, proved the limit theorem for the distribution of T. This result is a further strengthening of the previous bounds:


 * $$\operatorname{P}(T < n\log n + cn) \to e^{-e^{-c}}, \text{ as } n \to \infty,$$ which is a Gumbel distribution (as a function of the parameter c). A simple proof by martingales is given in the Martingales section below.


 * Donald J. Newman and Lawrence Shepp gave a generalization of the coupon collector's problem when m copies of each coupon need to be collected. Let Tm be the first time m copies of each coupon are collected. They showed that the expectation in this case satisfies:


 * $$\operatorname{E}(T_m) = n \log n + (m-1) n \log\log n + O(n), \text{ as } n \to \infty.$$
 * Here m is fixed. When m = 1 we get the earlier formula for the expectation.


 * A common generalization, also due to Erdős and Rényi:


 * $$\operatorname{P}\left(T_m < n\log n + (m-1) n \log\log n + cn\right) \to e^{-e^{-c}/(m-1)!}, \text{ as } n \to \infty.$$


 * In the general case of a nonuniform probability distribution, according to Philippe Flajolet et al., the expected time is
 * $$\operatorname{E}(T)=\int_0^\infty \left(1 - \prod_{i=1}^m \left(1-e^{-p_it}\right)\right)dt. $$


 * This is equal to


 * $$\operatorname{E}(T)=\sum_{q=0}^{m-1} (-1)^{m-1-q} \sum_{|J|=q} \frac{1}{1-P_J},$$


 * where m denotes the number of coupons to be collected and $$P_J$$ denotes the probability of getting any coupon in the set of coupons J; a numerical check of the two expressions is sketched below.
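As a check, the inclusion-exclusion expansion of Flajolet's integral and the complement form quoted above can be evaluated for a small example distribution (the probabilities below are an arbitrary illustration, not taken from the source):

```python
from itertools import combinations

p = [0.5, 0.3, 0.2]   # example coupon probabilities; they must sum to 1
m = len(p)

# Expanding the product inside the integral gives E(T) = sum over nonempty J of (-1)^{|J|+1} / P_J.
e_subsets = sum((-1) ** (len(J) + 1) / sum(p[i] for i in J)
                for q in range(1, m + 1) for J in combinations(range(m), q))

# The equivalent form quoted above, summing over complements with |J| = q < m.
e_complements = sum((-1) ** (m - 1 - q) * sum(1 / (1 - sum(p[i] for i in J))
                                              for J in combinations(range(m), q))
                    for q in range(m))

print(e_subsets, e_complements)   # both give the same expectation (≈ 6.65 here)
```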

Martingales

Define a discrete random process $$N(0), N(1), \dots$$ by letting $$N(t)$$ be the number of coupons not yet seen after $$t$$ draws. The process is generated by a Markov chain with states $$n, n-1, \dots, 1, 0$$ and transition probabilities $$p_{i \to i-1} = i/n, \quad p_{i \to i} = 1-i/n.$$ Now define $$M(t) := N(t) \left(\frac{n}{n-1}\right)^t.$$ Then $$M$$ is a martingale, since $$E[M(t+1)|N(t)] = \left(\frac{n}{n-1}\right)^{t+1} E[N(t+1)|N(t)] = \left(\frac{n}{n-1}\right)^{t+1} \left(N(t) - \frac{N(t)}{n}\right) = M(t).$$ Consequently, $$E[M(t)] = M(0) = n$$, and so $$E[N(t)] = n(1-1/n)^t$$. In particular, we have the limit $$\lim_{n \to \infty} E[N(n\ln n + cn)] = e^{-c}$$ for any $$c > 0$$. This suggests a limit law for $$T$$.
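The identity $$E[N(t)] = n(1-1/n)^t$$ and its limit $$e^{-c}$$ can be checked by simulation; the sketch below uses assumed values of n, c and the number of trials:

```python
import random
from math import log, exp

def unseen_after(n, t):
    """Number of coupon types not yet seen after t uniform draws with replacement."""
    return n - len({random.randrange(n) for _ in range(t)})

n, c, trials = 1000, 1.0, 2000
t = int(n * log(n) + c * n)
empirical = sum(unseen_after(n, t) for _ in range(trials)) / trials
print(empirical, n * (1 - 1 / n) ** t, exp(-c))   # all three should be close
```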

More generally, each $$\left(\frac{n}{n-k}\right)^t N(t) (N(t)-1) \cdots (N(t) - k+1)$$ is a martingale, which allows us to calculate all factorial moments of $$N(t)$$. For example, $$E [N(t)^2] = n(n-1)\left(\frac{n-2}{n}\right)^t + n\left(\frac{n-1}{n}\right)^t, \quad n \geq 2,$$ giving (via $$\operatorname{Var}[N] = E[N(N-1)] + E[N] - E[N]^2$$) another limit law $$\lim_{n \to \infty} \operatorname{Var}[N(n\ln n + cn)] = e^{-c}$$. More generally, $$\lim_{n \to \infty} E[N(n\ln n + cn) \cdots (N(n\ln n + cn) - k+1 )] = e^{-kc},$$ meaning that $$N(n \ln n + cn)$$ has all factorial moments converging to constants, so it converges to some probability distribution on $$0, 1, 2, \dots$$.

Let $$N$$ be the random variable with the limit distribution. We have$$\begin{aligned} E[1] &= 1 \\ E[N] &= e^{-c} \\ E[N(N-1)] &= e^{-2c} \\ E[N(N-1)(N-2)] &= e^{-3c} \\ & \vdots \end{aligned} $$By introducing a new variable $$t$$, we can sum both sides explicitly: $$E[1 + Nt/1! + N(N-1)t^2/2! + \cdots ] = 1 + e^{-c}t/1! + e^{-2c}t^2/2! + \cdots,$$ and since the left-hand side is the binomial expansion of $$(1+t)^N$$, this gives $$E[(1+t)^N] = e^{e^{-c}t}$$.

Setting $$t = -1$$ (so that $$(1+t)^N$$ becomes the indicator of the event $$N = 0$$), we have $$Pr(N = 0) = e^{-e^{-c}}$$, which is precisely what the limit law states.

By taking the derivative $$d/dt$$ multiple times and evaluating at $$t=-1$$, we find that $$Pr(N=k) = \frac{e^{-kc}}{k!} e^{-e^{-c}}$$, which is a Poisson distribution with mean $$e^{-c}$$.
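A simulation makes this Poisson limit visible: for moderately large n, the distribution of $$N(n \ln n + cn)$$ is already close to Poisson with mean $$e^{-c}$$ (a sketch with assumed parameters, not from the source):

```python
import random
from math import log, exp, factorial
from collections import Counter

n, c, trials = 500, 0.5, 5000
t = int(n * log(n) + c * n)
counts = Counter(n - len({random.randrange(n) for _ in range(t)}) for _ in range(trials))

lam = exp(-c)   # the limiting Poisson mean e^{-c}
for k in range(4):
    print(k, counts[k] / trials, exp(-lam) * lam**k / factorial(k))  # empirical vs. Poisson(e^{-c})
```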