
The Neyman-Pearson Lemma establishes that the likelihood ratio test is the most powerful procedure for testing a simple null hypothesis against a simple alternative.

The Neyman-Pearson lemma is not directly applicable to composite hypotheses. However, in certain special cases the lemma yields the same optimal test for every member of a composite alternative, and the test is then uniformly most powerful.

Statement of the Lemma
In a simple-vs-simple (or point-vs-point) hypothesis testing problem, the distribution of the observed data X is fully specified, with no unknown parameters, under both the null and the alternative hypothesis. Here, X may represent a single random observation, a vector of observations, or (using measure-theoretic definitions of densities) any other type of random data.

Mathematically, let $$f(x)$$ be the density function of X, and suppose that one wishes to test the null hypothesis $$H_0: f=f_0$$ against the alternative hypothesis $$H_1: f=f_1$$. The Neyman-Pearson test compares the ratio of the two densities:

Accept $$H_0$$ if $$f_1(X) < c f_0(X)$$.

Reject $$H_0$$ if $$f_1(X) \ge c f_0(X)$$.

The critical value c is chosen so as to obtain a specified significance level $$\alpha$$.

Note that for discrete data, $$f_0$$ and $$f_1$$ are probability mass functions. In this case the boundary case $$f_1(X) = c f_0(X)$$ requires more care - see the section on critical functions below.

The Neyman-Pearson Lemma states that, among all possible tests with significance level $$\alpha$$, the Neyman-Pearson (or likelihood ratio) test defined above is the most powerful test. That is, it has the greatest probability of rejecting the null hypothesis, in favour of the alternative, when the alternative hypothesis is true.
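As an illustration of the decision rule, the sketch below tests a standard normal null against a unit-variance normal with mean 1 (an assumed example, not part of the statement above); the likelihood ratio is then $$e^{X-1/2}$$, so the test rejects for large X.

```python
from statistics import NormalDist

alpha = 0.05
f0 = NormalDist(mu=0, sigma=1)   # null:        X ~ N(0, 1)
f1 = NormalDist(mu=1, sigma=1)   # alternative: X ~ N(1, 1)

def likelihood_ratio(x):
    # f1(x)/f0(x) = exp(x - 1/2), a strictly increasing function of x
    return f1.pdf(x) / f0.pdf(x)

# Because the ratio is increasing in x, "ratio >= c" is the same as
# "x >= quantile"; the upper-alpha null quantile gives level alpha exactly.
q = f0.inv_cdf(1 - alpha)        # about 1.645
c = likelihood_ratio(q)

def reject_H0(x):
    return likelihood_ratio(x) >= c
```

With these choices, reject_H0(2.0) is True while reject_H0(1.0) is False.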

Neyman-Pearson Lemma in terms of critical functions
In statistical hypothesis testing, a critical function $$\delta(X)$$ gives the probability, as a function of the observation X, that the null hypothesis is rejected. Usually one considers only non-randomized tests, whose decision is a deterministic function of X, so the critical function takes only the values 0 and 1 (corresponding to accepting or rejecting the null hypothesis). In a randomized test, it may take any value between 0 and 1. The advantage of introducing critical functions and randomized tests in the present context is that they allow significance levels to be attained exactly in discrete problems, such as those involving the Poisson distribution.

In terms of critical functions, the significance level of the test is $$E_0 \delta(X) $$, while the power is $$E_1 \delta(X)$$, where $$E_0$$ and $$E_1$$ denote expectation with respect to the hypothesized distributions.

The critical function for the Neyman-Pearson LRT with specified significance level $$\alpha$$ is

$$ \delta(X) = \begin{cases} 1 & f_1(X) > c f_0(X) \\ p & f_1(X) = cf_0(X) \\ 0 & f_1(X) < cf_0(X) \end{cases} $$ where p and c are determined by the requirement $$E_0(\delta(X))=\alpha$$ (with continuous data the event $$f_1(X) = c f_0(X)$$ typically has probability zero, so p is arbitrary).

The Neyman-Pearson Lemma can be stated as follows: Let $$\delta(X)$$ be the critical function for the level-$$\alpha$$ likelihood ratio test, and $$\delta^{\star}(X)$$ be the critical function for any other level-$$\alpha$$ test (i.e. $$E_0 \delta^{\star}(X) =\alpha$$). Then

$$ E_1 \delta(X) \ge E_1 \delta^{\star}(X). $$
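In the discrete case, the level and power of a critical function are simply weighted sums. A minimal sketch, using a made-up three-point distribution (the masses below are illustrative, not from the text):

```python
# Hypothetical pmfs on the support {0, 1, 2} (illustrative values only).
f0 = {0: 0.5, 1: 0.3, 2: 0.2}      # null distribution
f1 = {0: 0.2, 1: 0.3, 2: 0.5}      # alternative distribution
delta = {0: 0.0, 1: 0.0, 2: 1.0}   # non-randomized test: reject only when X = 2

level = sum(delta[x] * f0[x] for x in f0)   # E0 delta(X) = 0.2
power = sum(delta[x] * f1[x] for x in f1)   # E1 delta(X) = 0.5
```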

Example: Poisson distribution
Suppose that X is a single observation from a Poisson distribution with mean $$\lambda$$, and one wishes to test the hypothesis $$H_0:\lambda=\lambda_0$$ against the alternative $$H_1:\lambda=\lambda_1$$. Suppose that $$\lambda_1 > \lambda_0$$.

The likelihood ratio test statistic is

$$ \frac{f_1(X)}{f_0(X)} = \frac{ \lambda_1^X e^{-\lambda_1} / X! }{ \lambda_0^X e^{-\lambda_0} / X! } = \left ( \frac{\lambda_1}{\lambda_0} \right )^X e^{-(\lambda_1-\lambda_0) }. $$ This is a monotone increasing function of X, and so the Neyman-Pearson test rejects for large values of X. The critical value is determined by the specified significance level $$\alpha$$.

To be specific, suppose $$\lambda_0=5$$, $$\lambda_1=10$$, and $$\alpha=0.1$$. Poisson tables show that if the null hypothesis is true, $$P_0(X > 8)=0.06809$$ and $$P_0(X=8) = 0.06528$$. The Neyman-Pearson test therefore rejects $$H_0$$ whenever X>8, and with probability p when X=8, where p is determined by the equation

$$ 0.1 = 0.06809 + p \times 0.06528, $$ giving p=0.4888. The critical function for the Neyman-Pearson test is

$$ \delta(X) = \begin{cases} 1 & X>8 \\ 0.4888 & X=8 \\ 0 & X<8 \end{cases}. $$

Following this derivation carefully, one notes that the rule depends only on $$\lambda_0=5$$ and the significance level. The value of $$\lambda_1$$ plays no role beyond the requirement $$\lambda_1>\lambda_0$$. It follows that this test is the Neyman-Pearson test for any $$\lambda_1>5$$, and is therefore uniformly most powerful for the composite alternative $$H_1: \lambda > 5$$.
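The boundary point and randomization probability above can be recomputed directly from the Poisson mass function. A minimal sketch (the helper name np_poisson_test is just an illustrative label):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def np_poisson_test(lam0, alpha):
    """Boundary point t and randomization probability p for the upper-tail
    Neyman-Pearson test: reject when X > t, reject with probability p when X = t."""
    cdf = 0.0
    for t in range(200):
        cdf += poisson_pmf(t, lam0)
        if 1.0 - cdf <= alpha:                    # P0(X > t) <= alpha: stop at t
            p = (alpha - (1.0 - cdf)) / poisson_pmf(t, lam0)
            return t, p

t, p = np_poisson_test(5.0, 0.1)
print(t, round(p, 4))   # prints: 8 0.4888
```

This reproduces the critical function above: reject whenever X > 8, and with probability 0.4888 when X = 8.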

Example: Relation to Investment
Suppose that one has $20 to invest. There are seven possible investments (denoted by A to G), with costs and return as follows:

Obviously, investing the money in E, F or G is a bad strategy, since they lose money. If the $20 is invested in B and C, the return is $12+$20=$32. If the investment is in A and D, the return is $35.

However, the best strategy for a $20 investment, if fractional shares are allowed, is C plus two-thirds of D. The return is then 20+⅔×27=38 dollars. C has the highest ratio of return to cost, followed by D: the best strategy is to invest where the rates of return are highest.

This is exactly the Neyman-Pearson Lemma. Interpret A to G as the possible values of the random variable; the "cost/100" as the probability distribution under the null hypothesis, and "return/100" as the probability distribution under the alternative. The $20 to invest is the significance level, while the return of the chosen options is the power. The Neyman-Pearson lemma says to invest your money (or significance level) in the options where the rate-of-return (return/cost; or likelihood ratio) is highest.
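The greedy rule behind this analogy is the fractional knapsack: spend the budget in decreasing order of return/cost, splitting the marginal item. A sketch with made-up costs and returns (illustrative values, not the table from this section):

```python
# Hypothetical investments: name -> (cost, return); illustrative values only.
investments = {"A": (8, 15), "B": (5, 12), "C": (10, 20), "D": (12, 27)}
budget = 20.0

# Buy in decreasing order of return/cost; a fractional purchase of the
# marginal item plays the role of randomization in the Neyman-Pearson test.
total_return = 0.0
by_ratio = sorted(investments.values(), key=lambda cr: cr[1] / cr[0], reverse=True)
for cost, ret in by_ratio:
    if budget <= 0.0:
        break
    frac = min(1.0, budget / cost)   # buy a fraction of the marginal item
    total_return += frac * ret
    budget -= frac * cost
```

With these illustrative numbers the budget buys all of B and D and 3/10 of C, for a total return of 45.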

Proof of the Neyman-Pearson Lemma
By definition of expectation, $$E_1 \delta(X) -E_1 \delta^{\star}(X) =\int (\delta(x)-\delta^{\star}(x))f_1(x)dx.$$

Now, from the claim (see below) that for all x, $$(\delta(x)-\delta^{\star}(x))f_1(x) \ge (\delta(x)-\delta^{\star}(x)) c f_0(x)$$, it follows that

$$ \begin{align} E_1\delta(X)-E_1 \delta^{\star}(X) &\ge c \int (\delta(x)-\delta^{\star}(x)) f_0(x) dx \\ &= c (E_0 \delta(X) - E_0 \delta^{\star}(X)) \\ &= c (\alpha-\alpha) \\ &= 0, \end{align} $$ which completes the proof.

To establish the claim, consider x in three regions:
 * 1) For $$f_1(x)>c f_0(x)$$, $$\delta(x)=1$$, so $$\delta(x)-\delta^{\star}(x)\ge 0$$ (since $$\delta^{\star}(x) \le 1$$), and the claim follows by multiplying the inequality by this non-negative factor.
 * 2) For $$f_1(x)=c f_0(x)$$, both sides of the claim are equal.
 * 3) For $$f_1(x)<c f_0(x)$$, $$\delta(x)=0$$, so $$\delta(x)-\delta^{\star}(x)\le 0$$; multiplying the inequality $$f_1(x) < c f_0(x)$$ by this non-positive factor reverses its direction, which is exactly the claim.

Note: For discrete distributions, replace the integrals by sums. More generally, one may take densities with respect to a common dominating measure $$\mu$$ and integrate with respect to $$\mu(dx)$$.
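The inequality can also be checked numerically for the Poisson example above. The sketch below (with an assumed truncation of the support at 60, where the tail mass is negligible) builds the Neyman-Pearson test greedily, by spending the significance level on the sample points with the largest ratio $$f_1/f_0$$ first, and compares its power against arbitrary randomized tests of the same level:

```python
import math
import random

lam0, lam1 = 5.0, 10.0
SUPPORT = range(60)   # assumed truncation of the Poisson support

def pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

f0 = [pmf(k, lam0) for k in SUPPORT]
f1 = [pmf(k, lam1) for k in SUPPORT]

def np_power(alpha):
    """Power of the level-alpha Neyman-Pearson test: spend the significance
    level greedily on the points with the largest likelihood ratio f1/f0."""
    order = sorted(SUPPORT, key=lambda k: f1[k] / f0[k], reverse=True)
    budget, power = alpha, 0.0
    for k in order:
        if budget <= 0.0:
            break
        frac = min(1.0, budget / f0[k])   # partial spend = randomization
        power += frac * f1[k]
        budget -= frac * f0[k]
    return power

# No randomized test beats the Neyman-Pearson test at its own level.
random.seed(0)
ok = True
for _ in range(200):
    delta = [random.random() for _ in SUPPORT]          # arbitrary critical function
    level = sum(d * p0 for d, p0 in zip(delta, f0))     # E0 delta(X)
    power = sum(d * p1 for d, p1 in zip(delta, f1))     # E1 delta(X)
    ok = ok and power <= np_power(level) + 1e-9
```

After the loop, ok remains True: every randomly generated critical function is dominated by the Neyman-Pearson test of the same level, as the lemma asserts.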