Monotone likelihood ratio property

Figure: A monotone likelihood ratio in distributions $$f(x)$$ and $$g(x)$$. The ratio of the density functions is increasing in the argument $$x$$, so $$f(x)/g(x)$$ satisfies the monotone likelihood ratio property.

The monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions $$f(x)$$ and $$g(x)$$ have the property if
 * for any $$x_1 > x_0$$,  $$\frac{f(x_1)}{g(x_1)} \geq \frac{f(x_0)}{g(x_0)}$$

that is, if the ratio is nondecreasing in the argument $$x$$.

If the densities are differentiable, the property may equivalently be stated as
 * $$\frac{\partial}{\partial x} \left( \frac{f(x)}{g(x)} \right) \geq 0$$
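The definition above can be checked numerically. Below is a minimal sketch (the `normal_pdf` and `satisfies_mlrp` helpers are illustrative, not from the source): for two normal densities with equal variance, $$f = N(1,1)$$ and $$g = N(0,1)$$, the ratio $$f(x)/g(x) = e^{x - 1/2}$$ is increasing in $$x$$, so the pair satisfies the MLRP; two normals with the same mean but different variances do not.

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def satisfies_mlrp(f, g, grid):
    """Return True if f(x)/g(x) is nondecreasing along the sorted grid."""
    ratios = [f(x) / g(x) for x in sorted(grid)]
    return all(r1 <= r2 for r1, r2 in zip(ratios, ratios[1:]))

grid = [x / 10 for x in range(-50, 51)]

# f = N(1, 1) vs g = N(0, 1): ratio is exp(x - 1/2), increasing in x.
mlrp_holds = satisfies_mlrp(lambda x: normal_pdf(x, 1.0, 1.0),
                            lambda x: normal_pdf(x, 0.0, 1.0), grid)

# Same means but different variances: the ratio peaks at x = 0, so no MLRP.
mlrp_fails = satisfies_mlrp(lambda x: normal_pdf(x, 0.0, 1.0),
                            lambda x: normal_pdf(x, 0.0, 2.0), grid)
```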

For two distributions that satisfy the definition with respect to some argument x, we say they "have the MLRP in x." For a family of distributions that all satisfy the definition with respect to some statistic T(X), we say they "have the MLR in T(X)."

Intuition
The MLRP is used to represent a data-generating process that enjoys a straightforward relationship between the magnitude of some observed variable and the distribution it was drawn from. If $$f(x)$$ satisfies the MLRP with respect to $$g(x)$$, then the higher the observed value $$x$$, the more likely it was drawn from distribution $$f$$ rather than $$g$$. As usual for monotonic relationships, the likelihood ratio's monotonicity comes in handy in statistics, particularly when using maximum-likelihood estimation. Distribution families with MLR also enjoy a number of well-behaved stochastic properties, such as first-order stochastic dominance and monotone hazard rates. Unfortunately, as is also usual, the strength of this assumption comes at the price of realism: many processes in the world do not exhibit a monotonic correspondence between input and output.

Example: Working hard or slacking off
Suppose you are working on a project, and you can either work hard or slack off. Call your choice $$e$$ and the quality of the resulting project $$q$$. If the MLRP holds for the distribution of $$q$$ conditional on your effort $$e$$, then the higher the quality, the more likely you worked hard; conversely, the lower the quality, the more likely you slacked off.


 * 1) Choose effort $$e \in \{H,L\}$$, where $$H$$ means high and $$L$$ means low.
 * 2) Observe $$q$$ drawn from $$f(q|e)$$. By Bayes' law with a uniform prior,
 * $$\Pr[e=H|q]=\frac{f(q|H)}{f(q|H)+f(q|L)}$$
 * 3) Suppose $$f(q|e)$$ satisfies the MLRP. Rearranging, the probability the worker worked hard is
 * $$\frac{1}{1+f(q|L)/f(q|H)}$$

which, thanks to the MLRP, is monotonically increasing in $$q$$. Hence if some employer is conducting a "performance review," he can infer his employee's behavior from the quality of the work.
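The performance-review inference above can be sketched concretely. Assume (hypothetically; these distributions are not from the source) that quality is $$N(1,1)$$ under high effort and $$N(0,1)$$ under low effort, with a uniform prior over effort:

```python
import math

def normal_pdf(x, mean, sd=1.0):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def pr_high_effort(q):
    """Pr[e = H | q] = 1 / (1 + f(q|L) / f(q|H)) under a uniform prior."""
    return 1.0 / (1.0 + normal_pdf(q, 0.0) / normal_pdf(q, 1.0))

qualities = [-2.0, 0.0, 0.5, 1.0, 3.0]
posteriors = [pr_high_effort(q) for q in qualities]

# Because f(q|H)/f(q|L) = exp(q - 1/2) is increasing in q (the MLRP),
# the posterior probability of high effort is increasing in observed quality.
is_monotone = all(p1 < p2 for p1, p2 in zip(posteriors, posteriors[1:]))
```

At $$q = 0.5$$ the two likelihoods coincide, so the posterior is exactly one half; higher quality pushes it toward one.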

Families of distributions satisfying MLR
Statistical models very often assume that the data are generated by some family of probability distributions and try to estimate which family member generated the data. Estimation is greatly simplified if every member of the family satisfies the MLRP (with respect to some statistic).

A family of density functions $$\{ f_\theta (x)\}_{\theta\in \Theta}$$ indexed by a parameter $$\theta$$ taking values in a set $$\Theta$$ is said to have a monotone likelihood ratio (MLR) in the statistic $$T(X)$$ if for any $$\theta_1 < \theta_2$$,
 * $$\frac{f_{\theta_2}(X=x_1,x_2,x_3,\ldots)}{f_{\theta_1}(X=x_1,x_2,x_3,\ldots)}$$ is a nondecreasing function of $$T(X)$$.

Then we say the family of distributions "has MLR in $$T(X)$$".
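A standard illustration (an assumed example, not from the source): the Binomial$$(n,\theta)$$ family has MLR in $$T(x) = x$$, since $$\frac{f_{\theta_2}(x)}{f_{\theta_1}(x)} = \left(\frac{\theta_2}{\theta_1}\right)^x \left(\frac{1-\theta_2}{1-\theta_1}\right)^{n-x}$$ is increasing in $$x$$ whenever $$\theta_2 > \theta_1$$. A minimal numerical check:

```python
from math import comb

def binom_pmf(x, n, theta):
    """Probability of x successes in n Bernoulli(theta) trials."""
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

def has_mlr(n, theta1, theta2):
    """Check f_{theta2}(x) / f_{theta1}(x) is nondecreasing in x, for theta2 > theta1."""
    ratios = [binom_pmf(x, n, theta2) / binom_pmf(x, n, theta1) for x in range(n + 1)]
    return all(r1 <= r2 for r1, r2 in zip(ratios, ratios[1:]))

binomial_has_mlr = has_mlr(n=10, theta1=0.3, theta2=0.7)
```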

Hypothesis Testing
If the family of random variables has the MLRP in $$T(X)$$, a uniformly most powerful test can easily be determined for the hypotheses $$H_0 : \theta \leq \theta_0$$ versus $$H_1 : \theta > \theta_0$$.
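The uniformly most powerful test takes a simple threshold form: reject $$H_0$$ when $$T(X)$$ exceeds a cutoff calibrated at $$\theta_0$$ (this is the Karlin–Rubin construction). A sketch under an assumed example, $$X \sim \text{Binomial}(10, \theta)$$ with $$T(X) = X$$ and $$\theta_0 = 0.5$$:

```python
from math import comb

def binom_pmf(x, n, theta):
    """Probability of x successes in n Bernoulli(theta) trials."""
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

def ump_threshold(n, theta0, alpha):
    """Smallest cutoff c with Pr_{theta0}[X >= c] <= alpha (non-randomized test)."""
    for c in range(n + 2):
        tail = sum(binom_pmf(x, n, theta0) for x in range(c, n + 1))
        if tail <= alpha:
            return c
    return n + 1

# Test H0: theta <= 0.5 vs H1: theta > 0.5 at level alpha = 0.05:
# reject H0 whenever the observed success count is at least c.
c = ump_threshold(n=10, theta0=0.5, alpha=0.05)
```

Because the family has MLR in $$X$$, this single threshold rule is simultaneously most powerful against every alternative $$\theta > \theta_0$$.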

Example: Effort and output
Let $$e$$ ("effort") be an input variable into a stochastic production function, and $$y$$ be the random variable that represents output. Let $$f(y;e)$$ be the PDF of $$y$$ for each $$e$$. Then the monotone likelihood ratio property (MLRP) of $$f$$ is expressed as follows: for any $$e_1, e_2$$, the fact that $$e_2 > e_1$$ implies that the ratio $$f(y;e_2)/f(y;e_1)$$ is increasing in $$y$$.
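For concreteness, assume (hypothetically; this functional form is not from the source) that output given effort is exponential with mean $$e$$, i.e. $$f(y;e) = \frac{1}{e} e^{-y/e}$$. Then $$f(y;e_2)/f(y;e_1) = \frac{e_1}{e_2} e^{y(1/e_1 - 1/e_2)}$$, which is increasing in $$y$$ when $$e_2 > e_1$$:

```python
import math

def output_pdf(y, e):
    """Assumed production technology: output y | effort e ~ Exponential(mean e)."""
    return math.exp(-y / e) / e

e1, e2 = 1.0, 2.0  # e2 > e1: more effort, higher expected output
ys = [0.5 * k for k in range(20)]
likelihood_ratios = [output_pdf(y, e2) / output_pdf(y, e1) for y in ys]

# f(y; e2) / f(y; e1) = (e1/e2) * exp(y * (1/e1 - 1/e2)), increasing in y.
ratio_increasing = all(r1 < r2 for r1, r2 in zip(likelihood_ratios, likelihood_ratios[1:]))
```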

Relation to other statistical properties
If a family of distributions $$f_\theta(x)$$ has the monotone likelihood ratio property in $$T(X)$$,
 * 1) the family has monotone decreasing hazard rates in $$\theta$$ (note: not necessarily $$T(X)$$)
 * 2) the family exhibits first-order stochastic dominance in $$\theta$$ (and thereby second-order stochastic dominance as well)
 * 3) and the best Bayesian update of $$\theta$$ is increasing in $$T(X)$$.

The converse does not hold: monotone hazard rates do not imply the MLRP, nor does any sort of stochastic dominance. Proofs of the implications follow.

Proofs
Let distribution family $$f_\theta$$ satisfy MLR in x, so that for $$\theta_1>\theta_0$$ and $$x_1>x_0$$:
 * $$\frac{f_{\theta_1}(x_1)}{f_{\theta_0}(x_1)} \geq \frac{f_{\theta_1}(x_0)}{f_{\theta_0}(x_0)}$$

and cross-multiply the denominators to get an equivalent form that is easier to work with:

$$f_{\theta_1}(x_1) f_{\theta_0}(x_0) \geq f_{\theta_1}(x_0) f_{\theta_0}(x_1)$$.

Integrate the MLRP condition above twice. First, fixing $$x_1 = x$$ and integrating over $$x_0$$ on $$(-\infty, x)$$ gives
 * $$f_{\theta_1}(x) F_{\theta_0}(x) \geq F_{\theta_1}(x) f_{\theta_0}(x)$$

Second, fixing $$x_0 = x$$ and integrating over $$x_1$$ on $$(x, \infty)$$ gives
 * $$[1-F_{\theta_1}(x)] f_{\theta_0}(x) \geq f_{\theta_1}(x) [1-F_{\theta_0}(x)]$$

First-order stochastic dominance
Combine the two inequalities above to get first-order dominance:
 * $$F_{\theta_1}(x) \leq F_{\theta_0}(x) \ \forall x$$

Monotone hazard rate
Use only the second inequality above to get a monotone hazard rate:
 * $$\frac{f_{\theta_1}(x)}{1-F_{\theta_1}(x)} \leq \frac{f_{\theta_0}(x)}{1-F_{\theta_0}(x)} \ \forall x $$
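Both implications can be sanity-checked numerically for an assumed family with MLR (exponential distributions indexed by their mean $$\theta$$; this example is illustrative, not from the source):

```python
import math

def pdf(x, theta):
    """Exponential density with mean theta."""
    return math.exp(-x / theta) / theta

def cdf(x, theta):
    """Exponential CDF with mean theta."""
    return 1.0 - math.exp(-x / theta)

theta0, theta1 = 1.0, 2.0  # theta1 > theta0; this family has MLR in x
xs = [0.1 * k for k in range(1, 50)]

# First-order stochastic dominance: F_{theta1}(x) <= F_{theta0}(x) for all x.
fosd = all(cdf(x, theta1) <= cdf(x, theta0) for x in xs)

# Hazard-rate ordering: f_{theta1}/(1 - F_{theta1}) <= f_{theta0}/(1 - F_{theta0});
# here the hazard rates are the constants 1/theta1 and 1/theta0.
hazard_ordered = all(
    pdf(x, theta1) / (1 - cdf(x, theta1)) <= pdf(x, theta0) / (1 - cdf(x, theta0))
    for x in xs
)
```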

Economics
The MLR is an important condition on the type distribution of agents in mechanism design. Most solutions to mechanism design models assume that the type distribution satisfies the MLR, in order to take advantage of a common solution method.