Tobit model

In statistics, a tobit model is any of a class of regression models in which the observed range of the dependent variable is censored in some way. The term was coined by Arthur Goldberger in reference to James Tobin, who developed the model in 1958 to mitigate the problem of zero-inflated data for observations of household expenditure on durable goods. Because Tobin's method can be easily extended to handle truncated and other non-randomly selected samples, some authors adopt a broader definition of the tobit model that includes these cases.

Tobin's idea was to modify the likelihood function so that it reflects the unequal sampling probability for each observation depending on whether the latent dependent variable fell above or below the determined threshold. For a sample that, as in Tobin's original case, was censored from below at zero, the sampling probability for each non-limit observation is simply the height of the appropriate density function. For any limit observation, it is the cumulative distribution, i.e. the integral below zero of the appropriate density function. The tobit likelihood function is thus a mixture of densities and cumulative distribution functions.

The likelihood function
Below are the likelihood and log likelihood functions for a type I tobit. This is a tobit that is censored from below at $$ y_L $$ when the latent variable $$ y_j^* \leq y_L $$. In writing out the likelihood function, we first define an indicator function $$ I $$:

 * $$ I(y) = \begin{cases} 0 & \text{if } y \leq y_L, \\ 1 & \text{if } y > y_L. \end{cases}$$

Next, let $$ \Phi $$ be the standard normal cumulative distribution function and $$ \varphi $$ the standard normal probability density function. For a data set with $$ N $$ observations the likelihood function for a type I tobit is


 * $$ \mathcal{L}(\beta, \sigma) = \prod _{j=1}^N \left(\frac{1}{\sigma}\varphi \left(\frac{y_j-X_j\beta }{\sigma} \right)\right)^{I(y_j)} \left(1-\Phi \left(\frac{X_j\beta-y_L}{\sigma}\right)\right)^{1-I(y_j)}$$

and the log likelihood is given by
 * $$\begin{align} \log \mathcal{L}(\beta, \sigma) &= \sum^N_{j = 1} I(y_j) \log \left( \frac{1}{\sigma} \varphi\left( \frac{y_j - X_j\beta}{\sigma} \right) \right) + (1 - I(y_j)) \log\left( 1 - \Phi\left( \frac{X_j \beta - y_L}{\sigma} \right) \right) \\ &= \sum_{y_j>y_L} \log \left( \frac{1}{\sigma} \varphi\left( \frac{y_j - X_j\beta}{\sigma} \right) \right) + \sum_{y_j=y_L} \log\left( \Phi\left( \frac{ y_L - X_j \beta}{\sigma} \right) \right) \end{align}$$
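As an illustration, this likelihood can be maximized numerically. The following sketch simulates data censored from below at zero and fits a type I tobit by maximum likelihood with SciPy; the parameter values, sample size, and optimizer choice are all illustrative assumptions, not part of any particular application.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated example: latent y* = X beta + sigma * eps, censored from below
# at y_L = 0 (the values 1.0, 2.0, 1.5 are made up for illustration).
n, y_L = 500, 0.0
beta_true, sigma_true = np.array([1.0, 2.0]), 1.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.maximum(X @ beta_true + sigma_true * rng.normal(size=n), y_L)

def neg_loglik(params, X, y, y_L):
    """Negative type I tobit log-likelihood (censoring from below at y_L)."""
    beta, sigma = params[:-1], params[-1]
    xb = X @ beta
    uncens = y > y_L  # the indicator I(y_j): 1 above the limit, 0 at the limit
    # Density term for non-limit observations, CDF term for limit observations.
    ll = np.sum(norm.logpdf((y[uncens] - xb[uncens]) / sigma) - np.log(sigma))
    ll += np.sum(norm.logcdf((y_L - xb[~uncens]) / sigma))
    return -ll

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, 1.0]), args=(X, y, y_L),
               method="L-BFGS-B",
               bounds=[(None, None), (None, None), (1e-6, None)])
beta_hat, sigma_hat = res.x[:-1], res.x[-1]
```

With a moderate sample, the estimates land close to the simulated parameters, unlike an OLS fit on the censored observations.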

Reparametrization
The log-likelihood as stated above is not globally concave, which complicates the maximum likelihood estimation. Olsen suggested the simple reparametrization $$\beta = \delta/\gamma$$ and $$\sigma^2 = \gamma^{-2}$$, resulting in a transformed log-likelihood,


 * $$\log \mathcal{L}(\delta, \gamma) = \sum_{y_j>y_L} \left\{ \log \gamma + \log \left[ \varphi\left( \gamma y_j - X_j \delta \right) \right] \right\} + \sum_{y_j=y_L} \log\left[ \Phi\left( \gamma y_L - X_j \delta \right) \right]$$

which is globally concave in terms of the transformed parameters.
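The transformed likelihood can be maximized directly in $$(\delta, \gamma)$$ and the original parameters recovered as $$\beta = \delta/\gamma$$ and $$\sigma = 1/\gamma$$. A minimal sketch, with simulated data and starting values that are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n, y_L = 500, 0.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.maximum(X @ np.array([1.0, 2.0]) + 1.5 * rng.normal(size=n), y_L)

def neg_loglik_olsen(params, X, y, y_L):
    """Negative log-likelihood in Olsen's (delta, gamma) parametrization."""
    delta, gamma = params[:-1], params[-1]
    uncens = y > y_L
    # log(gamma) + log(phi(gamma*y - X delta)) for non-limit observations,
    # log(Phi(gamma*y_L - X delta)) for limit observations.
    ll = np.sum(np.log(gamma) + norm.logpdf(gamma * y[uncens] - X[uncens] @ delta))
    ll += np.sum(norm.logcdf(gamma * y_L - X[~uncens] @ delta))
    return -ll

res = minimize(neg_loglik_olsen, x0=np.array([0.0, 0.0, 1.0]), args=(X, y, y_L),
               method="L-BFGS-B",
               bounds=[(None, None), (None, None), (1e-6, None)])
delta_hat, gamma_hat = res.x[:-1], res.x[-1]
beta_hat, sigma_hat = delta_hat / gamma_hat, 1.0 / gamma_hat  # back-transform
```

Because the transformed objective is globally concave, the optimization is insensitive to the starting point.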

For the truncated (tobit II) model, Orme showed that while the log-likelihood is not globally concave, it is concave at any stationary point under the above transformation.

Consistency
If the relationship parameter $$\beta$$ is estimated by regressing the observed $$ y_i $$ on $$ x_i $$, the resulting ordinary least squares regression estimator is inconsistent. It will yield a downwards-biased estimate of the slope coefficient and an upward-biased estimate of the intercept. Takeshi Amemiya (1973) has proven that the maximum likelihood estimator suggested by Tobin for this model is consistent.
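The downward slope bias and upward intercept bias are easy to see in simulation. A minimal sketch, with a true intercept of 1.0 and true slope of 2.0 (both values assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta0, beta1, sigma = 10_000, 1.0, 2.0, 1.5
x = rng.normal(size=n)
y_star = beta0 + beta1 * x + sigma * rng.normal(size=n)
y = np.maximum(y_star, 0.0)  # censor from below at zero

# OLS of the observed (censored) y on x, via least squares.
X = np.column_stack([np.ones(n), x])
ols = np.linalg.lstsq(X, y, rcond=None)[0]
# ols[1] falls noticeably below the true slope of 2.0 (downward bias),
# while ols[0] lies above the true intercept of 1.0 (upward bias).
```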

Interpretation
The $$\beta$$ coefficient should not be interpreted as the effect of $$x_i$$ on $$y_i$$, as one would with a linear regression model; this is a common error. Instead, it should be interpreted as the combination of (1) the change in $$y_i$$ of those above the limit, weighted by the probability of being above the limit; and (2) the change in the probability of being above the limit, weighted by the expected value of $$y_i$$ if above.
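For a model censored from below at zero with normal errors, this decomposition implies that the marginal effect of $$x_k$$ on the observed $$y_i$$ is the coefficient scaled by the probability of being above the limit, $$\beta_k \, \Phi(X_i\beta/\sigma)$$. A minimal numeric sketch, where the "fitted" values are placeholders rather than estimates from any data set:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical fitted type I tobit (censored from below at 0): beta and
# sigma are placeholder values, not estimates from any particular data.
beta = np.array([1.0, 2.0])   # intercept and slope
sigma = 1.5
x = np.array([1.0, 0.5])      # covariate vector, including the constant

xb = x @ beta
# Marginal effect of x[1] on observed y: the raw coefficient scaled by
# the probability of being uncensored at this x.
marginal_effect = beta[1] * norm.cdf(xb / sigma)
# marginal_effect < beta[1]: the raw coefficient overstates the effect
# on the observed outcome.
```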

Variations of the tobit model
Variations of the tobit model can be produced by changing where and when censoring occurs. Amemiya (1985) classifies these variations into five categories (tobit type I – tobit type V), where tobit type I stands for the first model described above. Schnedler (2005) provides a general formula to obtain consistent likelihood estimators for these and other variations of the tobit model.

Type I
The tobit model is a special case of a censored regression model, because the latent variable $$y_i^*$$ cannot always be observed while the independent variable $$ x_i $$ is observable. A common variation of the tobit model is censoring at a value $$ y_L$$ different from zero:

 * $$ y_i = \begin{cases} y_i^* & \text{if } y_i^* > y_L, \\ y_L & \text{if } y_i^* \leq y_L. \end{cases}$$

Another example is censoring of values above $$ y_U$$.

 * $$ y_i = \begin{cases} y_i^* & \text{if } y_i^* < y_U, \\ y_U & \text{if } y_i^* \geq y_U. \end{cases}$$
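Both censoring rules are simple transformations of the latent draws. A minimal sketch with made-up values ($$y_L = 0.5$$, $$y_U = 3.0$$ chosen for illustration):

```python
import numpy as np

y_star = np.array([-1.2, 0.3, 2.5, 0.0, 4.1])  # latent values (made up)
y_L, y_U = 0.5, 3.0

# Censoring from below: limit observations are recorded at y_L itself.
y_below = np.where(y_star > y_L, y_star, y_L)
# Censoring from above: limit observations are recorded at y_U itself.
y_above = np.where(y_star < y_U, y_star, y_U)
```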

Yet another model results when $$ y_i $$ is censored from above and below at the same time.


 * $$ y_i = \begin{cases} y_i^* & \text{if } y_L < y_i^* < y_U, \\ y_L & \text{if } y_i^* \leq y_L, \\ y_U & \text{if } y_i^* \geq y_U. \end{cases}$$

Type II
Type II tobit models introduce a second latent variable. The selection variable is observed only through its sign, while the outcome is observed only when the selection variable is positive:
 * $$ y_{1i} = \begin{cases} 1 & \text{if } y_{1i}^* > 0, \\ 0 & \text{if } y_{1i}^* \leq 0. \end{cases}$$
 * $$ y_{2i} = \begin{cases} y_{2i}^* & \text{if } y_{1i}^* > 0, \\ 0 & \text{if } y_{1i}^* \leq 0. \end{cases}$$

In Type I tobit, the latent variable absorbs both the process of participation and the outcome of interest. Type II tobit allows the process of participation (selection) and the outcome of interest to be independent, conditional on observable data.

The Heckman selection model falls into the Type II tobit, which is sometimes called Heckit after James Heckman.
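A Type II data-generating process can be sketched as follows; the coefficients, covariates, and error correlation are illustrative assumptions. The outcome is recorded only for participants, while non-participants contribute only the fact of non-participation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Hypothetical Type II (sample-selection) process: a selection equation
# drives y1*, a separate outcome equation drives y2*; the errors are
# correlated, which is what makes naive OLS on the selected sample biased.
x = rng.normal(size=(n, 2))
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
y1_star = 0.5 + 1.0 * x[:, 0] + u[:, 0]   # selection (participation)
y2_star = 1.0 + 2.0 * x[:, 1] + u[:, 1]   # outcome of interest

y1 = (y1_star > 0).astype(int)            # only the sign of y1* is observed
y2 = np.where(y1_star > 0, y2_star, 0.0)  # outcome observed only if selected
```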

Type III
Type III introduces a second observed dependent variable.
 * $$ y_{1i} = \begin{cases} y_{1i}^* & \text{if } y_{1i}^* > 0, \\ 0 & \text{if } y_{1i}^* \leq 0. \end{cases}$$
 * $$ y_{2i} = \begin{cases} y_{2i}^* & \text{if } y_{1i}^* > 0, \\ 0 & \text{if } y_{1i}^* \leq 0. \end{cases}$$

The Heckman model falls into this type.

Type IV
Type IV introduces a third observed dependent variable and a third latent variable.
 * $$ y_{1i} = \begin{cases} y_{1i}^* & \text{if } y_{1i}^* > 0, \\ 0 & \text{if } y_{1i}^* \leq 0. \end{cases}$$
 * $$ y_{2i} = \begin{cases} y_{2i}^* & \text{if } y_{1i}^* > 0, \\ 0 & \text{if } y_{1i}^* \leq 0. \end{cases}$$
 * $$ y_{3i} = \begin{cases} y_{3i}^* & \text{if } y_{1i}^* \leq 0, \\ 0 & \text{if } y_{1i}^* > 0. \end{cases}$$

Type V
Similar to Type II, in Type V only the sign of $$y_{1i}^*$$ is observed.
 * $$ y_{2i} = \begin{cases} y_{2i}^* & \text{if } y_{1i}^* > 0, \\ 0 & \text{if } y_{1i}^* \leq 0. \end{cases}$$
 * $$ y_{3i} = \begin{cases} y_{3i}^* & \text{if } y_{1i}^* \leq 0, \\ 0 & \text{if } y_{1i}^* > 0. \end{cases}$$

Non-parametric version
If the underlying latent variable $$y_i^*$$ is not normally distributed, one must use quantiles instead of moments to analyze the observable variable $$y_i$$. Powell's censored least absolute deviations (CLAD) estimator offers one possible way to achieve this.
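A sketch of the CLAD idea, which minimizes $$\sum_i |y_i - \max(0, X_i\beta)|$$ and requires no normality assumption on the errors. The Laplace error distribution, starting values, and optimizer choice below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
eps = rng.laplace(scale=1.0, size=n)     # non-normal errors: a normal-MLE
y = np.maximum(1.0 + 2.0 * x + eps, 0.0) # tobit would be misspecified here
X = np.column_stack([np.ones(n), x])

def clad_objective(beta):
    """Powell's CLAD criterion: absolute deviations from max(0, X beta)."""
    return np.sum(np.abs(y - np.maximum(X @ beta, 0.0)))

# OLS start: biased under censoring, but a usable seed for the optimizer.
beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
res = minimize(clad_objective, x0=beta0, method="Nelder-Mead")
beta_clad = res.x
```

Because the criterion is piecewise linear rather than smooth, a derivative-free optimizer is used here; production implementations typically use specialized iterative LAD algorithms instead.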

Applications
Tobit models have, for example, been applied to estimate factors that impact grant receipt, including financial transfers distributed to sub-national governments who may apply for these grants. In these cases, grant recipients cannot receive negative amounts, and the data is thus left-censored. For instance, Dahlberg and Johansson (2002) analyse a sample of 115 municipalities (42 of which received a grant). Dubois and Fattore (2011) use a tobit model to investigate the role of various factors in European Union fund receipt by applying Polish sub-national governments. The data may however be left-censored at a point higher than zero, with the risk of mis-specification. Both studies apply Probit and other models to check for robustness. Tobit models have also been applied in demand analysis to accommodate observations with zero expenditures on some goods. In a related application of tobit models, a system of nonlinear tobit regressions models has been used to jointly estimate a brand demand system with homoscedastic, heteroscedastic and generalized heteroscedastic variants.