In statistics, an outlier is an observation point that is distant from other observations. An outlier may be due to variability in the measurement, or it may indicate experimental error; in the latter case, it is sometimes excluded from the data set. An outlier can cause serious problems in statistical analyses.

Outliers can occur by chance in any distribution, but they often indicate either a measurement error or a heavy-tailed distribution in the population. In the former case, one could discard them or use statistics that are robust to outliers; in the latter case, they indicate that the distribution has heavy tails (high kurtosis) and that one should be very cautious in using tools or intuitions that assume a normal distribution. Outliers are frequently caused by mixing two distributions, which may come from two distinct sub-populations, and may indicate a "correct trial" versus "measurement error"; this can be modeled by a mixture model.

In most large samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error, flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations simply lie far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where certain theories might not be valid. However, in large samples, a small number of outliers is to be expected and is not due to any anomalous condition.

Outliers, being the most extreme observations, may include the sample maximum, sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.

Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius, but an oven is at 175 °C, the median of the data will be between 20 °C and 25 °C, but the mean temperature will be between 35.5 °C and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean. Naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set.
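A quick numerical check of this example, using hypothetical readings chosen to match the ranges above:

```python
import statistics

# Hypothetical readings: nine objects between 20 and 25 degrees C,
# plus a 175 degree C oven.
temps = [20.3, 21.1, 22.0, 22.4, 23.0, 23.5, 24.1, 24.6, 24.9, 175.0]

median_temp = statistics.median(temps)  # stays between 20 and 25
mean_temp = statistics.mean(temps)      # pulled toward the oven, between 35.5 and 40
print(median_temp, mean_temp)
```

The median is insensitive to the single extreme reading, while the mean shifts upward by well over ten degrees.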

Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not. However, the mean is generally a more precise estimator.

Occurrence and causes
In the case of normally distributed data, the three sigma rule implies that roughly 1 in 22 observations will differ by twice the standard deviation or more from the mean, and 1 in 370 will deviate by three times the standard deviation. In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and within about 1.5 standard deviations of it (see Poisson distribution), and does not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number.

In general, if the nature of the population distribution is known a priori, it is possible to test whether the number of outliers deviates significantly from what can be expected: for a given cutoff (so samples fall beyond the cutoff with probability $$p$$) of a given distribution, the number of outliers will follow a binomial distribution with parameters $$n$$ and $$p$$, which can generally be well approximated by the Poisson distribution with $$\lambda = pn$$. Thus if one takes a normal distribution with a cutoff 3 standard deviations from the mean, $$p$$ is approximately 0.3%, and thus for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with $$\lambda = 3$$.
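This calculation can be sketched with the standard library alone; the exact two-sided tail probability at 3 sigmas is about 0.27%, which the text rounds to $$\lambda = 3$$ for $$n = 1000$$:

```python
import math

def normal_two_sided_tail(k):
    """P(|Z| > k) for a standard normal Z, via the error function."""
    return 1 - math.erf(k / math.sqrt(2))

n = 1000
p = normal_two_sided_tail(3)   # ~0.0027, the "approximately 0.3%" above
lam = p * n                    # Poisson rate, ~2.7 (rounded to 3 in the text)

# Probability of seeing at most five 3-sigma deviations in 1000 observations:
prob_at_most_5 = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(6))
print(p, lam, prob_at_most_5)
```

With these numbers, five such observations in 1000 is unremarkable: the Poisson model assigns high probability to counts of five or fewer.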

Causes
Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end (King effect).

Univariate Detection
In univariate models, a response variable $$Y$$ is fit to one explanatory variable $$X$$.

There is no rigid mathematical definition of what constitutes an outlier, so determining whether an observation is an outlier is ultimately subjective. There are various methods of outlier detection, both graphical and model-based.

Graphical-based methods commonly include box plots.

Model-based methods assume that the data are from a normal distribution and identify observations deemed "unlikely" on the basis of the mean and standard deviation:
 * Chauvenet's criterion
 * Grubbs' test for outliers
 * Dixon's Q test
 * ASTM E178 Standard Practice for Dealing With Outlying Observations
 * Mahalanobis distance and leverage are often used to detect outliers, especially in the development of linear regression models.

Peirce's criterion
It is proposed to determine in a series of $$m$$ observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as $$n$$ such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations. (Quoted in the editorial note on page 516 to Peirce (1982 edition) from A Manual of Astronomy 2:558 by Chauvenet.)

Tukey's fences
Other methods flag observations based on measures such as the interquartile range. For example, if $$Q_1$$ and $$Q_3$$ are the lower and upper quartiles respectively, then one could define an outlier to be any observation outside the range:
 * $$ \big[ Q_1 - k (Q_3 - Q_1 ), Q_3 + k (Q_3 - Q_1 ) \big]$$

for some nonnegative constant $$k$$. John Tukey proposed this test, where $$k=1.5$$ indicates an "outlier", and $$k=3$$ indicates data that is "far out".
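A minimal sketch of Tukey's fences using NumPy; the sample data are hypothetical:

```python
import numpy as np

def tukey_fences(data, k=1.5):
    """Return a boolean mask flagging observations outside Tukey's fences."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1                          # interquartile range Q3 - Q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (data < lower) | (data > upper)

data = np.array([2.0, 2.1, 2.3, 2.5, 2.6, 2.8, 3.0, 3.1, 9.5])
print(data[tukey_fences(data)])           # k = 1.5 flags the 9.5
print(data[tukey_fences(data, k=3.0)])    # 9.5 is beyond even the k = 3 fences
```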

In anomaly detection
In the data mining task of anomaly detection, other approaches are distance-based and density-based methods, such as the local outlier factor (LOF); most of them use the distance to the k-nearest neighbors to label observations as outliers or non-outliers.

Modified Thompson Tau test
The modified Thompson Tau test is a method used to determine if an outlier exists in a data set. The strength of this method lies in the fact that it takes into account a data set's standard deviation and average, and provides a statistically determined rejection zone, thus providing an objective method to determine if a data point is an outlier. Note: Although intuitively appealing, this method appears to be unpublished (it is not described in Thompson (1985)) and one should use it with caution.

How it works: First, a data set's average is determined. Next, the absolute deviation between each data point and the average is determined. Thirdly, a rejection region is determined using the formula: $$\text{Rejection Region} = \frac{t_{\alpha/2}\,(n-1)}{\sqrt{n}\,\sqrt{n-2+t_{\alpha/2}^2}}$$, where $${t_{\alpha/2}}$$ is the critical value from the Student $$t$$ distribution with $$n-2$$ degrees of freedom, $$n$$ is the sample size, and $$s$$ is the sample standard deviation. To determine if a value is an outlier: Calculate $$\delta = |(X - \text{mean}(X)) / s|$$. If $$\delta$$ > Rejection Region, the data point is an outlier. If $$\delta$$ ≤ Rejection Region, the data point is not an outlier.

The modified Thompson Tau test is used to find one outlier at a time (the largest value of $$\delta$$ is removed if it is an outlier). That is, if a data point is found to be an outlier, it is removed from the data set and the test is applied again with a new average and rejection region. This process continues until no outliers remain in the data set.
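A sketch of this iterative procedure, assuming SciPy for the Student $$t$$ critical value; the data set is hypothetical, and given the test's unclear provenance the result should be treated with the caution noted above:

```python
import numpy as np
from scipy import stats

def modified_thompson_tau(data, alpha=0.05):
    """Repeatedly test the point with the largest absolute deviation and
    remove it while it exceeds the tau rejection region."""
    data, outliers = list(data), []
    while len(data) > 2:
        n = len(data)
        t_crit = stats.t.ppf(1 - alpha / 2, n - 2)
        # Rejection region tau, as in the formula above:
        tau = t_crit * (n - 1) / (np.sqrt(n) * np.sqrt(n - 2 + t_crit**2))
        mean, s = np.mean(data), np.std(data, ddof=1)
        deltas = np.abs((np.array(data) - mean) / s)
        i = int(np.argmax(deltas))
        if deltas[i] > tau:
            outliers.append(data.pop(i))  # remove and re-test with new mean, s
        else:
            break
    return data, outliers

kept, removed = modified_thompson_tau([9.9, 10.0, 10.1, 10.05, 9.95, 14.2])
print(removed)  # [14.2]
```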

Some work has also examined outliers for nominal (or categorical) data. In the context of a set of examples (or instances) in a data set, instance hardness measures the probability that an instance will be misclassified ($$1-p(y|x)$$ where $$y$$ is the assigned class label and $$x$$ represents the input attribute values for an instance in the training set $$t$$). Ideally, instance hardness would be calculated by summing over the set of all possible hypotheses $$H$$:

$$\begin{align}IH(\langle x, y\rangle) &= \sum_{h \in H} (1 - p(y|x, h))\,p(h|t)\\ &= \sum_{h \in H} p(h|t) - p(y|x, h)\,p(h|t)\\ &= 1 - \sum_{h \in H} p(y|x, h)\,p(h|t).\end{align}$$

Practically, this formulation is infeasible, as $$H$$ is potentially infinite and calculating $$p(h|t)$$ is intractable for many algorithms. Thus, instance hardness can be approximated using a diverse subset $$L \subset H$$:

$$IH_L (\langle x,y\rangle) = 1 - \frac{1}{|L|} \sum_{j=1}^{|L|} p(y|x, g_j(t, \alpha))$$

where $$g_j(t, \alpha)$$ is the hypothesis induced by learning algorithm $$g_j$$ trained on training set $$t$$ with hyperparameters $$\alpha$$. Instance hardness provides a continuous value for determining whether an instance is an outlier.
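A toy sketch of the approximation: the three "hypotheses" below are hand-made stand-ins for models induced by diverse learning algorithms on a training set, not real trained classifiers.

```python
import math

# Hypothetical hypotheses: each maps an input x in [0, 1] to p(y = 1 | x).
# In practice these would be g_j(t, alpha) for diverse learning algorithms.
L = [
    lambda x: 1.0 if x > 0.5 else 0.0,               # hard threshold
    lambda x: x,                                     # linear probability
    lambda x: 0.5 + 0.5 * math.tanh(8 * (x - 0.5)),  # smooth threshold
]

def instance_hardness(x, y):
    """IH_L(<x, y>) = 1 minus the average of p(y | x, h) over h in L."""
    probs = [h(x) if y == 1 else 1 - h(x) for h in L]
    return 1 - sum(probs) / len(L)

print(instance_hardness(0.9, 1))  # label agrees with all hypotheses: low hardness
print(instance_hardness(0.9, 0))  # label disagrees: high hardness, likely outlier
```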

Multivariate Detection
In multivariate models, a response variable $$Y$$ is fit to multiple explanatory variables $$X$$s.

While there is no rigid mathematical definition of what constitutes an outlier, an outlying point is marked by the unusualness of its $$X$$ values or by the unusualness of its $$Y$$ value conditional on its $$X$$ values.

The unusualness is quantified by leverage and discrepancy, respectively. In linear regression, leverage measures the unusualness of an observation's $$X$$ values by calculating the distance between its $$X$$s and the $$X$$s of the remaining observations. In linear regression, discrepancy measures the unusualness of an observation's $$Y$$ value conditional on its $$X$$ values by calculating the observation's residual.

Influence is a value that combines leverage and discrepancy to detect outliers by measuring the "influence" of an observation on the fit of a model. The heuristic formula relating the three quantities is: influence = leverage × discrepancy.



Cook's Distance
Cook's distance is commonly used to estimate the influence of a data point while performing a least-squares regression analysis.

In 1977, Cook proposed measuring the "distance" between the least-squares estimate $$\hat{\beta}$$ based on all observations and the estimate $$\hat{\beta}_{[i]}$$ obtained when the $$i$$th subject is removed. His approach produced a measure, independent of the scales of the explanatory variables, that relies on the Mahalanobis distance, which corresponds to the number of standard deviations a point lies from the mean of its distribution.

Cook's distance is traditionally expressed as follows:

$$C_i = \frac{(\hat{\beta} - \hat{\beta}_{[i]})^T X^{T}X (\hat{\beta} - \hat{\beta}_{[i]})}{(p + 1)\hat{\sigma}^2}$$

where $$X$$ is the design matrix of size $$n \times (p + 1)$$, $$n$$ is the total number of observations, $$p$$ is the number of explanatory variables, and $$\hat{\sigma}$$ is the estimated residual standard error.

To directly see Cook's distance as a measure of influence, it can alternatively be expressed as:

$$C_i = \frac{h_i}{1 - h_i} \cdot \frac{r_i^2}{p+1}$$

or:

$$C_i = \frac{1}{(\frac{1}{h_i} - 1)(p + 1)} \cdot r_i^2$$

where $$h_i$$ is the leverage, $$p$$ is the number of explanatory variables, and $$r_i$$ is the standardized residual. Cook's distance is indeed a measure of influence equal to the product of leverage and discrepancy, as $$\frac{1}{(\frac{1}{h_i} - 1)(p + 1)}$$ describes the leverage and $$r_i^2$$ describes the discrepancy.

Cook's distance will be large when either the discrepancy or the leverage is large. A data point may be an outlier if it has a large Cook's distance, especially when compared to the Cook's distance of other points in the data set.
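A minimal NumPy sketch on synthetic data with one injected gross error, computing Cook's distance through the leverage and standardized-residual form above:

```python
import numpy as np

# Synthetic data for illustration; observation 0 gets a gross error.
rng = np.random.default_rng(0)
n, p = 30, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # n x (p + 1) design matrix
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)
y[0] += 10.0                                     # inject a gross error

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)    # leverages (hat matrix diagonal)
e = y - X @ beta_hat                             # residuals
sigma2 = e @ e / (n - p - 1)                     # estimated error variance
r = e / np.sqrt(sigma2 * (1 - h))                # standardized residuals
cooks = (r**2 / (p + 1)) * (h / (1 - h))         # Cook's distance: leverage x discrepancy

print(int(np.argmax(cooks)))                     # the corrupted observation stands out
```

The leverage form is algebraically identical to the $$\hat{\beta} - \hat{\beta}_{[i]}$$ definition, so no leave-one-out refitting is needed.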



Residuals for Diagnosing Outliers in Linear Regression Models
In linear regression, the observations are modeled according to:

$$Y = X \beta + e$$, with $$e \sim N(0, \sigma^2 I_n)$$

where $$Y$$ is a vector of the observed response variables, $$X$$ is the $$n \times (p + 1)$$ design matrix, $$n$$ is the total number of observations, $$p$$ is the number of explanatory variables, and $$e$$ is the error. The model makes the following assumptions:


 * 1) Errors are independent
 * 2) The expected value of the errors is zero
 * 3) The variance of the errors is constant
 * 4) Errors are normally distributed

Looking at residuals is a common way of detecting outliers. The general formulation of the residual quantifies the unusualness of the response variable $$Y$$ given the explanatory variables.

Residual
The residual is the difference between the observed value of the response variable for the $$i$$th subject and the predicted value of the $$i$$th subject.

The residual for the $$i$$th subject is denoted:

$$\hat{e}_i = y_i - \hat{y}_i$$

where $$y_i$$ is the observed value of the response variable for the $$i$$th subject and $$\hat{y}_i$$ is the predicted value for the $$i$$th subject.

Under the linear regression model assumptions, the residuals are normally distributed with distribution $$N(0, \sigma^2 (I - X(X^T X)^{-1} X^T))$$, where $$X$$ is the design matrix for the linear regression. It is important to note that residuals are correlated and have different variances. Even if the errors under the assumptions of a general linear model have equal variances, the same is not typically true for residuals. Residuals have covariance matrix $$\sigma^2 (I - X(X^T X)^{-1} X^T)$$, which is equivalent to $$\sigma^2 (I - H)$$, where $$H$$ is the hat matrix containing leverage values along its diagonal. Therefore, the variance of the residual for the $$i$$th observation is $$\operatorname{Var}(\hat{e}_i) = \sigma^2 (1 - h_i)$$, where $$\hat{e}_i$$ is the residual, $$\sigma^2$$ is the error variance, and $$h_i$$ is the leverage.

Thus, observations with high leverage tend to have smaller residuals. This makes sense intuitively, as these observations can pull the regression surface towards them.

Standardized Residuals
Standardized residuals are also known as internally studentized residuals. Standardized residuals are used to compare residuals on a common scale.

The standardized residual of the $$i$$th subject is denoted:

$$r_i = \frac{\hat{e}_i}{\hat{\sigma}\sqrt{1 - h_i}}$$

where $$\hat{\sigma}$$ is the estimated standard deviation of the errors, $$\hat{e}_i$$ is the residual, and $$h_i$$ is the leverage of the $$i$$th subject.

Note that because the numerator and denominator are not independent, standardized residuals do not follow a $$t$$-distribution.

Predicted Residuals
Predicted residuals are used to accommodate the correlation of error estimates with the residuals. It is calculated from leave-one-out analysis, a form of cross-validation in which regression analyses are successively run with an observation left out.

The predicted residual for the $$i$$th subject is denoted:

$$\hat{e}_{[i]} = y_i - x_i^T \hat{\beta}_{[i]}$$

where $$x_i^T$$ is the $$i$$th row of the original design matrix $$X$$ and $$\hat{\beta}_{[i]}$$ is the linear regression estimate after deleting the $$i$$th row.

Under the assumptions for a linear model, predicted residuals are normally distributed, with distribution $$\hat{e}_{[i]} \sim N(0, \sigma^2 (1 + x_i^T(X_{[i]}^T X_{[i]})^{-1} x_i))$$.

Externally Studentized Residuals
Externally studentized residuals are also known as standardized predicted residuals, or simply studentized residuals. The measure accounts for the differing variances of the predicted residuals.

The studentized residual for the $$i$$th subject is denoted:

$$t_i = \frac{\hat{e}_{[i]} \sqrt{1 - h_i}}{\sqrt{RSS_{[i]} / (n - p - 2)}} = \frac{\hat{e}_i}{\hat{\sigma}_{[i]}\sqrt{1 - h_i}}$$

where $$n$$ is the total number of subjects, $$p$$ is the number of explanatory variables, $$\hat{e}_{[i]}$$ is the predicted residual, and $$h_i$$ is the leverage of the $$i$$th subject. $$RSS_{[i]}$$ is the residual sum of squares and $$\hat{\sigma}_{[i]}$$ is the estimated residual standard error after deleting the $$i$$th subject.

The studentized residuals $$t_i$$ are $$t$$-distributed with $$n - p - 2$$ degrees of freedom.
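The residual types above can all be computed from a single fit; the sketch below (synthetic data) uses the standard leave-one-out identities $$\hat{e}_{[i]} = \hat{e}_i / (1 - h_i)$$ and $$RSS_{[i]} = RSS - \hat{e}_i^2/(1 - h_i)$$, which avoid refitting the model $$n$$ times:

```python
import numpy as np

# Synthetic data for illustration (simple linear regression, p = 1).
rng = np.random.default_rng(1)
n, p = 20, 1
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # n x (p + 1) design matrix
y = X @ np.array([0.5, 1.5]) + rng.normal(scale=0.3, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)           # leverages (hat matrix diagonal)
e = y - X @ beta_hat                                    # ordinary residuals
sigma2 = e @ e / (n - p - 1)                            # estimated error variance

r = e / np.sqrt(sigma2 * (1 - h))                       # standardized (internally studentized)
e_pred = e / (1 - h)                                    # predicted (leave-one-out) residuals
rss_del = e @ e - e**2 / (1 - h)                        # RSS with observation i deleted
sigma2_del = rss_del / (n - p - 2)                      # deleted error-variance estimates
t = e / np.sqrt(sigma2_del * (1 - h))                   # externally studentized residuals
```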

Working with outliers
The choice of how to deal with an outlier should depend on the cause. Some estimators are highly sensitive to outliers, notably estimation of covariance matrices.

Residuals
It only makes sense to work with residuals under the linear regression model described above, $$Y = X \beta + e$$ with $$e \sim N(0, \sigma^2 I_n)$$, and its four assumptions: independent errors, zero-mean errors, constant variance, and normally distributed errors.

Under the assumptions stated, the traditional formulation of the residual is a common way of quantifying the unusualness of a response variable given the explanatory variables. However, because the residuals are correlated and have different variances, it often makes sense to work with standardized residuals, predicted residuals, or studentized residuals.

Standardized residuals are used to get around the different variances within the residuals. Predicted residuals are used to get around the correlation within the residuals. Studentized residuals are used to get around both the correlation and different variances within the residuals.

Retention
Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes, and their presence alone is not grounds for discarding them. Instead, the application should use a classification algorithm that is robust to outliers to model data with naturally occurring outlier points.

Exclusion
Deletion of outlier data is a controversial practice frowned upon by many scientists and science instructors; while mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small data sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known. An outlier resulting from an instrument reading error may be excluded, but it is desirable that the reading is at least verified.

The two common approaches to exclude outliers are truncation (or trimming) and Winsorising. Trimming discards the outliers whereas Winsorising replaces the outliers with the nearest "nonsuspect" data. Exclusion can also be a consequence of the measurement process, such as when an experiment is not entirely capable of measuring such extreme values, resulting in censored data.
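A minimal sketch of the two approaches on hypothetical data (SciPy also ships a ready-made `scipy.stats.mstats.winsorize`):

```python
import numpy as np

def trim(data, lower_pct=5, upper_pct=95):
    """Truncation: discard values outside the given percentiles."""
    lo, hi = np.percentile(data, [lower_pct, upper_pct])
    return data[(data >= lo) & (data <= hi)]

def winsorize(data, lower_pct=5, upper_pct=95):
    """Winsorising: clamp values to the percentiles instead of dropping them."""
    lo, hi = np.percentile(data, [lower_pct, upper_pct])
    return np.clip(data, lo, hi)

data = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 50.0])
print(trim(data))       # shorter array: values beyond the percentiles dropped
print(winsorize(data))  # same length: extremes replaced by percentile values
```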

In regression problems, an alternative approach may be to only exclude points which exhibit a large degree of influence on the estimated coefficients, using a measure such as Cook's distance.

If a data point (or points) is excluded from the data analysis, this should be clearly stated on any subsequent report.

Non-normal distributions
The possibility should be considered that the underlying distribution of the data is not approximately normal, having "fat tails". For instance, when sampling from a Cauchy distribution, the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution. Even a slight difference in the fatness of the tails can make a large difference in the expected number of extreme values.

Set-membership uncertainties
A set-membership approach considers that the uncertainty corresponding to the $$i$$th measurement of an unknown random vector $$x$$ is represented by a set $$X_i$$ (instead of a probability density function). If no outliers occur, $$x$$ should belong to the intersection of all the $$X_i$$. When outliers occur, this intersection can be empty, and we should relax a small number of the sets $$X_i$$ (as few as possible) in order to avoid any inconsistency. This can be done using the notion of the q-relaxed intersection, which is the set of all $$x$$ that belong to all of the sets except at most $$q$$ of them. Sets $$X_i$$ that do not intersect the q-relaxed intersection could be suspected to be outliers.
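For illustration, a sketch of the q-relaxed intersection in the one-dimensional case, where each $$X_i$$ is an interval; points covered by at least $$n - q$$ intervals form the q-relaxed intersection:

```python
def q_relaxed_intersection(intervals, q):
    """Return the q-relaxed intersection of 1-D intervals as a list of
    (start, end) pieces: points covered by all but at most q intervals."""
    events = []
    for lo, hi in intervals:
        events.append((lo, 1))    # interval opens
        events.append((hi, -1))   # interval closes
    events.sort()                 # sweep left to right
    n, depth, start, pieces = len(intervals), 0, None, []
    for pos, delta in events:
        depth += delta            # number of intervals covering pos
        inside = depth >= n - q
        if inside and start is None:
            start = pos
        elif not inside and start is not None:
            pieces.append((start, pos))
            start = None
    return pieces

# Three consistent measurements and one outlier interval:
intervals = [(0, 4), (1, 5), (2, 6), (10, 11)]
print(q_relaxed_intersection(intervals, 0))  # plain intersection: empty
print(q_relaxed_intersection(intervals, 1))  # relaxing one set: [(2, 4)]
```

Here the empty plain intersection signals an inconsistency, and the interval (10, 11), which misses the 1-relaxed intersection, is the natural suspect.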

Alternative models
In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a hierarchical Bayes model, or a mixture model.