Welch's t-test

In statistics, Welch's t-test, or unequal variances t-test, is a two-sample location test used to test the (null) hypothesis that two populations have equal means. It is named for its creator, Bernard Lewis Welch, and is an adaptation of Student's t-test that is more reliable when the two samples have unequal variances and possibly unequal sample sizes. These tests are often referred to as "unpaired" or "independent samples" t-tests, as they are typically applied when the statistical units underlying the two samples being compared are non-overlapping. Given that Welch's t-test has been less popular than Student's t-test and may be less familiar to readers, a more informative name is "Welch's unequal variances t-test", or "unequal variances t-test" for brevity.

Assumptions
Student's t-test assumes that the sample means being compared for two populations are normally distributed, and that the populations have equal variances. Welch's t-test is designed for unequal population variances, but the assumption of normality is maintained. Welch's t-test is an approximate solution to the Behrens–Fisher problem.

Calculations
Welch's t-test defines the statistic t by the following formula:


 * $$t = \frac{\Delta\overline{X}}{s_{\Delta\bar{X}}} = \frac{\overline{X}_1 - \overline{X}_2}{\sqrt{ {s_{\bar{X}_1}^2} + {s_{\bar{X}_2}^2} }}\,$$


 * $$s_{\bar{X}_i} = {s_i \over \sqrt{N_i}} \,$$

where $$\overline{X}_i$$ and $$s_{\bar{X}_i}$$ are the $$i^\text{th}$$ sample mean and its standard error, with $$s_i$$ denoting the corrected sample standard deviation, and sample size $$N_i$$. Unlike in Student's t-test, the denominator is not based on a pooled variance estimate.
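As a concrete sketch (with hypothetical data, using only the Python standard library), the statistic can be computed directly from the definitions above:

```python
import math
import statistics

def welch_t(sample1, sample2):
    """Welch's t statistic: the difference of sample means divided by
    the unpooled standard error of that difference."""
    n1, n2 = len(sample1), len(sample2)
    mean1, mean2 = statistics.mean(sample1), statistics.mean(sample2)
    # corrected sample variances (n - 1 denominator) -- no pooling
    var1, var2 = statistics.variance(sample1), statistics.variance(sample2)
    std_err = math.sqrt(var1 / n1 + var2 / n2)
    return (mean1 - mean2) / std_err

# made-up samples for illustration
t = welch_t([19.8, 20.4, 19.6, 17.8, 18.5, 18.9],
            [28.2, 26.6, 20.1, 23.3, 25.2])
```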

The degrees of freedom $$\nu$$ associated with this variance estimate is approximated using the Welch–Satterthwaite equation:



$$\nu \quad \approx \quad \frac{\left( \; \frac{s_1^2}{N_1} \; + \; \frac{s_2^2}{N_2} \; \right)^2 } { \quad \frac{s_1^4}{N_1^2 \nu_1} \; + \; \frac{s_2^4}{N_2^2 \nu_2 } \quad }. $$

This expression can be simplified when $$N_1 = N_2$$:

$$\nu \approx \frac {s_{\Delta\bar{X}}^4} {\nu_1^{-1} s_{\bar{X}_1}^4 + \nu_2^{-1} s_{\bar{X}_2}^4}. $$

Here, $$\nu_i = N_i-1$$ is the degrees of freedom associated with the $$i^\text{th}$$ variance estimate.

The statistic t approximately follows a t-distribution because the variance estimate in the denominator approximately follows a scaled chi-squared distribution. The approximation improves when both $$N_1$$ and $$N_2$$ are larger than 5.
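The Welch–Satterthwaite approximation can likewise be sketched directly from the corrected sample variances (the helper name and any data are illustrative only):

```python
import statistics

def welch_satterthwaite_df(sample1, sample2):
    """Approximate degrees of freedom for Welch's t-test."""
    n1, n2 = len(sample1), len(sample2)
    var1, var2 = statistics.variance(sample1), statistics.variance(sample2)
    # squared standard errors of each sample mean
    se1_sq, se2_sq = var1 / n1, var2 / n2
    numerator = (se1_sq + se2_sq) ** 2
    denominator = se1_sq ** 2 / (n1 - 1) + se2_sq ** 2 / (n2 - 1)
    return numerator / denominator  # generally a non-integer real number
```

With equal sample sizes and equal sample variances this reduces to $$N_1 + N_2 - 2$$, the degrees of freedom of Student's t-test.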

Statistical test
Once t and $$\nu$$ have been computed, these statistics can be used with the t-distribution to test one of two possible null hypotheses:
 * that the two population means are equal, in which case a two-tailed test is applied; or
 * that one of the population means is greater than or equal to the other, in which case a one-tailed test is applied.

The approximate degrees of freedom are real numbers $$\left(\nu\in\mathbb{R}^+\right)$$ and are used as such in statistics-oriented software, whereas they are rounded down to the nearest integer in spreadsheets.
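In practice the whole test is available in standard statistical software; for example, SciPy's `ttest_ind` performs Welch's t-test (two-tailed by default) when `equal_var=False` is passed. The samples below are made up for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
a = rng.normal(loc=0.0, scale=1.0, size=15)   # hypothetical sample 1
b = rng.normal(loc=1.0, scale=3.0, size=40)   # hypothetical sample 2, larger variance

# equal_var=False selects Welch's t-test rather than Student's pooled test
result = stats.ttest_ind(a, b, equal_var=False)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```

In recent SciPy versions a one-tailed test can be requested with the `alternative='less'` or `alternative='greater'` argument.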

Advantages and limitations
Welch's t-test is more robust than Student's t-test and maintains type I error rates close to nominal for unequal variances and for unequal sample sizes under normality. Furthermore, the power of Welch's t-test comes close to that of Student's t-test, even when the population variances are equal and sample sizes are balanced. Welch's t-test can be generalized to more than two samples, giving a procedure that is more robust than one-way analysis of variance (ANOVA).

It is not recommended to pre-test for equal variances and then choose between Student's t-test or Welch's t-test. Rather, Welch's t-test can be applied directly, without any substantial disadvantage relative to Student's t-test, as noted above. Welch's t-test remains robust for skewed distributions when sample sizes are large. Reliability decreases for skewed distributions with smaller samples, where one could possibly perform Welch's t-test on ranked data.
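The robustness claim can be illustrated with a small Monte Carlo sketch (parameters chosen arbitrarily, not from the source): sampling repeatedly under a true null hypothesis with unequal variances and unequal sample sizes, Welch's test keeps its type I error rate near the nominal 5%, while Student's pooled test rejects far too often:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
alpha, reps = 0.05, 2000
welch_rejections = student_rejections = 0

for _ in range(reps):
    # true null: both population means are 0, but variances and sizes differ
    a = rng.normal(0.0, 10.0, size=10)   # small sample, large variance
    b = rng.normal(0.0, 1.0, size=40)    # large sample, small variance
    if stats.ttest_ind(a, b, equal_var=False).pvalue < alpha:
        welch_rejections += 1
    if stats.ttest_ind(a, b, equal_var=True).pvalue < alpha:
        student_rejections += 1

print(f"Welch type I error:   {welch_rejections / reps:.3f}")    # near 0.05
print(f"Student type I error: {student_rejections / reps:.3f}")  # substantially inflated
```

The direction of the failure matters: when the smaller sample has the larger variance, the pooled variance underestimates the standard error of the mean difference, so Student's test becomes liberal.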