
Cliff's delta, denoted $$d$$, is a statistical measure of effect size, that is, of how different two distributions are. It was originally developed by Norman Cliff for use with ordinal data. In short, $$d$$ measures how often the values in one distribution are larger than the values in a second distribution. Crucially, it requires no assumptions about the shape or spread of the two distributions.

The sample estimate $$d$$ is given by:

$$d = \frac{\#(x_i > x_j) - \#(x_i < x_j)}{mn}$$

where the two samples are of size $$n$$ and $$m$$ with items $$x_i$$ and $$x_j$$, respectively, and $$\#$$ denotes the number of pairs for which the bracketed condition holds.
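A direct way to compute $$d$$ is to count the pairwise comparisons explicitly. The following Python sketch (the function name cliffs_delta is illustrative, not taken from any particular library) implements the formula above:

```python
def cliffs_delta(xs, ys):
    """Sample estimate of Cliff's delta, computed directly from the definition."""
    n, m = len(xs), len(ys)
    greater = sum(1 for x in xs for y in ys if x > y)  # #(x_i > x_j)
    less = sum(1 for x in xs for y in ys if x < y)     # #(x_i < x_j)
    return (greater - less) / (n * m)


# Values in the first sample tend to be larger, so d is positive.
print(cliffs_delta([3, 4, 5, 6], [1, 2, 3, 4]))  # 0.75
```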

$$d$$ is a linear transformation of the Mann-Whitney $$U$$ statistic, but unlike $$U$$ its sign also captures the direction of the difference. Given the Mann-Whitney $$U$$, $$d$$ is:

$$d = \frac{2U}{mn} - 1$$
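As a numerical check (assuming SciPy is available), the same value can be recovered from the Mann-Whitney $$U$$ statistic; recent versions of SciPy's mannwhitneyu return $$U$$ for the first sample, with ties counted as half a win, which is the convention the formula above requires:

```python
import numpy as np
from scipy.stats import mannwhitneyu

xs = np.array([3, 4, 5, 6])
ys = np.array([1, 2, 3, 4])

# In recent SciPy versions the returned statistic is U for the first
# sample (wins of xs over ys, with ties counted as half a win).
u, _ = mannwhitneyu(xs, ys, alternative="two-sided")

n, m = len(xs), len(ys)
d = 2 * u / (n * m) - 1
print(d)  # 0.75, matching the direct pairwise computation
```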

The R package orddom calculates $$d$$ as well as bootstrap confidence intervals.