Average variance extracted
In statistics (classical test theory), average variance extracted (AVE) is a measure of the amount of variance that is captured by a construct in relation to the amount of variance due to measurement error.

History
The average variance extracted was first proposed by Fornell & Larcker (1981).

Calculation
The average variance extracted can be calculated as follows:


$$ \text{AVE} = \frac{ \sum_{i=1}^k \lambda_i^2 }{ \sum_{i=1}^k \lambda_i^2 + \sum_{i=1}^k \operatorname{Var}(e_i) }$$

Here, $$k$$ is the number of items, $$\lambda_i$$ is the factor loading of item $$i$$, and $$\operatorname{Var}(e_i)$$ is the variance of the error of item $$i$$.
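The formula above can be sketched in a few lines of Python. This is an illustrative helper (the function name and inputs are assumptions, not from any particular library); with standardized loadings, each error variance is $$1 - \lambda_i^2$$, so the denominator reduces to $$k$$.

```python
def average_variance_extracted(loadings, error_variances):
    """AVE = (sum of squared loadings) / (sum of squared loadings + sum of error variances).

    loadings        -- factor loadings lambda_i of the k items
    error_variances -- error variances Var(e_i) of the k items
    """
    sum_sq_loadings = sum(l ** 2 for l in loadings)
    return sum_sq_loadings / (sum_sq_loadings + sum(error_variances))


# Example with standardized loadings, where Var(e_i) = 1 - lambda_i^2:
loadings = [0.8, 0.7, 0.9]
errors = [1 - l ** 2 for l in loadings]  # [0.36, 0.51, 0.19]
ave = average_variance_extracted(loadings, errors)  # about 0.647
```

A common rule of thumb holds that an AVE of at least 0.5 indicates the construct explains more variance than measurement error, so the example construct above would pass.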

Role for assessing discriminant validity
The average variance extracted has often been used to assess discriminant validity based on the following rule of thumb: the positive square root of the AVE for each latent variable should exceed that variable's highest correlation with any other latent variable. If that is the case, discriminant validity is established at the construct level. This rule is known as the Fornell–Larcker criterion. In simulation studies, however, the criterion did not prove reliable for composite-based structural equation models (e.g., PLS-PM), but did prove reliable for factor-based structural equation models (e.g., Amos, PLSF-SEM).
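The Fornell–Larcker check described above can be sketched as follows. This is a minimal illustration (the function name and data are hypothetical): for each construct, the square root of its AVE is compared against its largest absolute correlation with any other construct.

```python
import math

def fornell_larcker(aves, corr):
    """Apply the Fornell-Larcker criterion.

    aves -- list of AVE values, one per latent variable
    corr -- square inter-construct correlation matrix (list of lists)

    Returns a list of booleans: True where sqrt(AVE) exceeds the
    construct's highest absolute correlation with any other construct.
    """
    n = len(aves)
    results = []
    for i in range(n):
        max_corr = max(abs(corr[i][j]) for j in range(n) if j != i)
        results.append(math.sqrt(aves[i]) > max_corr)
    return results


# Hypothetical example: two constructs correlated at 0.75.
aves = [0.64, 0.55]
corr = [[1.0, 0.75],
        [0.75, 1.0]]
# sqrt(0.64) = 0.80 > 0.75, but sqrt(0.55) ~ 0.74 < 0.75
print(fornell_larcker(aves, corr))  # [True, False]
```

In this example, discriminant validity would be questioned for the second construct, since its square-rooted AVE does not exceed its correlation with the first.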

Related coefficients
Related coefficients are tau-equivalent reliability ($$\rho_T$$; traditionally known as "Cronbach's $$\alpha$$") and congeneric reliability ($$\rho_{C}$$; also known as composite reliability) which can be used to evaluate the reliability of tau-equivalent and congeneric measurement models, respectively.