
The parameters and variables of factor analysis can be given a geometrical interpretation. The data ($$z_{ai}$$), the factors ($$F_{pi}$$) and the errors ($$\varepsilon_{ai}$$) can be viewed as vectors in an $$N_i$$-dimensional Euclidean space (sample space), represented as $$\mathbf{z}_a$$, $$\mathbf{F}_p$$ and $$\boldsymbol{\varepsilon}_a$$ respectively. Since the data is standardized, the data vectors are of unit length ($$\mathbf{z}_a\cdot\mathbf{z}_a=1$$). The factor vectors define an $$N_p$$-dimensional linear subspace (i.e. a hyperplane) in this space, upon which the data vectors are projected orthogonally. This follows from the model equation
 * $$\mathbf{z}_a=\sum_p \ell_{ap} \mathbf{F}_p+\boldsymbol{\varepsilon}_a$$

and the independence of the factors and the errors: $$\mathbf{F}_p\cdot\boldsymbol{\varepsilon}_a=0$$. In the above example, the hyperplane is just a 2-dimensional plane defined by the two factor vectors. The projection of the data vectors onto the hyperplane is given by
 * $$\hat{\mathbf{z}}_a=\sum_p \ell_{ap}\mathbf{F}_p$$

and the errors are vectors from that projected point to the data point, perpendicular to the hyperplane. The goal of factor analysis is to find a hyperplane which is a "best fit" to the data in some sense. Any set of $$N_p$$ linearly independent factor vectors lying in the hyperplane will serve to define it, so we are free to specify them as orthonormal ($$\mathbf{F}_p\cdot \mathbf{F}_q=\delta_{pq}$$) with no loss of generality. After a suitable set of factors is found, they may also be arbitrarily rotated within the hyperplane: any such rotation of the factor vectors defines the same hyperplane and is therefore also a solution. As a result, in the above example, in which the fitting hyperplane is two-dimensional, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence, or whether the factors are linear combinations of both, without an outside argument.
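
As an illustration, the projection and the perpendicularity of the errors can be checked numerically. The following is a minimal sketch with synthetic data; all dimensions and values are invented for the example (Python/NumPy assumed):

```python
import numpy as np

# Hypothetical sizes: N_i observations (sample-space dimension),
# N_a standardized variables, N_p factors. All invented for illustration.
N_i, N_a, N_p = 100, 4, 2
rng = np.random.default_rng(0)

# Orthonormal factor vectors F_p spanning the fitting hyperplane
# (QR of a random matrix yields an orthonormal basis).
F, _ = np.linalg.qr(rng.standard_normal((N_i, N_p)))

# Data vectors z_a, centered and scaled to unit length (z_a . z_a = 1).
Z = rng.standard_normal((N_i, N_a))
Z = (Z - Z.mean(axis=0)) / np.linalg.norm(Z - Z.mean(axis=0), axis=0)

# Loadings l_ap are the coordinates of z_a in the factor basis; the
# orthogonal projection onto the hyperplane is z_hat_a = sum_p l_ap F_p.
L = F.T @ Z            # shape (N_p, N_a)
Z_hat = F @ L          # projected data vectors
E = Z - Z_hat          # error vectors

# Errors are perpendicular to the hyperplane: F_p . eps_a = 0.
print(np.allclose(F.T @ E, 0.0))   # True
```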

The data vectors $$\mathbf{z}_a$$ have unit length. The correlation matrix for the data is given by $$r_{ab}=\mathbf{z}_a\cdot\mathbf{z}_b$$; geometrically, the element $$r_{ab}$$ is the cosine of the angle between the two data vectors $$\mathbf{z}_a$$ and $$\mathbf{z}_b$$. The diagonal elements will clearly be 1's and the off-diagonal elements will have absolute values less than or equal to unity. The "reduced correlation matrix" is defined as
 * $$\hat{r}_{ab}=\hat{\mathbf{z}}_a\cdot\hat{\mathbf{z}}_b$$.
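
For instance (an invented two-variable example in a three-dimensional sample space), the dot product of two unit-length data vectors gives both the correlation and the cosine of the angle between them:

```python
import numpy as np

# Two hypothetical unit-length data vectors in a 3-dimensional sample space.
z_a = np.array([1.0, 0.0, 0.0])
z_b = np.array([0.6, 0.8, 0.0])

r_ab = z_a @ z_b                      # correlation = cosine of the angle
angle = np.degrees(np.arccos(r_ab))
print(r_ab, angle)                    # 0.6, approximately 53.13 degrees

# Diagonal elements are 1 and, by Cauchy-Schwarz, |r_ab| <= 1.
print(z_a @ z_a, abs(r_ab) <= 1.0)    # 1.0 True
```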

The goal of factor analysis is to choose the fitting hyperplane such that the reduced correlation matrix reproduces the correlation matrix as nearly as possible, except for the diagonal elements of the correlation matrix which are known to have unit value. In other words, the goal is to reproduce as accurately as possible the cross-correlations in the data. Specifically, for the fitting hyperplane, the mean square error in the off-diagonal components
 * $$\varepsilon^2=\sum_{a,b\ne a} \left(r_{ab}-\hat{r}_{ab}\right)^2$$

is to be minimized, and this is accomplished by minimizing it with respect to a set of orthonormal factor vectors. It can be seen that

 * $$r_{ab}-\hat{r}_{ab}=\boldsymbol{\varepsilon}_a\cdot\boldsymbol{\varepsilon}_b$$

The term on the right is just the covariance of the errors. In the model, the error covariance is stated to be a diagonal matrix, and so the above minimization problem will in fact yield a "best fit" to the model: it will yield a sample estimate of the error covariance whose off-diagonal components are minimized in the mean square sense. Since the $$\hat{\mathbf{z}}_a$$ are orthogonal projections of the data vectors, their lengths will be less than or equal to the lengths of the data vectors themselves, which are unity. The squares of these lengths are just the diagonal elements of the reduced correlation matrix, and these diagonal elements are known as "communalities":
 * $$h_a^2=\hat{\mathbf{z}}_a\cdot\hat{\mathbf{z}}_a = \sum_p \ell_{ap}\ell_{ap}$$
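
Concretely, with a small hypothetical loading matrix (values invented for illustration), the communalities are the row sums of squared loadings:

```python
import numpy as np

# Hypothetical loadings l_ap for N_a = 3 variables on N_p = 2 factors.
L = np.array([[0.8, 0.3],
              [0.5, 0.6],
              [0.2, 0.1]])

# Communalities: h_a^2 = sum_p l_ap^2 = squared length of z_hat_a.
h2 = (L ** 2).sum(axis=1)
print(h2)   # approximately [0.73 0.61 0.05]

# Each z_hat_a is an orthogonal projection of a unit vector, so h_a^2 <= 1.
print(np.all(h2 <= 1.0))   # True
```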

Large values of the communalities will indicate that the fitting hyperplane is rather accurately reproducing the correlation matrix. The optimization problem stated above is intractable without high-speed computation. Before the advent of high-speed computers, considerable effort was made to arrive at approximate solutions, particularly by estimating the communalities by other means, which simplifies the problem considerably. With the advent of high-speed computers, the minimization problem can be solved quickly and directly, and the communalities are calculated in the process rather than being needed beforehand. The MinRes algorithm is particularly suited to this problem, but is hardly the only means of finding an exact solution.
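
As a sketch only, the minimization can be illustrated with iterated principal-factor extraction on synthetic data (this is one classical route, not the MinRes algorithm itself; every quantity below is invented for the example):

```python
import numpy as np

# Synthetic correlations generated exactly by N_p = 2 factors, so a
# perfect fit of the off-diagonal elements exists. Values are invented.
rng = np.random.default_rng(1)
N_a, N_p = 5, 2
L_true = rng.uniform(-0.7, 0.7, size=(N_a, N_p))
R = L_true @ L_true.T
np.fill_diagonal(R, 1.0)            # observed diagonal is always 1

# Iterated principal factors: replace the diagonal with the current
# communality estimates, take the top-N_p eigenpairs as loadings, repeat.
h2 = np.full(N_a, 0.5)              # initial communality guess
for _ in range(500):
    R_red = R.copy()
    np.fill_diagonal(R_red, h2)     # reduced correlation matrix
    w, V = np.linalg.eigh(R_red)    # eigenvalues in ascending order
    L = V[:, -N_p:] * np.sqrt(np.clip(w[-N_p:], 0.0, None))
    h2 = (L ** 2).sum(axis=1)       # updated communalities

# Sum of squared residuals over the off-diagonal elements only.
mask = ~np.eye(N_a, dtype=bool)
err = ((R - L @ L.T)[mask] ** 2).sum()
print(err < 1e-8)   # True: the off-diagonal correlations are reproduced
```

The diagonal of $$R$$ is excluded from the residual throughout, matching the objective stated above.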