
Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. In other words, it is possible, for example, that variations in three or four observed variables mainly reflect the variations in fewer such unobserved variables. Factor analysis searches for such joint variations in response to unobserved latent variables. The observed variables are modeled as linear combinations of the potential factors, plus "error" terms. The information gained about the interdependencies between observed variables can be used later to reduce the set of variables in a dataset. Computationally this technique is equivalent to low rank approximation of the matrix of observed variables. Factor analysis originated in psychometrics, and is used in behavioral sciences, social sciences, marketing, product management, operations research, and other applied sciences that deal with large quantities of data.

Factor analysis is related to principal component analysis (PCA), but the two are not identical. Latent variable models, including factor analysis, use regression modelling techniques to test hypotheses producing error terms, while PCA is a descriptive statistical technique.[1] There has been significant controversy in the field over the equivalence of the two techniques (see Factor Analysis versus Principal Components Analysis).

Statistical model

Definition

Suppose we have a set of p observable random variables, x_1, ..., x_p, with means μ_1, ..., μ_p.

Suppose for some unknown constants l_{ij} and k unobserved random variables F_j, where i ∈ {1, ..., p} and j ∈ {1, ..., k} with k < p, we have

x_i - μ_i = l_{i1}F_1 + ... + l_{ik}F_k + ε_i.

Here, the ε_i are independently distributed error terms with zero mean and finite variance, which may not be the same for all i. Let Var(ε_i) = ψ_i, so that we have

Cov(ε) = Diag(ψ_1, ..., ψ_p) = Ψ and E(ε) = 0.

In matrix terms, we have

x - μ = LF + ε.

If we have n observations, then we will have the dimensions x_{p×n}, L_{p×k}, and F_{k×n}. Each column of x and F denotes the values for one particular observation, and the matrix L does not vary across observations.

Also we will impose the following assumptions on F:

  1. F and ε are independent.
  2. E(F) = 0 and Cov(F) = I (to make sure that the factors are uncorrelated).

Any solution of the above set of equations following these constraints for F is defined as the factors, and L as the loading matrix.

Suppose Cov(x - μ) = Σ. Then note that from the conditions just imposed on F, we have

Cov(x - μ) = Cov(LF + ε),

or

Σ = L Cov(F) L^T + Cov(ε),

or

Σ = LL^T + Ψ.

Note that for any orthogonal matrix Q, if we set L′ = LQ and F′ = Q^T F, the criteria for being factors and factor loadings still hold. Hence a set of factors and factor loadings is unique only up to an orthogonal transformation.
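
The structure above can be verified numerically. The following minimal sketch (an illustration added for readers, not part of the original text; the loading values and error variances are arbitrary) simulates data from the factor model with numpy, checks that the sample covariance approaches Σ = LL^T + Ψ, and shows that replacing L and F with LQ and Q^T F for an orthogonal Q leaves the implied covariance unchanged.

    import numpy as np

    rng = np.random.default_rng(0)

    p, k, n = 6, 2, 100_000          # observed variables, factors, observations
    L = rng.normal(size=(p, k))      # arbitrary loading matrix (the constants l_ij)
    psi = rng.uniform(0.2, 1.0, p)   # error variances (diagonal of Psi)

    F = rng.normal(size=(k, n))      # factors: zero mean, Cov(F) = I
    eps = rng.normal(size=(p, n)) * np.sqrt(psi)[:, None]  # independent errors
    X = L @ F + eps                  # x - mu = LF + eps (mu taken as zero), one column per observation

    # The sample covariance of x approaches Sigma = L L^T + Psi as n grows.
    Sigma_model = L @ L.T + np.diag(psi)
    print(np.max(np.abs(np.cov(X) - Sigma_model)))   # small for large n

    # Rotational indeterminacy: L' = LQ and F' = Q^T F imply the same Sigma.
    Q, _ = np.linalg.qr(rng.normal(size=(k, k)))     # a random orthogonal matrix
    L2 = L @ Q
    print(np.allclose(L2 @ L2.T + np.diag(psi), Sigma_model))  # True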

Example

The following example is for expository purposes, and should not be taken as being realistic. Suppose a psychologist proposes a theory that there are two kinds of intelligence, "verbal intelligence" and "mathematical intelligence", neither of which is directly observed. Evidence for the theory is sought in the examination scores from each of 10 different academic fields of 1000 students. If each student is chosen randomly from a large population, then each student's 10 scores are random variables. The psychologist's theory may say that for each of the 10 academic fields, the score averaged over the group of all students who share some common pair of values for verbal and mathematical "intelligences" is some constant times their level of verbal intelligence plus another constant times their level of mathematical intelligence, i.e., it is a linear combination of those two "factors". The numbers for a particular subject, by which the two kinds of intelligence are multiplied to obtain the expected score, are posited by the theory to be the same for all intelligence level pairs, and are called "factor loadings" for this subject. For example, the theory may hold that the average student's aptitude in the field of amphibiology is

{10 × the student's verbal intelligence} + {6 × the student's mathematical intelligence}.

The numbers 10 and 6 are the factor loadings associated with amphibiology. Other academic subjects may have different factor loadings.

Two students having identical degrees of verbal intelligence and identical degrees of mathematical intelligence may have different aptitudes in amphibiology because individual aptitudes differ from average aptitudes. That difference is called the "error" — a statistical term that means the amount by which an individual differs from what is average for his or her levels of intelligence (see errors and residuals in statistics).

The observable data that go into factor analysis would be 10 scores of each of the 1000 students, a total of 10,000 numbers. The factor loadings and levels of the two kinds of intelligence of each student must be inferred from the data.

Mathematical model of the same example

In the example above, for i = 1, ..., 1,000 the ith student's scores are

x_{1,i} = μ_1 + l_{1,1}v_i + l_{1,2}m_i + ε_{1,i}
  ⋮
x_{10,i} = μ_{10} + l_{10,1}v_i + l_{10,2}m_i + ε_{10,i}

where

  • x_{k,i} is the ith student's score for the kth subject,
  • μ_k is the mean of the students' scores for the kth subject (assumed to be zero, for simplicity, in the example as described above, which would amount to a simple shift of the scale used),
  • v_i is the ith student's "verbal intelligence",
  • m_i is the ith student's "mathematical intelligence",
  • l_{k,j} are the factor loadings for the kth subject, for j = 1, 2,
  • ε_{k,i} is the difference between the ith student's score in the kth subject and the average score in the kth subject of all students whose levels of verbal and mathematical intelligence are the same as those of the ith student.

In matrix notation, we have

X = μ1^T + LF + ε

where

  • N is 1,000 students,
  • X is a 10 × 1,000 matrix of observable random variables,
  • μ is a 10 × 1 column vector of unobservable constants (in this case "constants" are quantities not differing from one individual student to the next; and "random variables" are those assigned to individual students; the randomness arises from the random way in which the students are chosen) and 1 is a 1,000 × 1 column vector of ones,
  • L is a 10 × 2 matrix of factor loadings (unobservable constants, ten academic topics, each with two intelligence parameters that determine success in that topic),
  • F is a 2 × 1,000 matrix of unobservable random variables (two intelligence parameters for each of 1,000 students),
  • ε is a 10 × 1,000 matrix of unobservable random variables.

Observe that doubling the scale on which "verbal intelligence" (the first component in each column of F) is measured, while simultaneously halving the factor loadings for verbal intelligence, makes no difference to the model. Thus, no generality is lost by assuming that the standard deviation of verbal intelligence is 1. Likewise for mathematical intelligence. Moreover, for similar reasons, no generality is lost by assuming the two factors are uncorrelated with each other. The "errors" ε are taken to be independent of each other. The variances of the "errors" associated with the 10 different subjects are not assumed to be equal.

Note that, since any rotation of a solution is also a solution, this makes interpreting the factors difficult. See disadvantages below. In this particular example, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence without an outside argument.

The values of the loadings L, the averages μ, and the variances of the "errors" ε must be estimated given the observed data X and F (the assumption about the levels of the factors is fixed for a given F).
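
As a concrete check on the dimensions listed above, the short sketch below (illustrative only; the loadings, intelligence values, and error scale are made up) assembles the 10 × 1,000 matrices of this example and forms X = μ1^T + LF + ε with numpy.

    import numpy as np

    rng = np.random.default_rng(1)

    N = 1_000                                  # students
    mu = np.zeros((10, 1))                     # subject means (taken as zero, as in the example)
    L = rng.uniform(1, 10, size=(10, 2))       # factor loadings, e.g. a row could be (10, 6)
    F = rng.normal(size=(2, N))                # verbal and mathematical intelligence per student
    eps = rng.normal(scale=0.5, size=(10, N))  # individual deviations ("errors")

    ones = np.ones((1, N))                     # 1^T, a 1 x 1,000 row of ones
    X = mu @ ones + L @ F + eps                # observed scores

    print(X.shape, mu.shape, L.shape, F.shape, eps.shape)
    # (10, 1000) (10, 1) (10, 2) (2, 1000) (10, 1000)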

Practical implementation

Type of factor analysis

Exploratory factor analysis (EFA) is used to uncover the underlying structure of a relatively large set of variables. The researcher's a priori assumption is that any indicator may be associated with any factor. This is the most common form of factor analysis. There is no prior theory and one uses factor loadings to intuit the factor structure of the data.

Confirmatory factor analysis (CFA) seeks to determine if the number of factors and the loadings of measured (indicator) variables on them conform to what is expected on the basis of pre-established theory. Indicator variables are selected on the basis of prior theory and factor analysis is used to see if they load as predicted on the expected number of factors. The researcher's a priori assumption is that each factor (the number and labels of which may be specified a priori) is associated with a specified subset of indicator variables. A minimum requirement of confirmatory factor analysis is that one hypothesizes beforehand the number of factors in the model, but usually also the researcher will posit expectations about which variables will load on which factors. The researcher seeks to determine, for instance, if measures created to represent a latent variable really belong together.

Types of factoring

Principal component analysis (PCA): The most common form of data reduction, PCA is often confused with factor analysis, although it is not, technically, a form of factor analysis (see Factor Analysis versus Principal Components Analysis). PCA seeks a linear combination of variables such that the maximum variance is extracted from the variables. It then removes this variance and seeks a second linear combination which explains the maximum proportion of the remaining variance, and so on. This is called the principal axis method and results in orthogonal (uncorrelated) factors.

Canonical factor analysis, also called Rao's canonical factoring, is a different method of computing the same model as PCA, which uses the principal axis method. Canonical factor analysis seeks factors which have the highest canonical correlation with the observed variables. Canonical factor analysis is unaffected by arbitrary rescaling of the data.

Common factor analysis, also called principal factor analysis (PFA) or principal axis factoring (PAF), seeks the least number of factors which can account for the common variance (correlation) of a set of variables.

Image factoring: based on the correlation matrix of predicted variables rather than actual variables, where each variable is predicted from the others using multiple regression.

Alpha factoring: based on maximizing the reliability of factors, assuming variables are randomly sampled from a universe of variables. All other methods assume cases to be sampled and variables fixed.

Factor regression model: a combinatorial model of factor model and regression model; or alternatively, it can be viewed as the hybrid factor model,[2] whose factors are partially known.

Terminology

Factor loadings: The factor loadings, also called component loadings in PCA, are the correlation coefficients between the variables (rows) and factors (columns). Analogous to Pearson's r, the squared factor loading is the percent of variance in that indicator variable explained by the factor. To get the percent of variance in all the variables accounted for by each factor, add the sum of the squared factor loadings for that factor (column) and divide by the number of variables. (Note the number of variables equals the sum of their variances as the variance of a standardized variable is 1.) This is the same as dividing the factor's eigenvalue by the number of variables.

Interpreting factor loadings: By one rule of thumb in confirmatory factor analysis, loadings should be .7 or higher to confirm that independent variables identified a priori are represented by a particular factor, on the rationale that the .7 level corresponds to about half of the variance in the indicator being explained by the factor. However, the .7 standard is a high one and real-life data may well not meet this criterion, which is why some researchers, particularly for exploratory purposes, will use a lower level such as .4 for the central factor and .25 for other factors; others call loadings above .6 "high" and those below .4 "low". In any event, factor loadings must be interpreted in the light of theory, not by arbitrary cutoff levels.

In oblique rotation, one gets both a pattern matrix and a structure matrix. The structure matrix is simply the factor loading matrix as in orthogonal rotation, representing the variance in a measured variable explained by a factor on both a unique and common contributions basis. The pattern matrix, in contrast, contains coefficients which just represent unique contributions. The more factors, the lower the pattern coefficients as a rule since there will be more common contributions to variance explained. For oblique rotation, the researcher looks at both the structure and pattern coefficients when attributing a label to a factor.

Communality: The sum of the squared factor loadings for all factors for a given variable (row) is the variance in that variable accounted for by all the factors, and this is called the communality. The communality measures the percent of variance in a given variable explained by all the factors jointly and may be interpreted as the reliability of the indicator.

Spurious solutions: If the communality exceeds 1.0, there is a spurious solution, which may reflect too small a sample or the researcher has too many or too few factors.

Uniqueness of a variable: The variability of a variable minus its communality.

Eigenvalues/characteristic roots: The eigenvalue for a given factor measures the variance in all the variables which is accounted for by that factor. The ratio of eigenvalues is the ratio of explanatory importance of the factors with respect to the variables. If a factor has a low eigenvalue, then it is contributing little to the explanation of variances in the variables and may be ignored as redundant with more important factors. Eigenvalues measure the amount of variation in the total sample accounted for by each factor.

Extraction sums of squared loadings: Initial eigenvalues and eigenvalues after extraction (listed by SPSS as "Extraction Sums of Squared Loadings") are the same for PCA extraction, but for other extraction methods, eigenvalues after extraction will be lower than their initial counterparts. SPSS also prints "Rotation Sums of Squared Loadings" and even for PCA, these eigenvalues will differ from initial and extraction eigenvalues, though their total will be the same.

Factor scores (also called component scores in PCA): are the scores of each case (row) on each factor (column). To compute the factor score for a given case for a given factor, one takes the case's standardized score on each variable, multiplies by the corresponding factor loading of the variable for the given factor, and sums these products. Computing factor scores allows one to look for factor outliers. Also, factor scores may be used as variables in subsequent modeling.
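
The quantities defined above can be read directly off a loading matrix. The sketch below uses a hypothetical two-factor loading matrix (chosen purely for illustration) to compute squared loadings, communalities, uniquenesses, per-factor sums of squared loadings (the eigenvalues), the percent of variance each factor accounts for, and factor scores formed from standardized data as described above.

    import numpy as np

    # Hypothetical loading matrix: 5 variables (rows) x 2 orthogonal factors (columns).
    loadings = np.array([
        [0.80, 0.10],
        [0.75, 0.05],
        [0.70, 0.20],
        [0.15, 0.65],
        [0.10, 0.70],
    ])

    sq = loadings**2
    communality = sq.sum(axis=1)            # variance in each variable explained by all factors
    uniqueness = 1.0 - communality          # variability of a variable minus its communality
    eigenvalue = sq.sum(axis=0)             # per-factor sum of squared loadings
    pct_variance = eigenvalue / loadings.shape[0]   # divide by the number of variables

    print("communalities:", communality)
    print("uniquenesses:", uniqueness)
    print("eigenvalues:", eigenvalue)
    print("percent of variance per factor:", pct_variance)

    # Factor scores: standardize each variable, weight by its loading, and sum.
    rng = np.random.default_rng(2)
    data = rng.normal(size=(200, 5))                         # stand-in observed data
    z = (data - data.mean(axis=0)) / data.std(axis=0)        # standardized scores
    factor_scores = z @ loadings                             # one score per case per factor
    print(factor_scores.shape)                               # (200, 2)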

Criteria for determining the number of factors

Using one or more of the methods below, the researcher determines an appropriate range of solutions to investigate. Methods may not agree. For instance, the Kaiser criterion may suggest five factors and the scree test may suggest two, so the researcher may request 3-, 4-, and 5-factor solutions and discuss each in terms of their relation to external data and theory.

Comprehensibility: A purely subjective criterion would be to retain those factors whose meaning is comprehensible to the researcher. This is not recommended [citation needed].

Kaiser criterion: The Kaiser rule is to drop all components with eigenvalues under 1.0 – this being the eigenvalue equal to the information accounted for by an average single item. The Kaiser criterion is the default in SPSS and most statistical software but is not recommended when used as the sole cut-off criterion for estimating the number of factors as it tends to overextract factors.[3]

Variance explained criteria: Some researchers simply use the rule of keeping enough factors to account for 90% (sometimes 80%) of the variation. Where the researcher's goal emphasizes parsimony (explaining variance with as few factors as possible), the criterion could be as low as 50%.

Scree plot: The Cattell scree test plots the components as the X axis and the corresponding eigenvalues as the Y-axis. As one moves to the right, toward later components, the eigenvalues drop. When the drop ceases and the curve makes an elbow toward less steep decline, Cattell's scree test says to drop all further components after the one starting the elbow. This rule is sometimes criticised for being amenable to researcher-controlled "fudging". That is, as picking the "elbow" can be subjective because the curve has multiple elbows or is a smooth curve, the researcher may be tempted to set the cut-off at the number of factors desired by his or her research agenda.

Horn's Parallel Analysis (PA): A Monte-Carlo based simulation method that compares the observed eigenvalues with those obtained from uncorrelated normal variables. A factor or component is retained if the associated eigenvalue is bigger than the 95th percentile of the distribution of eigenvalues derived from the random data. PA is one of the most strongly recommended rules for determining the number of components to retain,[citation needed] but few programs include this option.[4]

Before dropping a factor below one's cut-off, however, the researcher should check its correlation with the dependent variable. A very small factor can have a large correlation with the dependent variable, in which case it should not be dropped.
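
Several of these rules can be applied mechanically to the eigenvalues of a correlation matrix. The sketch below (a simplified illustration on simulated data; the dataset, the 80% threshold, and the number of random replicates are arbitrary choices) applies the Kaiser rule, a variance-explained rule, and a basic version of Horn's parallel analysis.

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated data: 300 cases, 8 variables organised in two correlated blocks.
    n, p = 300, 8
    F = rng.normal(size=(n, 2))
    load = np.zeros((p, 2)); load[:4, 0] = 0.8; load[4:, 1] = 0.8
    X = F @ load.T + rng.normal(scale=0.6, size=(n, p))

    eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

    # Kaiser rule: keep components with eigenvalue greater than 1.
    kaiser = np.sum(eig > 1.0)

    # Variance-explained rule: smallest number of components reaching 80% of the variance.
    variance_80 = np.searchsorted(np.cumsum(eig) / p, 0.80) + 1

    # Parallel analysis: compare with the 95th percentile of eigenvalues from
    # uncorrelated normal data of the same shape.
    rand_eigs = np.array([
        np.sort(np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, p)), rowvar=False)))[::-1]
        for _ in range(500)
    ])
    threshold = np.percentile(rand_eigs, 95, axis=0)
    parallel = np.sum(eig > threshold)

    print("Kaiser:", kaiser, " 80% variance:", variance_80, " parallel analysis:", parallel)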

Rotation methods

The unrotated output maximises the variance accounted for by the first and subsequent factors, and forces the factors to be orthogonal. This data-compression comes at the cost of having most items load on the early factors, and usually, of having many items load substantially on more than one factor. Rotation serves to make the output more understandable, by seeking so-called "Simple Structure": a pattern of loadings where items load most strongly on one factor, and much more weakly on the other factors. Rotations can be orthogonal or oblique (allowing the factors to correlate).

Varimax rotation is an orthogonal rotation of the factor axes to maximize the variance of the squared loadings of a factor (column) on all the variables (rows) in a factor matrix, which has the effect of differentiating the original variables by extracted factor. Each factor will tend to have either large or small loadings of any particular variable. A varimax solution yields results which make it as easy as possible to identify each variable with a single factor. This is the most common rotation option.

Quartimax rotation is an orthogonal alternative which minimizes the number of factors needed to explain each variable. This type of rotation often generates a general factor on which most variables are loaded to a high or medium degree. Such a factor structure is usually not helpful to the research purpose.

Equimax rotation is a compromise between Varimax and Quartimax criteria.

Direct oblimin rotation is the standard method when one wishes a non-orthogonal (oblique) solution – that is, one in which the factors are allowed to be correlated. This will result in higher eigenvalues but diminished interpretability of the factors. See below.

Promax rotation is an alternative non-orthogonal (oblique) rotation method which is computationally faster than the direct oblimin method and therefore is sometimes used for very large datasets.
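
Varimax rotation is simple to implement directly. The sketch below is one common SVD-based formulation of Kaiser's varimax criterion (included as an illustration, not taken from the article); it iteratively finds an orthogonal rotation matrix that maximizes the variance of the squared loadings in each column of the loading matrix.

    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Orthogonally rotate a loading matrix toward simple structure."""
        p, k = loadings.shape
        rotation = np.eye(k)
        criterion = 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            # Target matrix for the varimax criterion (gamma = 1 gives varimax).
            target = rotated**3 - (gamma / p) * rotated @ np.diag((rotated**2).sum(axis=0))
            u, s, vt = np.linalg.svd(loadings.T @ target)
            rotation = u @ vt
            new_criterion = s.sum()
            if new_criterion < criterion * (1 + tol):
                break
            criterion = new_criterion
        return loadings @ rotation, rotation

    # Example: an arbitrary unrotated two-factor loading matrix.
    L = np.array([[0.7, 0.5],
                  [0.6, 0.6],
                  [0.5, -0.6],
                  [0.6, -0.5]])
    rotated, R = varimax(L)
    print(np.round(rotated, 2))              # each variable now loads mainly on one factor
    print(np.allclose(R @ R.T, np.eye(2)))   # the rotation matrix is orthogonal: True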

Factor analysis in psychometrics

History

Charles Spearman pioneered the use of factor analysis in the field of psychology and is sometimes credited with the invention of factor analysis. He discovered that school children's scores on a wide variety of seemingly unrelated subjects were positively correlated, which led him to postulate that a general mental ability, or g, underlies and shapes human cognitive performance. His postulate now enjoys broad support in the field of intelligence research, where it is known as the g theory.

Raymond Cattell expanded on Spearman's idea of a two-factor theory of intelligence after performing his own tests and factor analysis. He used a multi-factor theory to explain intelligence. Cattell's theory addressed alternate factors in intellectual development, including motivation and psychology. Cattell also developed several mathematical methods for adjusting psychometric graphs, such as his "scree" test and similarity coefficients. His research led to the development of his theory of fluid and crystallized intelligence, as well as his 16 Personality Factors theory of personality. Cattell was a strong advocate of factor analysis and psychometrics. He believed that all theory should be derived from research, which supports the continued use of empirical observation and objective testing to study human intelligence.

Applications in psychology

Factor analysis is used to identify "factors" that explain a variety of results on different tests. For example, intelligence research found that people who get a high score on a test of verbal ability are also good on other tests that require verbal abilities. Researchers explained this by using factor analysis to isolate one factor, often called crystallized intelligence or verbal intelligence, which represents the degree to which someone is able to solve problems involving verbal skills.

Factor analysis in psychology is most often associated with intelligence research. However, it also has been used to find factors in a broad range of domains such as personality, attitudes, beliefs, etc. It is linked to psychometrics, as it can assess the validity of an instrument by finding if the instrument indeed measures the postulated factors.

Advantages
  • Reduction of number of variables, by combining two or more variables into a single factor. For example, performance at running, ball throwing, batting, jumping and weight lifting could be combined into a single factor such as general athletic ability. Usually, in an item by people matrix, factors are selected by grouping related items. In the Q factor analysis technique, the matrix is transposed and factors are created by grouping related people: For example, liberals, libertarians, conservatives and socialists, could form separate groups.
  • Identification of groups of inter-related variables, to see how they are related to each other. For example, Carroll used factor analysis to build his Three Stratum Theory. He found that a factor called "broad visual perception" relates to how good an individual is at visual tasks. He also found a "broad auditory perception" factor, relating to auditory task capability. Furthermore, he found a global factor, called "g" or general intelligence, that relates to both "broad visual perception" and "broad auditory perception". This means someone with a high "g" is likely to have both a high "visual perception" capability and a high "auditory perception" capability, and that "g" therefore explains a good part of why someone is good or bad in both of those domains.

Disadvantages
  • "...each orientation is equally acceptable mathematically. But different factorial theories proved to differ as much in terms of the orientations of factorial axes for a given solution as in terms of anything else, so that model fitting did not prove to be useful in distinguishing among theories." (Sternberg, 1977[5]). This means all rotations represent different underlying processes, but all rotations are equally valid outcomes of standard factor analysis optimization. Therefore, it is impossible to pick the proper rotation using factor analysis alone.
  • Factor analysis can be only as good as the data allows. In psychology, where researchers often have to rely on less valid and reliable measures such as self-reports, this can be problematic.
  • Interpreting factor analysis is based on using a "heuristic", which is a solution that is "convenient even if not absolutely true".[6] More than one interpretation can be made of the same data factored the same way, and factor analysis cannot identify causality.

Factor Analysis versus Principal Components Analysis

There has been controversy over the synonymity with which factor analysis and Principal component analysis are treated in statistics (e.g. Fabrigar et al., 1999[7]; Suhr, 2009[8]). In factor analysis, the researcher makes the assumption that an underlying causal model exists, whereas PCA is simply a variable reduction technique.[9] Researchers have argued that the distinctions between the two techniques merit consideration when employing one technique over the other.

Arguments for/against PCA over FA

Fabrigar et al. (1999)[7] address a number of reasons for which some researchers will argue that principal components analysis should be used as a version of factor analysis:

  1. It is sometimes suggested that principal components analysis is computationally quicker and requires fewer resources than factor analysis. Fabrigar et al. suggest that this issue is made redundant by the vast computer resources readily available today.[7]
  2. PCA and factor analysis can produce similar results. This point is also addressed by Fabrigar et al.; in certain cases, whereby the communalities are low (e.g., .40), the two techniques do not produce equivalent results. In fact, Fabrigar et al. argue that in cases where the data correspond to assumptions of the common factor model, PCA does not provide accurate results.[7]
  3. There are certain cases whereby 'Heywood cases' result in factor analysis. These encompass situations whereby 100% or more of the variance in a measured variable is estimated to be accounted for by the model. Fabrigar et al. suggest that these cases are informative to the researcher, indicating a misspecified model or a violation of the common factor model. The lack of Heywood cases in the PCA approach may mean that such issues pass unnoticed.[7]
  4. Researchers gain extra information from a PCA approach, such as an individual’s score on a certain component – such information is not yielded from factor analysis. However, as Fabrigar et al. contend, the typical aim of factor analysis – i.e. to determine the factors accounting for the structure of the correlations between measured variables – does not require knowledge of factor scores.[7]

Variance versus covariance

Factor analysis takes into account the random error that is inherent to psychological research measures, whereas PCA fails to do so. This point is exemplified by Brown (2009),[10] who indicated that, in respect to the correlation matrices involved in the calculations:

"In PCA, 1.00s are put in the diagonal meaning that all of the variance in the matrix is to be accounted for (including variance unique to each variable, variance common among variables, and error variance). That would, therefore, by definition, include all of the variance in the variables. In contrast, in EFA, the communalities are put in the diagonal meaning that only the variance shared with other variables is to be accounted for (excluding variance unique to each variable and error variance). That would, therefore, by definition, include only variance that is common among the variables."

— Brown (2009), Principal components analysis and exploratory factor analysis – Definitions, differences and choices

For this reason, Brown (2009) recommends using factor analysis when theoretical ideas about relationships between variables exist, whereas PCA should be used if the goal of the researcher is to explore patterns in their data.
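
The diagonal difference that Brown describes is easy to see computationally. In the sketch below (illustrative; the data are simulated, and the initial communality estimates use squared multiple correlations, one common choice among several), PCA eigendecomposes the correlation matrix with 1.00s on the diagonal, while a principal-axis style extraction first replaces the diagonal with communality estimates so that only shared variance is analysed.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, 6))
    X[:, 1] += 0.8 * X[:, 0]           # induce some shared variance
    X[:, 3] += 0.8 * X[:, 2]

    R = np.corrcoef(X, rowvar=False)   # correlation matrix, 1.00s on the diagonal

    # PCA: analyse R as is (common + unique + error variance).
    pca_eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

    # EFA (principal-axis flavour): replace the diagonal with communality estimates,
    # here each variable's squared multiple correlation with the other variables.
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)
    efa_eigvals = np.sort(np.linalg.eigvalsh(R_reduced))[::-1]

    print("diagonal analysed by PCA:", np.diag(R))        # all ones
    print("diagonal analysed by EFA:", np.round(smc, 2))  # only shared variance
    print("PCA eigenvalues:", np.round(pca_eigvals, 2))
    print("EFA eigenvalues:", np.round(efa_eigvals, 2))   # smaller, since unique variance is excluded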

Differences in procedure and results

The differences between principal components analysis and factor analysis are further illustrated by Suhr (2009):

  • PCA results in principal components that account for a maximal amount of variance for observed variables; FA accounts for common variance in the data.[8]
  • PCA inserts ones on the diagonals of the correlation matrix; FA adjusts the diagonals of the correlation matrix with the unique factors.[8]
  • PCA minimizes the sum of squared perpendicular distance to the component axis; FA estimates factors which influence responses on observed variables.[8]
  • The component scores in PCA represent a linear combination of the observed variables weighted by eigenvectors; the observed variables in FA are linear combinations of the underlying and unique factors.[8]
  • In PCA, the components yielded are uninterpretable, i.e. they do not represent underlying ‘constructs’; in FA, the underlying constructs can be labeled and readily interpreted, given an accurate model specification.[8]

Factor analysis in marketing

The basic steps are:

  • Identify the salient attributes consumers use to evaluate products in this category.
  • Use quantitative marketing research techniques (such as surveys) to collect data from a sample of potential customers concerning their ratings of all the product attributes.
  • Input the data into a statistical program and run the factor analysis procedure. The computer will yield a set of underlying attributes (or factors).
  • Use these factors to construct perceptual maps and other product positioning devices.

Information collection

The data collection stage is usually done by marketing research professionals. Survey questions ask the respondent to rate a product sample or descriptions of product concepts on a range of attributes. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products is coded and input into a statistical program such as R, PSPP, SAS, Stata, STATISTICA, JMP and SYSTAT.

Analysis

The analysis will isolate the underlying factors that explain the data. Factor analysis is an interdependence technique. The complete set of interdependent relationships is examined. There is no specification of dependent variables, independent variables, or causality. Factor analysis assumes that all the rating data on different attributes can be reduced down to a few important dimensions. This reduction is possible because the attributes are related. The rating given to any one attribute is partially the result of the influence of other attributes. The statistical algorithm deconstructs the rating (called a raw score) into its various components, and reconstructs the partial scores into underlying factor scores. The degree of correlation between the initial raw score and the final factor score is called a factor loading.

Note that principal component analysis and common factor analysis differ in their conceptual underpinnings. The factors produced by principal component analysis are conceptualized as linear combinations of the variables, whereas the factors produced by common factor analysis are conceptualized as latent variables. Computationally, the only difference is that in common factor analysis the diagonal of the relationships matrix is replaced with communalities (the variance accounted for by more than one variable). This makes the factor scores indeterminate, so they differ depending on the method of computation, whereas factor scores produced by principal component analysis do not depend on the method of computation. Although there have been heated debates over the merits of the two methods, a number of leading statisticians have concluded that in practice there is little difference (Velicer & Jackson, 1990), which makes sense since the computations are quite similar despite the differing conceptual bases, especially for datasets where communalities are high and/or there are many variables, reducing the influence of the diagonal of the relationship matrix on the final result (Gorsuch, 1983).

The use of principal components in a semantic space can vary somewhat because the components may only "predict" but not "map" to the vector space. This produces a statistical principal component use where the most salient words or themes represent the preferred basis.
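
The statement above that the degree of correlation between an initial raw score and the final factor score is the factor loading can be illustrated numerically. In the following minimal sketch (simulated rating data with made-up loadings, not drawn from the original text), the correlation of each standardized attribute with an underlying factor approximately reproduces the corresponding loading, as expected when the factors are uncorrelated and the variables are standardized.

    import numpy as np

    rng = np.random.default_rng(5)

    n = 20_000                                    # simulated respondents
    true_loadings = np.array([[0.8, 0.1],         # 5 product attributes x 2 factors
                              [0.7, 0.2],
                              [0.6, 0.0],
                              [0.1, 0.8],
                              [0.2, 0.7]])
    factors = rng.normal(size=(n, 2))             # uncorrelated underlying factors
    unique_sd = np.sqrt(1.0 - (true_loadings**2).sum(axis=1))
    ratings = factors @ true_loadings.T + rng.normal(size=(n, 5)) * unique_sd

    z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)   # standardized raw scores
    estimated_loadings = z.T @ factors / n        # correlation of each attribute with each factor

    print(np.round(estimated_loadings, 2))        # close to true_loadings for large n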

Advantages
  • Both objective and subjective attributes can be used provided the subjective attributes can be converted into scores
  • Factor Analysis can be used to identify hidden dimensions or constructs which may not be apparent from direct analysis
  • It is easy and inexpensive to do

Disadvantages
  • Usefulness depends on the researchers' ability to collect a sufficient set of product attributes. If important attributes are missed the value of the procedure is reduced.
  • If sets of observed variables are highly similar to each other and distinct from other items, factor analysis will assign a single factor to them. This may make it harder to identify factors that capture more interesting relationships.
  • Naming the factors may require background knowledge or theory because multiple attributes can be highly correlated for no apparent reason.

Factor analysis in physical sciences

Factor analysis has also been widely used in physical sciences such as geochemistry, ecology, and hydrochemistry.[11]

In groundwater quality management, it is important to relate the spatial distribution of different chemical parameters to different possible sources, which have different chemical signatures. For example, a sulfide mine is likely to be associated with high levels of acidity, dissolved sulfates and transition metals. These signatures can be identified as factors through R-mode factor analysis, and the location of possible sources can be suggested by contouring the factor scores.[12]

In geochemistry, different factors can correspond to different mineral associations, and thus to mineralisation.[13]

Factor analysis in microarray analysis

Factor analysis can be used for summarizing high-density oligonucleotide DNA microarrays data at probe level for Affymetrix GeneChips. In this case, the latent variable corresponds to the RNA concentration in a sample.[14]

Implementation

Factor analysis has been implemented in several statistical analysis programs since the 1980s: SAS, BMDP and SPSS.[15] It is also implemented in the R programming language (with the factanal function) and in OpenOpt. Rotations are implemented in the GPArotation R package.
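
For readers working in Python rather than R, a comparable fit can be obtained with scikit-learn's FactorAnalysis estimator. The sketch below is a hedged example of one such implementation (scikit-learn is not mentioned in the original text, and the data here are placeholders); it fits a two-factor maximum-likelihood model and extracts loadings, unique variances, and factor scores.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(6)

    # Placeholder data: 200 cases, 6 observed variables driven by 2 latent factors.
    latent = rng.normal(size=(200, 2))
    loadings = rng.uniform(0.4, 0.9, size=(6, 2))
    X = latent @ loadings.T + rng.normal(scale=0.5, size=(200, 6))

    fa = FactorAnalysis(n_components=2)   # recent scikit-learn versions also accept rotation="varimax"
    fa.fit(X)

    print(fa.components_.T)         # estimated loading matrix (variables x factors)
    print(fa.noise_variance_)       # estimated unique (error) variances
    scores = fa.transform(X)        # factor scores for each case
    print(scores.shape)             # (200, 2)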


References
  1. Bartholomew, D. J., Steele, F., Galbraith, J., & Moustaki, I. (2008). Analysis of Multivariate Social Science Data (2nd ed.). New York: Chapman & Hall/CRC.
  2. Meng, J. (2011). "Uncover cooperative gene regulations by microRNAs and transcription factors in glioblastoma using a nonnegative hybrid factor model". International Conference on Acoustics, Speech and Signal Processing.
  3. Bandalos, D. L., & Boehm-Kaufman, M. R. (2009). "Four common misconceptions in exploratory factor analysis". In Lance, Charles E., & Vandenberg, Robert J. (Eds.), Statistical and Methodological Myths and Urban Legends: Doctrine, Verity and Fable in the Organizational and Social Sciences. New York: Routledge. pp. 61–87.
  4. Ledesma, R. D., & Valero-Mora, P. (2007). "Determining the Number of Factors to Retain in EFA: An easy-to-use computer program for carrying out Parallel Analysis". Practical Assessment, Research & Evaluation, 12(2), 1–11.
  5. Sternberg, R. J. (1977). Metaphors of Mind: Conceptions of the Nature of Intelligence. New York: Cambridge. pp. 85–111.
  6. Darlington, Richard B. (2004). "Factor Analysis". Retrieved July 22, 2004.
  7. Fabrigar, L. R.; et al. (1999). "Evaluating the use of exploratory factor analysis in psychological research" (PDF). Psychological Methods.
  8. Suhr, Diane (2009). "Principal component analysis vs. exploratory factor analysis" (PDF). SUGI 30 Proceedings. Retrieved 5 April 2012.
  9. SAS Statistics. "Principal Components Analysis" (PDF). SAS Support Textbook.
  10. Brown, James Dean (2009). "Principal components analysis and exploratory factor analysis – Definitions, differences and choices" (PDF). Shiken: JALT Testing & Evaluation SIG Newsletter. Retrieved 16 April 2012.
  11. Subbarao, C.; Subbarao, N. V.; Chandu, S. N. (1995). "Characterisation of groundwater contamination using factor analysis". Environmental Geology, 28, 175–180.
  12. Love, D.; Hallbauer, D. K.; Amos, A.; Hranova, R. K. (2004). "Factor analysis as a tool in groundwater quality management: two southern African case studies". Physics and Chemistry of the Earth, 29, 1135–1143. doi:10.1016/j.pce.2004.09.027
  13. Barton, E. S.; Hallbauer, D. K. (1996). "Trace-element and U–Pb isotope compositions of pyrite types in the Proterozoic Black Reef, Transvaal Sequence, South Africa: Implications on genesis and age". Chemical Geology, 133, 173–199. doi:10.1016/S0009-2541(96)00075-7
  14. Hochreiter, Sepp; Clevert, Djork-Arné; Obermayer, Klaus (2006). "A new summarization method for Affymetrix probe level data". Bioinformatics, 22(8), 943–949.
  15. MacCallum, Robert (June 1983). "A comparison of factor analysis programs in SPSS, BMDP, and SAS". Psychometrika, 48(2), 223–231. doi:10.1007/BF02294017

Further reading
  • Child, Dennis (1973), The Essentials of Factor Analysis, London: Holt, Rinehart & Winston
  • Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299.
  • Gorsuch, R. L. (1983). Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum. ISBN 0-89859-202-X.
  • Velicer, W. F., & Jackson, D. N. (1990). Component analysis versus common factor analysis: Some issues in selecting an appropriate procedure. Multivariate Behavioral Research, 25(1), 1–28.