8. SEM | SPSS AMOS - Confirmatory Factor Analysis (CFA): Measurement Model and Analyzing AMOS Output
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Confirmatory Factor Analysis (CFA) in IBM SPSS AMOS is built in two stages: first specify a measurement model for each latent construct, then examine AMOS output to judge whether the indicators measure the intended concepts well. In this example, servant leadership is treated as a latent variable with seven observed indicators (SL1–SL7), while financial performance is another latent variable with five indicators (FP1–FP5). Both latent variables are allowed to covary, and the model is run with standardized estimates, squared multiple correlations, modification indices, and correlation outputs enabled.
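The specification described above can also be sketched outside the AMOS GUI. The snippet below builds the same two measurement models as a lavaan-style specification string (the syntax used by the Python package semopy, which is an assumption here, since the transcript works entirely in the AMOS point-and-click interface):

```python
# Sketch: express the transcript's measurement model in lavaan-style syntax.
# semopy (a Python SEM package) accepts strings of this form; semopy itself
# is not part of the transcript's AMOS workflow and is an assumption here.

def measurement_spec(constructs):
    """Build a lavaan-style CFA specification from {latent: [indicators]}."""
    return "\n".join(
        f"{latent} =~ " + " + ".join(indicators)
        for latent, indicators in constructs.items()
    )

spec = measurement_spec({
    "SL": [f"SL{i}" for i in range(1, 8)],  # servant leadership, SL1-SL7
    "FP": [f"FP{i}" for i in range(1, 6)],  # financial performance, FP1-FP5
})
print(spec)
# With semopy one would then (hypothetically) run:
#   model = semopy.Model(spec); model.fit(data)
```

In this syntax the `=~` operator reads "is measured by", mirroring the single-headed arrows drawn from each latent variable to its indicators in the AMOS diagram; the covariance between SL and FP is estimated by default.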
The transcript walks through the AMOS workflow: load the dataset, place the latent variables on the canvas, drag each indicator into its corresponding measurement box, rename latent constructs (SL for servant leadership and FP for financial performance), and define error terms for the observed variables. Covariances between latent variables are added using double-headed arrows, after which the analysis is configured under analysis properties and the model is executed. Once computation completes successfully, the focus shifts to interpreting the AMOS diagram and text output.
Reading the output starts with understanding what each number represents. Values printed on the single-headed arrows are standardized regression weights (i.e., factor loadings), while the value in the middle of a double-headed arrow is the correlation estimate between the latent variables. The squared multiple correlation (SMC) is described as the square of the relevant standardized loading, and it indicates how much variance in an indicator is explained by its latent factor.
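These relationships can be made concrete with a quick numeric sketch (all values are hypothetical; the transcript does not give concrete numbers at this point):

```python
import math

def smc(std_loading):
    """Squared multiple correlation: the share of an indicator's variance
    explained by its latent factor (the squared standardized loading)."""
    return std_loading ** 2

def latent_correlation(cov, var_a, var_b):
    """Correlation implied by an unstandardized covariance between two
    latent variables with the given variances."""
    return cov / math.sqrt(var_a * var_b)

# Hypothetical values for illustration:
print(round(smc(0.80), 2))                             # 0.64: the factor explains 64% of the indicator's variance
print(round(latent_correlation(0.30, 0.90, 0.60), 2))  # 0.41
```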
The output tree is then broken down into key sections. “Notes for the model” reports chi-square and degrees of freedom, while “Notes for the group” includes sample size and whether the model is recursive. A “Variable summary” lists which elements are treated as independent (latent variables and error terms) versus dependent (observed indicators). The “Parameter summary” and especially the “Estimates” table provide unstandardized regression weights, standard errors, critical ratios (t values), and p values; standardized regression weights are treated as factor loadings. The transcript highlights a practical rule of thumb: most standardized loadings should be above about 0.70, and here two indicators (SL1 and SL7) fall short.
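The two checks described here, computing a critical ratio from an estimate and its standard error, and flagging loadings below the ~0.70 rule of thumb, can be sketched as follows (the loading values are invented for illustration; only the SL1/SL7 shortfall mirrors the transcript):

```python
def critical_ratio(estimate, std_error):
    """AMOS critical ratio (C.R.): unstandardized estimate / standard error."""
    return estimate / std_error

def weak_indicators(loadings, threshold=0.70):
    """Return indicators whose standardized loading falls below the
    rule-of-thumb threshold cited in the transcript (~0.70)."""
    return [name for name, value in loadings.items() if value < threshold]

# Hypothetical standardized loadings:
loadings = {"SL1": 0.62, "SL2": 0.81, "SL7": 0.55, "FP1": 0.78}
print(weak_indicators(loadings))  # ['SL1', 'SL7']
```

A critical ratio beyond roughly ±1.96 corresponds to a loading that is significant at the 5% level, which is why the C.R. and p-value columns are read together.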
Beyond loadings, the “Estimates” table also includes covariance/correlation information, variances, and squared multiple correlations. Modification indices are flagged as a major tool when model fit is poor: each index quantifies how much the chi-square statistic would drop if a specific additional covariance were freed (for example, between error terms such as e11 and e12). However, the transcript warns against using modification indices to justify inappropriate changes in CFA: specifically, adding covariances that directly link observed indicators, or adding regression paths, in ways that contradict the measurement-model structure.
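A minimal sketch of this guardrail, assuming the AMOS default naming convention for error terms (e1, e2, …) and a hypothetical list of suggestions (the MI values are invented):

```python
def admissible_mi_suggestions(suggestions):
    """Keep only modification-index suggestions that free a covariance
    between two error terms (e.g. e11 <-> e12); discard anything that
    would add regression paths or link other variables directly."""
    return [
        s for s in suggestions
        if s["op"] == "covariance"
        and s["a"].startswith("e")
        and s["b"].startswith("e")
    ]

# Hypothetical modification-index output:
suggestions = [
    {"op": "covariance", "a": "e11", "b": "e12", "mi": 14.2},  # expected chi-square drop
    {"op": "regression", "a": "SL3", "b": "FP", "mi": 9.8},    # inappropriate in a CFA
]
print(admissible_mi_suggestions(suggestions))
```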
Finally, model fit is assessed using common covariance-based SEM criteria. The transcript lists typical acceptance thresholds: the chi-square test should be non-significant, RMSEA should be below 0.08, and fit indices such as CFI, TLI, NFI, and GFI/AGFI should exceed 0.90, with the ratio of chi-square to degrees of freedom (CMIN/DF) typically below 3. In this run, the fit statistics fall outside acceptable ranges, signaling that the indicators do not adequately represent the intended constructs. The transcript closes by noting that the next step, covered in later sessions, is using modification indices and other strategies to improve measurement quality and overall model fit.
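The threshold checks can be collected into one helper (the example numbers are hypothetical, chosen only to illustrate a poor-fit run like the one in the transcript):

```python
def fit_acceptable(chisq_p, cmin_df, rmsea, cfi, tli, nfi):
    """Apply the transcript's rule-of-thumb thresholds for CB-SEM fit:
    non-significant chi-square, CMIN/DF < 3, RMSEA < 0.08, and
    incremental indices (CFI/TLI/NFI) above 0.90."""
    return {
        "chi_square_ns": chisq_p > 0.05,
        "cmin_df_ok": cmin_df < 3,
        "rmsea_ok": rmsea < 0.08,
        "cfi_ok": cfi > 0.90,
        "tli_ok": tli > 0.90,
        "nfi_ok": nfi > 0.90,
    }

# Hypothetical poor-fit run, in the spirit of the transcript's result:
report = fit_acceptable(chisq_p=0.001, cmin_df=4.2, rmsea=0.11,
                        cfi=0.85, tli=0.84, nfi=0.82)
print(all(report.values()))  # False: the model fails several criteria
```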
Cornell Notes
The CFA workflow in AMOS starts by specifying measurement models for each latent construct: servant leadership (7 indicators) and financial performance (5 indicators). After loading data and placing indicators under their latent variables, the model adds error terms and allows the two latent variables to covary, then runs the analysis with standardized estimates and modification indices. Output interpretation centers on factor loadings (standardized regression weights on arrows), latent correlation (value on double-headed arrows), and squared multiple correlations (variance explained by the factor). The transcript emphasizes that most loadings should be above ~0.70 and that poor fit is judged using chi-square, RMSEA, CFI/TLI/NFI, and CMIN/DF. When fit is weak, modification indices can suggest freeing specific error covariances, but inappropriate paths must be avoided.
- How does AMOS output distinguish factor loadings, latent correlations, and explained variance?
- What does the “Estimates” table provide, and which columns matter most for CFA measurement quality?
- Why are modification indices useful in CFA, and what do they quantify?
- What modification-index guidance should be followed to avoid invalid CFA changes?
- Which fit indices are used to judge whether the measurement model is acceptable, and what thresholds are cited?
- What does the “Variable summary” reveal about which elements are treated as independent vs dependent?
Review Questions
- In AMOS CFA output, how would you use standardized regression weights to decide whether an indicator is a good measure of its latent construct?
- What does a modification index value represent, and why might freeing an error covariance improve model fit without breaking CFA assumptions?
- Which combination of fit indices (chi-square, RMSEA, CFI/TLI/NFI, and CMIN/DF) would you check first when evaluating whether a CFA measurement model is acceptable?
Key Points
1. Build CFA in AMOS by assigning each observed indicator to its intended latent construct (servant leadership: SL1–SL7; financial performance: FP1–FP5).
2. Add error terms for observed indicators and specify the covariance between latent variables before running the model.
3. Interpret standardized regression weights on arrows as factor loadings; the double-headed arrow value is the latent correlation estimate.
4. Use squared multiple correlations to understand how much variance each indicator shares with its latent factor (SMC is tied to the squared loading).
5. Use critical ratios (t values) and p values from the estimates table to judge whether loadings are statistically meaningful.
6. When fit is poor, consult modification indices to identify potentially missing covariances (often between error terms), but avoid freeing inappropriate regression/measurement paths.