
8. SEM | SPSS AMOS - Confirmatory Factor Analysis (CFA): Measurement Model and Analyzing AMOS Output

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Build CFA in AMOS by assigning each observed indicator to its intended latent construct (servant leadership: SL1–SL7; financial performance: FP1–FP5).

Briefing

Confirmatory Factor Analysis (CFA) in IBM SPSS AMOS is built in two stages: first specify a measurement model for each latent construct, then examine AMOS output to judge whether the indicators measure the intended concepts well. In this example, servant leadership is treated as a latent variable with seven observed indicators (SL1–SL7), while financial performance is another latent variable with five indicators (FP1–FP5). Both latent variables are allowed to covary, and the model is run with standardized estimates, squared multiple correlations, modification indices, and correlation outputs enabled.

The transcript walks through the AMOS workflow: load the dataset, place the latent variables on the canvas, drag each indicator into its corresponding measurement box, rename latent constructs (SL for servant leadership and FP for financial performance), and define error terms for the observed variables. Covariances between latent variables are added using double-headed arrows, after which the analysis is configured under analysis properties and the model is executed. Once computation completes successfully, the focus shifts to interpreting the AMOS diagram and text output.

Reading the output starts with understanding what numbers correspond to. Values printed on the arrows represent standardized regression weights—i.e., factor loadings—while the value in the middle of a double-headed arrow represents the correlation estimate between latent variables. The squared multiple correlation (SMC) is described as the square of the relevant standardized loading, and it indicates how much variance in an indicator is explained by its latent factor.

The output tree is then broken down into key sections. “Notes for the model” reports chi-square and degrees of freedom, while “Notes for the group” includes sample size and whether the model is recursive. A “Variable summary” lists which elements are treated as independent (latent variables and error terms) versus dependent (observed indicators). The “Parameter summary” and especially the “Estimates” table provide unstandardized regression weights, standard errors, critical ratios (t values), and p values; standardized regression weights are treated as factor loadings. The transcript highlights a practical rule of thumb: most standardized loadings should be above about 0.70, and here two indicators (SL1 and SL7) fall short.
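The loading rule of thumb and its link to SMC can be sketched as a small check. The loading values below are hypothetical placeholders, not the actual estimates from the video's output; only the 0.70 cutoff and the SMC-as-squared-loading relationship come from the transcript:

```python
# Hypothetical standardized loadings for the servant leadership indicators.
loadings = {
    "SL1": 0.62, "SL2": 0.78, "SL3": 0.81, "SL4": 0.75,
    "SL5": 0.79, "SL6": 0.72, "SL7": 0.65,
}

THRESHOLD = 0.70  # common rule of thumb for acceptable standardized loadings

for item, lam in loadings.items():
    smc = lam ** 2  # squared multiple correlation: variance explained by the factor
    flag = "" if lam >= THRESHOLD else "  <- below threshold"
    print(f"{item}: loading={lam:.2f}, SMC={smc:.2f}{flag}")
```

With these illustrative numbers, SL1 and SL7 are flagged, mirroring the pattern the transcript reports.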

Beyond loadings, the “Estimates” table also includes covariance/correlation information, variance, and squared multiple correlations. Modification indices are flagged as a major tool when model fit is poor: they quantify how much the chi-square statistic would drop if a specific additional covariance were freed (for example, between error terms like e11 and e12). However, the transcript warns against using modification indices to add inappropriate paths in CFA—specifically, adding covariances that would improperly link indicators or regressions in ways that contradict the measurement model structure.

Finally, model fit is assessed using common covariance-based SEM criteria. The transcript lists typical acceptance thresholds: the chi-square test should be non-significant, RMSEA should be below 0.08, and fit indices such as CFI, TLI, NFI, and GFI/AGFI should exceed 0.90 (with the ratio of chi-square to degrees of freedom—CMIN/DF—typically below 3). In this run, fit statistics fall outside acceptable ranges, signaling that the indicators are not adequately representing the intended constructs. The transcript closes by noting that the next step—covered in later sessions—is using modification indices and other strategies to improve measurement quality and overall model fit.
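These cutoffs can be collected into a simple checklist. The function name and the sample fit values below are hypothetical, chosen to illustrate a poor-fitting run like the one described; only the thresholds themselves come from the transcript:

```python
def fit_acceptable(chi2_p, rmsea, cfi, tli, nfi, cmin_df):
    """Check common covariance-based SEM cutoffs against each criterion.

    chi2_p: p value of the model chi-square (want non-significant, > .05)
    rmsea: want < .08; cfi/tli/nfi: want > .90; cmin_df: want < 3.
    """
    return {
        "chi-square p > .05": chi2_p > 0.05,
        "RMSEA < .08": rmsea < 0.08,
        "CFI > .90": cfi > 0.90,
        "TLI > .90": tli > 0.90,
        "NFI > .90": nfi > 0.90,
        "CMIN/DF < 3": cmin_df < 3,
    }

# Hypothetical values for a run that fails every criterion.
results = fit_acceptable(chi2_p=0.001, rmsea=0.11, cfi=0.85,
                         tli=0.82, nfi=0.84, cmin_df=4.2)
for rule, ok in results.items():
    print(rule, "PASS" if ok else "FAIL")
```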

Cornell Notes

The CFA workflow in AMOS starts by specifying measurement models for each latent construct: servant leadership (7 indicators) and financial performance (5 indicators). After loading data and placing indicators under their latent variables, the model adds error terms and allows the two latent variables to covary, then runs the analysis with standardized estimates and modification indices. Output interpretation centers on factor loadings (standardized regression weights on arrows), latent correlation (value on double-headed arrows), and squared multiple correlations (variance explained by the factor). The transcript emphasizes that most loadings should be above ~0.70 and that poor fit is judged using chi-square, RMSEA, CFI/TLI/NFI, and CMIN/DF. When fit is weak, modification indices can suggest freeing specific error covariances, but inappropriate paths must be avoided.

How does AMOS output distinguish factor loadings, latent correlations, and explained variance?

Standardized regression weights printed on single-headed arrows are treated as factor loadings—how well each indicator represents its latent construct. The number shown in the middle of a double-headed arrow is the correlation estimate between the latent variables (here, servant leadership and financial performance). Squared multiple correlation (SMC) is described as the square of the standardized loading; it indicates how much variance in each observed indicator is explained by its latent factor.

What does the “Estimates” table provide, and which columns matter most for CFA measurement quality?

The estimates table includes unstandardized regression weights, standard errors, critical ratios (t values), and p values (with significance often marked by stars when p < .001). It also includes standardized regression weights, which in CFA are the factor loadings. The transcript notes that standard errors and critical ratios are important for reporting, and it uses a rule of thumb that standardized loadings should generally exceed about 0.70; in the example, SL1 and SL7 do not meet that threshold.
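The critical ratio AMOS reports is the unstandardized estimate divided by its standard error, and its two-tailed p value can be approximated from the standard normal distribution. The estimate and standard error below are hypothetical illustrations, not values from the video's output:

```python
import math

def critical_ratio(estimate, se):
    """Critical ratio (C.R.): unstandardized estimate divided by its standard error."""
    return estimate / se

def two_tailed_p(cr):
    """Two-tailed p value, treating the C.R. as an approximate z statistic."""
    return math.erfc(abs(cr) / math.sqrt(2))

# Hypothetical unstandardized estimate and standard error.
est, se = 0.85, 0.12
cr = critical_ratio(est, se)
p = two_tailed_p(cr)
print(f"C.R. = {cr:.2f}, p = {p:.4f}")  # |C.R.| > 1.96 corresponds to p < .05
```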

Why are modification indices useful in CFA, and what do they quantify?

Modification indices estimate how much the chi-square statistic would decrease if a particular parameter were freed—commonly a covariance between error terms. The transcript gives an example where adding a covariance between error terms (e11 and e12) produces a large chi-square reduction, implying the model fit could improve if that specific relationship is allowed. It also reports “par change,” indicating how much the parameter estimate would change.
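The arithmetic behind a modification index can be illustrated directly: freeing one parameter costs one degree of freedom and is expected to reduce chi-square by roughly the MI value. All numbers below are hypothetical, not the video's actual output:

```python
# Hypothetical baseline fit and modification index values.
baseline_chi2, baseline_df = 250.0, 64

mi_e11_e12 = 30.5   # MI for freeing the e11 <-> e12 error covariance
par_change = 0.21   # "par change": expected value of the freed covariance

# Expected fit after freeing that single covariance.
expected_chi2 = baseline_chi2 - mi_e11_e12
expected_df = baseline_df - 1
expected_cmin_df = expected_chi2 / expected_df

print(f"expected chi-square: {expected_chi2:.1f} on df={expected_df}")
print(f"expected CMIN/DF: {expected_cmin_df:.2f}")
```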

What modification-index guidance should be followed to avoid invalid CFA changes?

The transcript warns that modification indices for regression weights/paths are inappropriate for CFA measurement models. It stresses that the relationship from an unobserved construct to its indicator should not be altered by freeing paths that effectively rewire the measurement structure—for instance, linking financial performance directly to financial performance indicators in a way that violates the intended model logic.

Which fit indices are used to judge whether the measurement model is acceptable, and what thresholds are cited?

The transcript focuses on covariance-based SEM fit criteria: the chi-square test should be non-significant; RMSEA should be less than 0.08; and indices like GFI/AGFI, CFI, TLI, and NFI should be above 0.90. It also uses CMIN/DF (chi-square divided by degrees of freedom), with an acceptance guideline of less than 3. In the example run, these values are outside acceptable ranges, indicating the indicators do not adequately measure the constructs.

What does the “Variable summary” reveal about which elements are treated as independent vs dependent?

In the variable summary, latent variables (exogenous constructs like servant leadership and financial performance) and error terms are listed as independent. The observed indicators are listed as dependent/endogenous variables. This matches CFA’s structure: latent factors predict observed items, while errors capture measurement noise.

Review Questions

  1. In AMOS CFA output, how would you use standardized regression weights to decide whether an indicator is a good measure of its latent construct?
  2. What does a modification index value represent, and why might freeing an error covariance improve model fit without breaking CFA assumptions?
  3. Which combination of fit indices (chi-square, RMSEA, CFI/TLI/NFI, and CMIN/DF) would you check first when evaluating whether a CFA measurement model is acceptable?

Key Points

  1. Build CFA in AMOS by assigning each observed indicator to its intended latent construct (servant leadership: SL1–SL7; financial performance: FP1–FP5).

  2. Add error terms for observed indicators and specify the covariance between latent variables before running the model.

  3. Interpret standardized regression weights on arrows as factor loadings; the double-headed arrow value is the latent correlation estimate.

  4. Use squared multiple correlations to understand how much variance each indicator shares with its latent factor (SMC is tied to the squared loading).

  5. Use critical ratios (t values) and p values from the estimates table to judge whether loadings are statistically meaningful.

  6. When fit is poor, consult modification indices to identify potentially missing covariances (often between error terms), but avoid freeing inappropriate regression/measurement paths.

Highlights

Factor loadings in AMOS CFA are the standardized regression weights on the arrows; the transcript treats them as the key evidence that indicators represent their latent constructs.
Modification indices quantify how much chi-square would drop if a specific covariance were freed—illustrated with a large suggested error covariance between e11 and e12.
The example run fails common fit thresholds (RMSEA, CFI/TLI/NFI, and CMIN/DF), signaling that the indicators are not adequately measuring servant leadership and financial performance.