
43. SPSS AMOS - How to Improve Model Fit using CB-SEM (Confirmatory Factor Analysis)

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Start by cleaning the dataset: correct inconsistent values, assess missingness, and apply imputation before running CFA.

Briefing

Improving a weak confirmatory factor analysis (CFA) model fit in AMOS hinges on a disciplined loop: clean the data, then adjust the measurement model using only defensible changes, and finally re-check fit indices after each modification. The workflow starts with data quality—screening for incorrect or inconsistent values, assessing missingness, and applying data imputation—because poor data can masquerade as model problems.
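The data-screening step happens inside AMOS/SPSS, but it can be sketched in code as well. A minimal pandas sketch, assuming made-up item names (`sl1`, `sl2`) on a 1–5 Likert scale and using simple mean imputation as a stand-in for whatever imputation method the video actually applies:

```python
import pandas as pd

# Hypothetical survey data: Likert items should lie in 1..5.
df = pd.DataFrame({
    "sl1": [4, 5, 55, 3, None],   # 55 is an inconsistent entry
    "sl2": [3, 4, 4, None, 5],
})

# 1) Flag out-of-range values as missing.
df = df.where((df >= 1) & (df <= 5))

# 2) Assess missingness per item.
missing_share = df.isna().mean()

# 3) Simple mean imputation (AMOS offers model-based imputation;
#    rounded mean imputation is just the simplest stand-in here).
df_imputed = df.fillna(df.mean().round())
```

Only after this cleaned, fully observed dataset exists does the CFA run, so any remaining misfit can be attributed to the measurement model rather than the data.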

Once the dataset is ready, the process shifts to AMOS model diagnostics. The first pass checks whether the measurement model is structurally sound, looking for low factor loadings (a practical cutoff is a standardized regression weight below 0.5) and problematic residuals or modification indices. If fit is poor (often signaled by a significant chi-square along with weak comparative and absolute fit statistics: CMIN/df above 5, GFI/AGFI below .90, CFI/TLI below .90, and RMSEA well above .08), the modeler then targets specific, localized issues rather than making broad changes.
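AMOS reports these indices directly, but the underlying formulas are standard and easy to check by hand. A sketch with illustrative numbers (not taken from the video):

```python
from math import sqrt

def fit_indices(chi2, df, chi2_base, df_base, n):
    """Common fit indices from model and baseline (independence) chi-squares.
    Standard textbook formulas; the inputs below are illustrative only."""
    cmin_df = chi2 / df                                  # want < 5 (ideally < 3)
    d = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, 0.0)
    cfi = 1.0 - d / max(d_base, d)                       # want >= .90
    tli = ((chi2_base / df_base) - (chi2 / df)) / ((chi2_base / df_base) - 1.0)
    rmsea = sqrt(d / (df * (n - 1)))                     # want <= .08
    return cmin_df, cfi, tli, rmsea

# Hypothetical poor-fitting model: every index misses its threshold.
print(fit_indices(chi2=450.0, df=80, chi2_base=2000.0, df_base=105, n=300))
```

With numbers like these (CMIN/df above 5, CFI/TLI below .90, RMSEA above .08), the refinement loop described next would kick in.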

The transcript’s core method is iterative model refinement using modification indices and residual diagnostics. Large modification indices (the example uses a threshold around 4, with the option to raise it) guide where correlated error terms might be added. In the walkthrough, adding a covariance between error terms E11 and E12 produces only limited improvement, but subsequent targeted covariances—such as between E10 and E11 (when allowed), E4 and E5, and E1 and E2 (and E1 and E5 when feasible)—gradually push fit closer to acceptable ranges. Even when fit indices improve, the transcript stresses that changes should be constrained: covariances are drawn only between error terms of indicators belonging to the same construct.
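The same-construct rule can be expressed as a small filter over the modification-index table. The construct map and most MI values below are hypothetical; only the E11–E12 index of about 74 comes from the example:

```python
# Hypothetical mapping of error terms to constructs (SL = servant
# leadership, FP = the second construct in the example).
construct_of = {
    "E1": "SL", "E2": "SL", "E4": "SL", "E5": "SL",
    "E10": "SL", "E11": "SL", "E12": "SL",
    "E20": "FP", "E21": "FP",
}

def admissible_covariances(mod_indices, threshold=4.0):
    """Keep only large MIs whose error terms share a construct,
    largest first (the order in which the video adds them)."""
    return [
        (a, b, mi)
        for (a, b), mi in sorted(mod_indices.items(), key=lambda kv: -kv[1])
        if mi > threshold and construct_of[a] == construct_of[b]
    ]

suggestions = {("E11", "E12"): 74.0, ("E11", "E20"): 30.0, ("E4", "E5"): 12.5}
print(admissible_covariances(suggestions))
# E11-E20 is dropped: its error terms belong to different constructs.
```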

After modification indices are exhausted, the next diagnostic layer is standardized residual covariances. Values exceeding about 2 flag indicators that still misbehave. In the example, the indicator sl3 within “servant leadership” is identified as problematic and removed, which improves several fit measures. A second pass checks standardized regression weights again; a low-loading indicator is removed if needed. However, removing an indicator can break the model’s reference scaling, which is why the transcript highlights a critical fix: after deleting an indicator, the remaining factor must be re-referenced by fixing a parameter to 1 (so the model has a stable metric).
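The residual screen amounts to scanning the standardized residual covariance matrix for entries beyond about |2|. A numpy sketch with a made-up matrix in which `sl3` is the offender, mirroring the example:

```python
import numpy as np

# Hypothetical standardized residual covariance matrix for 4 indicators.
items = ["sl1", "sl2", "sl3", "sl4"]
resid = np.array([
    [0.0, 0.4, 2.6, 0.1],
    [0.4, 0.0, 2.2, 0.3],
    [2.6, 2.2, 0.0, 0.5],
    [0.1, 0.3, 0.5, 0.0],
])

# Count, per indicator, how many residuals exceed |2|; the worst
# offender (here sl3) is the deletion candidate.
flags = (np.abs(resid) > 2).sum(axis=0)
worst = items[int(np.argmax(flags))]
```

After such a deletion, the factor that hosted the removed indicator must be given a reference metric again by fixing one remaining loading to 1, exactly as the transcript warns.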

The final stage compares fit indices holistically. After the last adjustments, the model fit becomes "very good" on most metrics: close to the desired threshold for chi-square/CMIN, and improved SRMR after running the standardized RMR plugin (the example reports SRMR around .03, comfortably below the common .08 cutoff and described as very good). RMSEA remains poor in the example, but with SRMR and other fit indicators acceptable, the model is treated as having a reasonable overall fit for reporting. The takeaway is that model fit improvement in AMOS is less about chasing a single statistic and more about a controlled sequence of data checks, targeted error covariances, and indicator removal guided by residuals, all while maintaining proper factor scaling.

Cornell Notes

The transcript lays out a step-by-step AMOS (CB-SEM/CFA) process for improving model fit when initial fit indices are weak. It begins with data preparation: remove inconsistent values, handle missing data via imputation, and then run the CFA with standardized estimates, modification indices, and residual moments enabled. Model refinement proceeds iteratively: check standardized regression weights (e.g., below 0.5), add only same-construct error covariances suggested by large modification indices, and then inspect standardized residual covariances for values above about 2. If a problematic indicator is removed, factor scaling must be restored by fixing a parameter to 1. The final decision weighs multiple fit measures, with SRMR (via a plugin) and other indices used to judge whether the model is reasonably acceptable even if RMSEA stays high.

Why does the workflow start with data checks before touching the AMOS model?

Because inconsistent values and missing-data handling can create apparent misfit that has nothing to do with the measurement structure. The process calls for screening for incorrect/inconsistent entries, assessing missing values, and performing data imputation before running CFA. That way, later changes—like adding error covariances or deleting indicators—address model-specific issues rather than data artifacts.

What are the first diagnostic signals of poor CFA fit in the AMOS output?

The transcript uses a set of fit thresholds to judge whether the model needs improvement: CMIN/df (the normed chi-square) should be less than 5 (it is reported as greater than 5), GFI and AGFI should be at least .90 (reported below .90), CFI and TLI should be at least .90 (reported below .90), and RMSEA should be no higher than about .08 (reported "way over .08"). A significant chi-square is also noted, though the example warns that a large sample can make chi-square significant even when the model is workable.

How does the refinement process decide what to change first: factor loadings, modification indices, or residual covariances?

It checks standardized regression weights first (anything below about 0.5 is a candidate for deletion). Then it uses modification indices to add covariances between error terms—only when both indicators belong to the same construct—starting with large values (example: E11 and E12 with an index around 74). After those changes, it inspects standardized residual covariances; indicators tied to residual covariance values above about 2 are removed if they remain problematic.

What constraint governs adding covariances based on modification indices?

Covariances are drawn only between error terms of indicators within the same construct. The transcript explicitly rejects some suggested covariances when indicators belong to different constructs (e.g., cases where one error term is tied to SL and the other to FP). This keeps the adjustments theoretically defensible rather than turning the model into a purely statistical fit exercise.

Why must the model be re-scaled after deleting an indicator?

Deleting an indicator can remove the reference point used to set the factor's metric. The transcript shows that after removing an indicator, the chi-square behaves oddly because no loading was fixed. The fix is to open the parameter settings and fix the regression weight of one remaining indicator to 1 so the factor has a reference scaling point, then rerun the model.

How does the transcript handle conflicting fit indices, especially when RMSEA stays poor?

It treats the fit decision as multi-index. After improvements, the model is described as "reasonable" because most indices improve, and SRMR is checked using a plugin (standardized RMR), with an example SRMR around .03 described as very good. Even though RMSEA remains poor, the combination of improved CMIN/other fit measures and strong SRMR supports reporting the model as acceptably fitted.
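SRMR itself is simply the root mean square of the standardized residuals, which is why a plugin can compute it from output AMOS already produces. A sketch with hypothetical sample and model-implied correlation matrices:

```python
import numpy as np

def srmr(sample_corr, implied_corr):
    """SRMR: root mean square of the unique standardized residuals
    (lower triangle including the diagonal); common cutoff is < .08."""
    p = sample_corr.shape[0]
    idx = np.tril_indices(p)
    resid = sample_corr[idx] - implied_corr[idx]
    return float(np.sqrt(np.mean(resid ** 2)))

# Made-up 3-indicator example, not the video's data.
s = np.array([[1.0, 0.52, 0.48],
              [0.52, 1.0, 0.55],
              [0.48, 0.55, 1.0]])
m = np.array([[1.0, 0.50, 0.50],
              [0.50, 1.0, 0.50],
              [0.50, 0.50, 1.0]])
print(round(srmr(s, m), 3))
```

A small SRMR like this one says the model reproduces the observed correlations closely on average, which is the cross-check the transcript leans on when RMSEA disagrees.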

Review Questions

  1. What specific AMOS outputs and thresholds are used to judge whether the initial CFA fit is unacceptable?
  2. When adding covariances from modification indices, what rule limits which error terms can be correlated?
  3. After removing an indicator, what parameter-fixing step restores factor scaling, and why is it necessary?

Key Points

  1. Start by cleaning the dataset: correct inconsistent values, assess missingness, and apply imputation before running CFA.

  2. Enable standardized estimates, squared multiple correlations, residual moments, and modification indices in AMOS to support targeted diagnostics.

  3. Use standardized regression weights as an early screen; consider deleting indicators with loadings below about 0.5 if they remain problematic.

  4. Add covariances only between error terms of indicators belonging to the same construct, guided by large modification indices (e.g., far above the default threshold of 4).

  5. Iterate: rerun the model after each set of changes and re-check fit indices (CMIN, GFI/AGFI, CFI/TLI, RMSEA).

  6. After modification-index adjustments, inspect standardized residual covariances; remove indicators tied to residual covariance values above about 2.

  7. If an indicator deletion removes the factor reference point, fix a remaining loading to 1 to re-establish scaling, then rerun and re-check fit (including SRMR via the standardized RMR plugin).

Highlights

A controlled refinement loop—data cleaning → factor loading checks → same-construct error covariances → residual covariance cleanup—drives the model toward acceptable fit.
Large modification indices (like E11–E12) can improve fit, but the transcript insists on theoretical constraints: correlate error terms only within the same construct.
Indicator deletion can break factor scaling; fixing a parameter to 1 restores the reference point and stabilizes the model.
SRMR, computed via the standardized RMR plugin, serves as a decisive cross-check when RMSEA remains unsatisfactory.

Topics

Mentioned

  • CB-SEM
  • CFA
  • AMOS
  • PO
  • RMSEA
  • SRMR
  • CFI
  • TLI
  • GFI
  • AGFI
  • CMIN