CBSEM using #SmartPLS4 | 8 | What Fit Indices to use for Model Fit?
Based on Research With Fawad's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
Model fit in structural equation modeling is essentially a feasibility check: whether the collected data actually aligns with the proposed theoretical model. After building a model from theory, researchers estimate it with data in SmartPLS4 (or similar SEM software), which returns overall fit statistics plus parameter estimates. Fit indices act as the main diagnostic tools—each one comes with commonly used “cut-off” thresholds for judging whether the model is acceptable, mediocre, or poor.
A key warning is that no single statistic is universally decisive. Chi-square (judged via its p-value) is widely used, but it is highly sensitive to sample size: with large samples, even small mismatches can produce a significant chi-square, making the model look worse than it is in practical terms. The transcript lists typical thresholds: for chi-square, the p-value should exceed 0.05; GFI and AGFI should exceed 0.95 and 0.90 respectively; NFI and TLI are often expected to exceed 0.95 (though some references relax this to 0.90); CFI should exceed 0.90; RMSEA and SRMR should fall below 0.08; and AVE (average variance extracted) should exceed 0.50 as a convergent-validity indicator.
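The cut-offs above can be collected into a simple checklist. The sketch below (not part of the video; the function name and example values are illustrative) encodes each rule and flags every reported index as pass or fail:

```python
def check_fit(indices):
    """Flag each reported fit index as pass (True) or fail (False),
    using the common cut-offs listed in the transcript."""
    rules = {
        "chi_square_p": lambda v: v > 0.05,  # non-significant chi-square desired
        "GFI":   lambda v: v > 0.95,
        "AGFI":  lambda v: v > 0.90,
        "NFI":   lambda v: v > 0.95,
        "TLI":   lambda v: v > 0.95,
        "CFI":   lambda v: v > 0.90,
        "RMSEA": lambda v: v < 0.08,
        "SRMR":  lambda v: v < 0.08,
        "AVE":   lambda v: v > 0.50,  # convergent validity, not overall fit
    }
    return {name: rules[name](value)
            for name, value in indices.items() if name in rules}

# Hypothetical output from an SEM run, checked against the cut-offs:
report = check_fit({"chi_square_p": 0.03, "AGFI": 0.83,
                    "GFI": 0.96, "CFI": 0.92, "SRMR": 0.05})
print(report)
```

Because the indices can disagree (as in the worked example later), the output is a per-index verdict rather than a single accept/reject flag.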
The session also makes an explicit choice about which cut-offs to apply. Rather than holding every index to the stricter criteria, the evaluation becomes more “liberal”: 0.90 is used as the benchmark for CFI and TLI, and 0.90 for both GFI and AGFI, instead of the more demanding 0.95/0.90 split seen in stricter guidance. This matters because the fit decision can change depending on which thresholds are adopted.
To demonstrate, the workflow is applied to a CBSEM-based structural equation model in SmartPLS4. The focus first lands on the measurement model (the transcript references factor/construct assessment and the presence of a covariance structure), and then the model is run to obtain fit statistics. The resulting diagnostics are compared against the chosen cut-offs.
Several indices point in the same direction: chi-square significance is below 0.05, suggesting poor fit under the chi-square rule, and AGFI is reported around 0.83, short of the 0.90 requirement. GFI is about 0.95 and clears its threshold, but that alone does not rescue the overall judgment. Other indices are mixed: NFI meets the threshold (reported as acceptable), TLI is near 0.90 (treated as borderline but appropriate), and CFI is at or above 0.90. Error-based indices are not uniformly strong: RMSEA is described as relatively high (indicating elevated approximation error), while SRMR is acceptable. Taken together, the final verdict is a “mediocre fit,” meaning the model is not clearly rejected but also not convincingly supported by the full set of fit statistics.
The practical takeaway is straightforward: SmartPLS4 fit outputs should be interpreted through a set of indices and thresholds, with special attention to sample-size sensitivity (chi-square) and the fact that different indices can disagree. The transcript’s example shows how that disagreement translates into an overall, cautious assessment rather than a binary pass/fail decision.
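The sample-size sensitivity of chi-square can be sketched numerically. In maximum-likelihood CB-SEM the test statistic is approximately T = (N - 1) * F_min, where F_min is the minimized discrepancy, so the same small misfit grows linearly with N. All values below are hypothetical, chosen only to show the effect:

```python
F_MIN = 0.12          # hypothetical minimized discrepancy value
DF = 10               # hypothetical model degrees of freedom
CRITICAL_05 = 18.307  # chi-square critical value for df = 10, alpha = 0.05

# The same discrepancy is non-significant at N = 150 but significant at N = 1000.
for n in (150, 1000):
    t = (n - 1) * F_MIN
    verdict = "significant (flags misfit)" if t > CRITICAL_05 else "non-significant"
    print(f"N = {n:4d}: chi-square = {t:7.2f} -> {verdict}")
```

This is why the session pairs chi-square with size-robust indices such as RMSEA, CFI, and SRMR rather than relying on the p-value alone.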
Cornell Notes
Fit indices in SEM measure how well the proposed theoretical model matches the observed data. SmartPLS4 produces overall model fit statistics and parameter estimates after data are entered into the specified model. Common cut-offs include chi-square p > 0.05, GFI > 0.95 (or 0.90 in a more liberal approach), AGFI > 0.90, CFI/TLI > 0.90, RMSEA < 0.08, SRMR < 0.08, and AVE > 0.50 for convergent validity. In the worked CBSEM measurement-model example, chi-square significance and AGFI fail the thresholds, RMSEA is described as high, while NFI/CFI/TLI are closer to acceptable. The combined result is judged as a mediocre fit.
- Why is chi-square (via its p-value) treated cautiously when judging model fit?
- What threshold set is used to judge fit in the session, and how does it differ from stricter guidance?
- Which fit indices are highlighted as common decision tools, and what are their typical cut-offs?
- In the SmartPLS4 example, which statistics suggest weak fit?
- Which statistics in the example support at least partial adequacy?
Review Questions
- If chi-square p-value is below 0.05 but other indices like CFI and SRMR are acceptable, how should that tension influence the final fit judgment?
- How would changing from stricter cut-offs (e.g., 0.95) to a more liberal 0.90 benchmark for CFI/TLI likely affect model acceptance decisions?
- Why is AVE > 0.50 grouped with fit evaluation, even though it targets convergent validity rather than overall model fit?
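The second review question can be made concrete with a two-line check. The CFI value here is hypothetical; the point is that the same result passes under the liberal benchmark and fails under the strict one:

```python
cfi = 0.93  # hypothetical CFI from a fitted model
for benchmark in (0.95, 0.90):  # strict vs liberal cut-off
    decision = "accept" if cfi > benchmark else "reject"
    print(f"CFI = {cfi} vs benchmark {benchmark}: {decision}")
```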
Key Points
1. Model fit in SEM is a feasibility check: whether the proposed theoretical model aligns with the observed data.
2. SmartPLS4 outputs overall fit statistics and parameter estimates after the model is specified and data are analyzed.
3. Chi-square p-values are sensitive to sample size, so they can over-flag poor fit in large samples.
4. Common fit thresholds include CFI/TLI > 0.90, RMSEA < 0.08, SRMR < 0.08, and AVE > 0.50 for convergent validity.
5. Using stricter versus liberal cut-offs (e.g., 0.95 vs 0.90 for some indices) can change the fit conclusion.
6. In the worked example, chi-square significance and AGFI fail thresholds and RMSEA is high, but NFI/CFI/TLI and SRMR are closer to acceptable, leading to a “mediocre fit” verdict.
7. Fit assessment should integrate multiple indices rather than relying on a single statistic.