#SmartPLS4 Series 12 - How to Interpret Measurement Model Output with Multiple LOCs?
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Interpreting a SmartPLS measurement model with higher-order constructs comes down to a disciplined checklist: verify outer loadings first, then confirm reliability and convergent validity, followed by discriminant validity checks. In this walkthrough, the model’s outer loadings are reviewed in the final results, with a key decision rule: items are not removed just because their loading falls below 0.70. Instead, deletion is reserved for cases where reliability and validity—especially convergent validity—are poor. When the construct “OC” shows acceptable reliability and validity, the guidance is clear: keeping the item is preferable because removing it would not improve the measurement quality.
After the outer-loadings review, the workflow shifts to reliability and validity metrics. Reliability is assessed using Cronbach's alpha (reported as above 0.7 across constructs), and convergent validity is supported when AVE values exceed the commonly used 0.50 threshold. The logic is that indicators of the same underlying latent variable should converge, producing consistent internal measurement.
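As a quick illustration of the convergent-validity check, AVE can be computed directly from standardized outer loadings. The loadings below are hypothetical, not figures from the video:

```python
def average_variance_extracted(loadings):
    """AVE is the mean of the squared standardized outer loadings,
    i.e. the average share of indicator variance the construct explains."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for an "OC"-style construct
oc_loadings = [0.72, 0.81, 0.68, 0.77]
ave = average_variance_extracted(oc_loadings)
print(round(ave, 3), ave > 0.50)  # clears the 0.50 threshold
```

Note that one loading (0.68) sits below 0.70, yet the construct's AVE still exceeds 0.50, which is exactly why the item would be retained under the decision rule above.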
Discriminant validity then gets attention through multiple lenses. The heterotrait–monotrait ratio (HTMT) is checked, with results turning green overall; one value is slightly above 0.85 but still remains below 0.90, and the ratio remains higher for its own construct than for other constructs within the higher-order structure. Another discriminant validity test uses the square root of AVE: for each construct, the square root of AVE is higher than the correlations with other latent variables, indicating within-construct variance exceeds shared variance. Cross-loadings are also inspected directly: indicators such as ASR1 and EMP1 load more strongly on their intended parent constructs than on competing constructs.
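Two of these discriminant-validity lenses can be sketched with small helper functions. All numbers below are hypothetical placeholders, not results from the video:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def htmt(between, within_a, within_b):
    """HTMT: average heterotrait (between-construct) item correlation over the
    geometric mean of the average monotrait (within-construct) correlations."""
    return mean(between) / math.sqrt(mean(within_a) * mean(within_b))

def fornell_larcker_holds(ave, correlations):
    """Fornell-Larcker: each construct's sqrt(AVE) must exceed its
    correlations with every other construct."""
    return all(r < math.sqrt(ave[a]) and r < math.sqrt(ave[b])
               for (a, b), r in correlations.items())

# Hypothetical item correlations for two dimensions (e.g. EMP vs ASR)
ratio = htmt(between=[0.50, 0.55, 0.45, 0.60],
             within_a=[0.70, 0.75, 0.72],
             within_b=[0.68, 0.74, 0.71])
print(round(ratio, 2), ratio < 0.90)  # below the 0.90 cutoff

# Hypothetical AVEs and inter-construct correlations
ave = {"OC": 0.56, "EMP": 0.61, "ASR": 0.58}
corr = {("OC", "EMP"): 0.44, ("OC", "ASR"): 0.39, ("EMP", "ASR"): 0.63}
print(fornell_larcker_holds(ave, corr))  # every sqrt(AVE) exceeds the correlations
```

In practice SmartPLS reports these tables directly; the sketch is only meant to show what each criterion compares.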
A recurring nuance is that higher-order constructs can make some cross-loadings look “high” without breaking discriminant validity. In the example, empathy and assurance are dimensions under the same higher-order internal service quality construct. That structural relationship explains why an indicator like EMP1 can show a strong loading on both its own dimension and another dimension—yet discriminant validity still holds because the constructs are nested within the same higher-order framework.
Finally, the transcript notes reporting considerations and presentation mechanics. The results can be displayed in list or matrix formats depending on whether the goal is paper reporting or on-screen review. There’s also a practical step for organizing constructs in the output: renaming constructs with ordered prefixes (e.g., “01.”, “02.”) so independent variables, mediators, and dependent variables appear in a preferred sequence. The session closes by summarizing the full measurement-model checklist: outer loadings, reliability and convergent validity, discriminant validity (HTMT, square-root of AVE, and cross-loadings), and optional multicollinearity reporting via VIF (noting that VIF values should be below 5). The next session is teased as a deeper dive into how to report the measurement model in detail.
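The VIF cutoff mentioned at the close maps to a simple formula; the R² values below are illustrative, not taken from the model:

```python
def vif_from_r_squared(r_squared):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    predictor j on the remaining predictors."""
    return 1.0 / (1.0 - r_squared)

print(vif_from_r_squared(0.75))            # 4.0  -> below the 5 cutoff
print(round(vif_from_r_squared(0.85), 2))  # 6.67 -> would flag multicollinearity
```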
Cornell Notes
The measurement-model interpretation workflow for SmartPLS with higher-order constructs follows a clear order: check outer loadings, then confirm reliability and convergent validity, and finally verify discriminant validity. Items should not be deleted solely because an outer loading is below 0.70; deletion is justified only when reliability and validity are poor. Reliability is supported when alpha values exceed 0.7 and convergent validity is supported when AVE values exceed 0.50. Discriminant validity is established when HTMT ratios are below 0.90, the square root of AVE is higher than inter-construct correlations, and cross-loadings show stronger loading on the intended construct. Cross-loadings can still be high when dimensions belong to the same higher-order construct, without undermining discriminant validity.
- Why shouldn’t an indicator be removed just because its outer loading is below 0.70?
- What metrics confirm reliability and convergent validity in this measurement-model check?
- How is discriminant validity verified beyond just looking at loadings?
- How do higher-order constructs affect interpretation of cross-loadings?
- What practical steps help present measurement-model results clearly in a paper?
Review Questions
- When is it appropriate to delete an indicator with an outer loading below 0.70?
- Which three discriminant-validity checks are mentioned, and what does each one compare?
- Why can an indicator show a high cross-loading in a higher-order construct model without necessarily failing discriminant validity?
Key Points
1. Outer loadings should be reviewed first, but indicators should not be deleted solely because loadings are below 0.70.
2. Deletion decisions should depend on whether reliability and validity (especially convergent validity) are poor, not on loading thresholds alone.
3. Reliability is supported when alpha values are above 0.7 across constructs.
4. Convergent validity is supported when AVE values exceed 0.50, indicating indicators converge on the same latent variable.
5. Discriminant validity is supported when HTMT ratios are below 0.90 and the square root of AVE exceeds inter-construct correlations.
6. Cross-loadings should show stronger loadings on the intended parent construct, but higher-order nesting can explain secondary loadings.
7. VIF can be reported for multicollinearity checks, with the transcript noting VIF values should be below 5 if included.