
#SmartPLS4 Series 12 - How to Interpret Measurement Model Output with Multiple LOCs?

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Outer loadings should be reviewed first, but indicators should not be deleted solely because loadings are below 0.70.

Briefing

Interpreting a SmartPLS measurement model with higher-order constructs comes down to a disciplined checklist: verify outer loadings first, then confirm reliability and convergent validity, followed by discriminant validity checks. In this walkthrough, the model’s outer loadings are reviewed in the final results, with a key decision rule: items are not removed just because their loading falls below 0.70. Instead, deletion is reserved for cases where reliability and validity—especially convergent validity—are poor. When the construct “OC” shows acceptable reliability and validity, the guidance is clear: keeping the item is preferable because removing it would not improve the measurement quality.

After the outer-loadings review, the workflow shifts to reliability and validity metrics. Reliability is assessed using alpha values (reported as above 0.7 across constructs), and convergent validity is supported when AVE values fall within acceptable bounds and exceed the commonly used 0.50 threshold. The logic is that indicators should converge on the same underlying latent variable, producing internally consistent measurement.
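These two checks can be illustrated outside SmartPLS. The sketch below (plain NumPy, with simulated item scores and made-up standardized loadings, not the video's data) computes Cronbach's alpha from raw item scores and AVE as the mean of squared outer loadings:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def ave(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized outer loadings."""
    return float(np.mean(loadings ** 2))

# Simulated responses: 4 items driven by one common factor plus noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
items = base + 0.5 * rng.normal(size=(200, 4))

print(f"alpha = {cronbach_alpha(items):.3f}")  # exceeds 0.7 for this data
print(f"AVE   = {ave(np.array([0.82, 0.75, 0.68, 0.79])):.3f}")  # 0.580 > 0.50
```

Note that a loading of 0.68 (below 0.70) still leaves AVE above 0.50 here, which is exactly the situation where the transcript advises keeping the item.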

Discriminant validity then gets attention through multiple lenses. The heterotrait–monotrait ratio (HTMT) is checked, with results turning green overall; one value is slightly above 0.85 but still remains below 0.90, and the ratio remains higher for its own construct than for other constructs within the higher-order structure. Another discriminant validity test uses the square root of AVE: for each construct, the square root of AVE is higher than the correlations with other latent variables, indicating within-construct variance exceeds shared variance. Cross-loadings are also inspected directly: indicators such as ASR1 and EMP1 load more strongly on their intended parent constructs than on competing constructs.
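The square-root-of-AVE comparison (the Fornell-Larcker criterion) can be sketched directly. The AVE values and latent-variable correlations below are illustrative, not the video's output:

```python
import numpy as np

ave_values = {"OC": 0.62, "EMP": 0.58, "ASR": 0.66}  # made-up AVEs
corr = np.array([[1.00, 0.41, 0.37],                  # made-up latent
                 [0.41, 1.00, 0.55],                  # variable correlations
                 [0.37, 0.55, 1.00]])
names = list(ave_values)

def fornell_larcker_ok(ave_values, corr, names):
    """Each construct's sqrt(AVE) must exceed all its correlations
    with the other constructs."""
    for i, name in enumerate(names):
        sqrt_ave = ave_values[name] ** 0.5
        others = np.delete(np.abs(corr[i]), i)
        if sqrt_ave <= others.max():
            return False
    return True

print(fornell_larcker_ok(ave_values, corr, names))  # True for these numbers
```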

A recurring nuance is that higher-order constructs can make some cross-loadings look “high” without breaking discriminant validity. In the example, empathy and assurance are dimensions under the same higher-order internal service quality construct. That structural relationship explains why an indicator like EMP1 can show a strong loading on both its own dimension and another dimension—yet discriminant validity still holds because the constructs are nested within the same higher-order framework.

Finally, the transcript notes reporting considerations and presentation mechanics. The results can be displayed in list or matrix formats depending on whether the goal is paper reporting or on-screen review. There’s also a practical step for organizing constructs in the output: renaming constructs with ordered prefixes (e.g., “01.”, “02.”) so independent variables, mediators, and dependent variables appear in a preferred sequence. The session closes by summarizing the full measurement-model checklist: outer loadings, reliability and convergent validity, discriminant validity (HTMT, square-root of AVE, and cross-loadings), and optional multicollinearity reporting via VIF (noting that VIF values should be below 5). The next session is teased as a deeper dive into how to report the measurement model in detail.
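The optional VIF check at the end of the checklist follows the standard formula VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors. A minimal sketch on simulated data (not the video's model):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """X: (n, p) matrix of predictor scores (one column per predictor)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])      # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - (y - A @ beta).var() / y.var()
        out[j] = 1 / (1 - r2)
    return out

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
X[:, 2] = 0.4 * X[:, 0] + rng.normal(size=300)  # mild collinearity
print(np.round(vif(X), 2))  # all values comfortably below the 5 cutoff
```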

Cornell Notes

The measurement-model interpretation workflow for SmartPLS with higher-order constructs follows a clear order: check outer loadings, then confirm reliability and convergent validity, and finally verify discriminant validity. Items should not be deleted solely because an outer loading is below 0.70; deletion is justified only when reliability and validity are poor. Reliability is supported when alpha values exceed 0.7 and convergent validity is supported when AVE values exceed 0.50. Discriminant validity is established when HTMT ratios are below 0.90, the square root of AVE is higher than inter-construct correlations, and cross-loadings show stronger loading on the intended construct. Cross-loadings can still be high when dimensions belong to the same higher-order construct, without undermining discriminant validity.

Why shouldn’t an indicator be removed just because its outer loading is below 0.70?

The transcript emphasizes a decision rule: do not delete items purely for having a loading under 0.70. The real test is whether reliability and validity improve. If reliability and convergent validity are already acceptable (as illustrated with the “OC” construct), removing the indicator would not meaningfully improve the measurement model. Deletion is reserved for cases where convergent validity is poor—i.e., when reliability/validity metrics indicate the indicator is harming measurement quality.

What metrics confirm reliability and convergent validity in this measurement-model check?

Reliability is assessed using alpha values, with the walkthrough noting values above 0.7 across constructs. Convergent validity is supported when AVE values exceed 0.50 (the transcript describes the AVE falling between two reference bounds and being greater than 0.50). Together, these indicate indicators converge on the same latent variable and the scale is internally consistent.

How is discriminant validity verified beyond just looking at loadings?

Discriminant validity is checked using multiple outputs: (1) HTMT ratios (all green overall, with a noted value slightly above 0.85 but still below 0.90), (2) the square root of AVE being higher than correlations with other latent variables (within-construct variance greater than shared variance), and (3) cross-loadings where indicators load more strongly on their own parent construct than on others. The transcript also notes that cross-loadings can show secondary loadings without violating discriminant validity when constructs are structurally related.
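For intuition about what the HTMT output reports, here is a minimal sketch (not the SmartPLS implementation): the mean correlation between items of two different constructs, divided by the geometric mean of the average within-construct item correlations. The item data is simulated for illustration:

```python
import numpy as np

def htmt(items_a: np.ndarray, items_b: np.ndarray) -> float:
    ka, kb = items_a.shape[1], items_b.shape[1]
    full = np.corrcoef(np.hstack([items_a, items_b]), rowvar=False)
    hetero = full[:ka, ka:].mean()                          # between constructs
    mono_a = full[:ka, :ka][np.triu_indices(ka, 1)].mean()  # within A
    mono_b = full[ka:, ka:][np.triu_indices(kb, 1)].mean()  # within B
    return hetero / np.sqrt(mono_a * mono_b)

# Two moderately related constructs, three items each.
rng = np.random.default_rng(1)
fa = rng.normal(size=(500, 1))
fb = 0.5 * fa + rng.normal(size=(500, 1))
a = fa + 0.6 * rng.normal(size=(500, 3))
b = fb + 0.6 * rng.normal(size=(500, 3))

print(f"HTMT = {htmt(a, b):.3f}")  # well below the 0.90 cutoff here
```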

How do higher-order constructs affect interpretation of cross-loadings?

Because dimensions can be subdimensions of the same higher-order construct, indicators may load strongly on multiple related dimensions. The transcript’s example is empathy and assurance as dimensions of internal service quality: EMP1 can load well on its own dimension and also show a high loading on Assurance. Even so, discriminant validity is not treated as a problem because the higher-order structure explains the shared variance.

What practical steps help present measurement-model results clearly in a paper?

The transcript recommends choosing a reporting format (matrix versus list) and using SmartPLS tools to rearrange output. It also provides a method to order constructs in the report by renaming them with numeric prefixes (e.g., “01.” for independent variables, “02.” for mediators, “04.” for dependent variables). This makes the final results easier to read and align with the paper’s conceptual flow.
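The renaming trick works because SmartPLS lists constructs alphabetically, and zero-padded prefixes make alphabetical order match the intended numeric order. A tiny sketch with hypothetical construct names:

```python
# Zero-padded prefixes ("01.", "02.", ...) sort alphabetically into the
# intended sequence; without the leading zero, "10." would sort before "2.".
constructs = ["02. Internal Service Quality", "04. Organizational Commitment",
              "01. Leadership Style", "03. Job Satisfaction"]
print(sorted(constructs))
```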

Review Questions

  1. When is it appropriate to delete an indicator with an outer loading below 0.70?
  2. Which three discriminant-validity checks are mentioned, and what does each one compare?
  3. Why can an indicator show a high cross-loading in a higher-order construct model without necessarily failing discriminant validity?

Key Points

  1. Outer loadings should be reviewed first, but indicators should not be deleted solely because loadings are below 0.70.

  2. Deletion decisions should depend on whether reliability and validity (especially convergent validity) are poor, not on loading thresholds alone.

  3. Reliability is supported when alpha values are above 0.7 across constructs.

  4. Convergent validity is supported when AVE values exceed 0.50, indicating indicators converge on the same latent variable.

  5. Discriminant validity is supported when HTMT ratios are below 0.90 and the square root of AVE exceeds inter-construct correlations.

  6. Cross-loadings should show stronger loadings on the intended parent construct, but higher-order nesting can explain secondary loadings.

  7. VIF can be reported for multicollinearity checks, with the transcript noting VIF values should be below 5 if included.

Highlights

  • Items aren’t removed just because an outer loading dips under 0.70; acceptable reliability and validity mean there’s no measurement benefit to deletion.
  • Discriminant validity is confirmed through HTMT (below 0.90), square-root-of-AVE dominance over correlations, and cross-loading patterns.
  • High cross-loadings can be expected when dimensions belong to the same higher-order construct, such as empathy and assurance under internal service quality.
  • Construct ordering in reports can be controlled by renaming with numeric prefixes (e.g., “01.”, “02.”) so independent variables, mediators, and dependent variables appear in the desired sequence.

Mentioned

  • OC
  • AVE
  • HTMT
  • VIF