
How to Establish Discriminant Validity by using Cross Loadings in SmartPLS

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Discriminant validity is assessed by checking whether each indicator loads highest on its own parent construct in the cross-loading matrix.

Briefing

Discriminant validity in SmartPLS can be checked directly through cross-loadings: each indicator should load highest on its own construct, and its loading should drop noticeably when it is cross-loaded on the other constructs. The practical test is to compare an indicator’s loading on its parent construct with its loadings on the competing constructs. If the indicator loads clearly stronger on its own construct, discriminant validity holds; if not, the indicator is a candidate for removal.

Using an example table, the indicator ASR1 is shown with a loading of 0.869 under the Assurance construct (ASR). When ASR1 is instead evaluated against the other constructs, O and OP, its loadings fall substantially. The same pattern appears for the other ASR indicators: each one loads higher on Assurance than on the alternative constructs. This consistent “highest on parent construct” behavior indicates no discriminant validity problems for the Assurance (ASR) measurement.

The same logic is applied to the O construct. For instance, O1 loads at 0.667 on its own parent construct O, but its loading drops when O1 is evaluated with ASR (0.416) or with OP (0.376). The remaining O indicators show the same trend: they load higher on O than on the other constructs. For OP, OP1 loads at 0.804 on its own parent construct, and its loading decreases when moved to O or ASR. Across ASR, O, and OP, the indicators behave as expected—stronger loadings on their own construct than on the others—so discriminant validity is supported.
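The “loads highest on its own construct” check above can be sketched as a small script. The ASR1, O1, and OP1 values are taken from the example; the remaining cells and the ASR2 row are illustrative placeholders, not numbers from the video:

```python
import pandas as pd

# Hypothetical cross-loading matrix. ASR1, O1, and OP1 rows use the loadings
# discussed in the example; ASR2 and the off-diagonal gaps are placeholders.
cross_loadings = pd.DataFrame(
    {
        "ASR": [0.869, 0.812, 0.416, 0.390],
        "O":   [0.455, 0.430, 0.667, 0.610],
        "OP":  [0.402, 0.398, 0.376, 0.804],
    },
    index=["ASR1", "ASR2", "O1", "OP1"],
)

# Each indicator's parent construct (here inferred from the indicator name).
parent = {"ASR1": "ASR", "ASR2": "ASR", "O1": "O", "OP1": "OP"}

for indicator, row in cross_loadings.iterrows():
    strongest = row.idxmax()  # construct on which this indicator loads highest
    status = "OK" if strongest == parent[indicator] else "PROBLEM"
    print(f"{indicator}: highest loading on {strongest} -> {status}")
```

With this matrix every indicator peaks on its parent construct, mirroring the “no discriminant validity problems” conclusion in the example.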

When discriminant validity is suspected to be weak, the workflow shifts to cross-loadings. If an indicator’s loading on a competing construct is close to (or exceeds) its loading on its parent construct, that indicator may need to be deleted. The transcript gives a hypothetical scenario where the correlation between ASR and O exceeds the Fornell–Larcker threshold, the square root of the AVE for ASR. In that case, cross-loadings are consulted: if ASR1 shows a loading of 0.910 on the “wrong” construct while its loading on its own construct is 0.869, the difference is small (less than 0.10). The guidance is to remove the indicator when the gap is under 0.10, because the item is not cleanly measuring only its intended construct.

Even when the difference is slightly above 0.10 (e.g., 0.11 or 0.12), caution is advised: small gaps can still signal overlapping measurement. The overall takeaway is straightforward: discriminant validity is established when each indicator’s cross-loading pattern clearly favors its parent construct, and it is repaired by deleting indicators that load too similarly across constructs.

Cornell Notes

Cross-loadings in SmartPLS are used to verify discriminant validity by checking whether each indicator loads highest on its own construct. In the example, ASR indicators (like ASR1) have strong loadings on Assurance (e.g., 0.869) and drop when evaluated with other constructs (O and OP). O indicators show the same pattern: O1 loads at 0.667 on O but falls to 0.416 with ASR and 0.376 with OP. OP indicators also load best on OP (e.g., OP1 at 0.804) and decrease when moved to other constructs. If discriminant validity looks problematic (e.g., high construct correlation), cross-loadings identify indicators that load on the wrong construct; those indicators are removed when the loading difference is small (under 0.10), with extra caution even when slightly above.

What does a “good” cross-loading pattern look like for discriminant validity in SmartPLS?

Each indicator should load highest on its own parent construct. For example, ASR1 loads at 0.869 under Assurance (ASR), but its loading decreases when ASR1 is assessed with O and with OP. The same “highest on parent, lower on others” pattern appears for O indicators (e.g., O1 at 0.667 on O vs. 0.416 with ASR and 0.376 with OP) and for OP indicators (e.g., OP1 at 0.804 on OP vs. lower values when paired with O or ASR).

How can cross-loadings confirm discriminant validity when construct correlations look fine?

When cross-loadings show that every indicator’s loading is strongest on its own construct, discriminant validity is supported even without further intervention. In the example, all ASR indicators load best on ASR, all O indicators load best on O, and all OP indicators load best on OP, indicating no discriminant validity issues across the measurement model.

What should trigger a closer look at cross-loadings?

A suspected discriminant validity problem, such as a construct correlation that is high relative to a threshold, should prompt cross-loading checks. The transcript describes a hypothetical case where the correlation between ASR and O exceeds the Fornell–Larcker threshold, the square root of the AVE for ASR. That kind of signal leads directly to inspecting whether specific indicators load more strongly on the competing construct.
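The Fornell–Larcker trigger can be sketched in a few lines. The AVE and correlation values here are assumed for illustration; the transcript does not give specific numbers for this check:

```python
import math

ave_asr = 0.70       # assumed AVE for the ASR construct (illustrative)
corr_asr_o = 0.88    # assumed correlation between ASR and O (illustrative)

# Fornell-Larcker: the correlation between two constructs should not exceed
# the square root of either construct's AVE.
threshold = math.sqrt(ave_asr)

if corr_asr_o > threshold:
    print("Possible discriminant validity issue: inspect cross-loadings")
else:
    print("Fornell-Larcker criterion satisfied")
```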

How does the “delete the indicator” rule work when cross-loadings show overlap?

If an indicator loads more strongly on a competing construct than on its parent construct, and the difference is small, the indicator becomes a candidate for deletion. In the hypothetical ASR vs. O case, ASR1 loads at 0.910 on the wrong construct versus 0.869 on its own construct; the gap is less than 0.10. The guidance is to delete that indicator to restore discriminant validity. The same logic is applied to other indicators as well.

Why is caution needed even when the loading difference is slightly above 0.10?

The transcript warns that even if the difference is 0.11 or 0.12, the overlap may still be practically meaningful. So while the explicit rule is “difference less than 0.10,” small gaps above that threshold can still indicate discriminant validity weakness and warrant careful judgment.

Review Questions

  1. For a given indicator, what comparison should be made between its parent-construct loading and its cross-loadings to judge discriminant validity?
  2. In the hypothetical ASR vs. O scenario, what cross-loading pattern would indicate that ASR1 should be removed?
  3. How would you proceed if construct correlations suggest discriminant validity issues but cross-loadings still show clear “highest on parent” loadings?

Key Points

  1. Discriminant validity is assessed by checking whether each indicator loads highest on its own parent construct in the cross-loading matrix.

  2. A strong discriminant validity pattern shows large drops in an indicator’s loading when it is evaluated against other constructs.

  3. In the example, ASR indicators (e.g., ASR1 at 0.869) decrease when paired with O or OP, supporting discriminant validity for ASR.

  4. O indicators (e.g., O1 at 0.667) load best on O and drop when evaluated with ASR or OP, supporting discriminant validity for O.

  5. OP indicators (e.g., OP1 at 0.804) load best on OP and decrease when evaluated with ASR or O, supporting discriminant validity for OP.

  6. When construct correlation signals potential problems, cross-loadings identify indicators that load too similarly across constructs.

  7. Indicators are typically removed when the loading difference between the wrong and correct constructs is under 0.10, with extra caution even when slightly above (e.g., 0.11–0.12).

Highlights

Cross-loadings provide a direct discriminant validity check: each indicator should peak on its own construct and fall on others.
ASR1 illustrates the rule: 0.869 on Assurance (ASR) versus lower cross-loadings on O and OP.
O1 shows the same pattern: 0.667 on O, but 0.416 with ASR and 0.376 with OP.
A practical repair step is deletion: if an indicator’s cross-loading on the wrong construct is close to its parent loading (gap < 0.10), remove it.
Even small gaps slightly above 0.10 (0.11–0.12) can still reflect overlapping measurement, so judgment matters.
