What is Discriminant Validity? How to Check Discriminant Validity with Different Methods in SmartPLS
Based on Research With Fawad's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Discriminant validity ensures each latent construct is statistically distinct from every other construct in the measurement model.
Briefing
Discriminant validity is the statistical check that each latent construct in a questionnaire-based study is truly distinct from the others—so “O” is not just measuring the same underlying thing as “OP,” and so on. That distinctiveness matters because constructs in social science models often overlap conceptually, and without discriminant validity tests, the measurement model can end up treating different constructs as if they were interchangeable.
A common starting point is the Fornell–Larcker criterion, which compares each construct’s shared variance with its correlations to other constructs. The method uses the square root of AVE (average variance extracted) for a construct and requires that this value be larger than the construct’s correlations with all other constructs. In the example, the square root of AVE for construct O is 0.793, and the correlation between O and OP is 0.633—so 0.793 exceeds 0.633, supporting discriminant validity for O. The same logic is applied to OP: its square root of AVE is 0.861, and its correlations with other constructs (like O at 0.633 and the additional construct ASR at 0.575) remain lower than 0.861. When the square root of AVE for each construct sits above its cross-construct correlations, the model indicates that within-construct variance dominates shared variance, which is exactly what discriminant validity is meant to capture.
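The pairwise rule above can be sketched in a few lines of Python. Only the values quoted in the transcript (√AVE of 0.793 for O and 0.861 for OP, and the correlations 0.633 for O–OP and 0.575 for OP–ASR) come from the source; the remaining entries, and the helper name `fornell_larcker`, are illustrative placeholders.

```python
import numpy as np

# Illustrative Fornell-Larcker check. Only sqrt(AVE) = 0.793 (O),
# 0.861 (OP) and the correlations 0.633 (O-OP), 0.575 (OP-ASR)
# are quoted in the source; the other numbers are placeholders.
constructs = ["O", "OP", "ASR"]
sqrt_ave = {"O": 0.793, "OP": 0.861, "ASR": 0.800}
corr = np.array([
    [1.000, 0.633, 0.500],   # O
    [0.633, 1.000, 0.575],   # OP
    [0.500, 0.575, 1.000],   # ASR
])

def fornell_larcker(constructs, sqrt_ave, corr):
    """Each construct's sqrt(AVE) must exceed its correlation with
    every other construct in the model."""
    passed = {}
    for i, c in enumerate(constructs):
        # Drop the diagonal (a construct's correlation with itself is 1.0)
        others = np.delete(np.abs(corr[i]), i)
        passed[c] = sqrt_ave[c] > others.max()
    return passed

print(fornell_larcker(constructs, sqrt_ave, corr))
# -> {'O': True, 'OP': True, 'ASR': True}
```

With these numbers every construct passes, matching the transcript's conclusion; a `False` entry would point to the construct whose shared variance is too high.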
The transcript then adds a third construct (ASR) to show how the comparisons scale. For each construct, the square root of AVE must exceed correlations with every other construct in the model. An Excel-style table is used to make the rule concrete: the square root of AVE for ASR is higher than its correlations with O and OP, and the square root of AVE for O and OP is likewise higher than their respective correlations with the remaining constructs. When all these pairwise checks hold, discriminant validity is considered established.
Two additional methods are presented. The first is the HTMT (heterotrait–monotrait) ratio, described as the more widely used criterion in current journals. HTMT compares indicator correlations across constructs (heterotrait) with indicator correlations within the same construct (monotrait). The decision rule is straightforward: HTMT values should be below 0.85, or below a more liberal threshold of 0.90. In the example, the HTMT ratio between ASR and O falls within the acceptable range, so discriminant validity passes.
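A minimal HTMT computation can be sketched as follows, using the standard definition (mean heterotrait correlation divided by the geometric mean of the mean monotrait correlations). The indicator correlation matrix here is entirely made up for illustration; the transcript only reports that the ASR–O ratio falls below the threshold.

```python
import numpy as np

def htmt(R, block_i, block_j):
    """HTMT: mean absolute correlation between indicators of two different
    constructs (heterotrait), divided by the geometric mean of the mean
    absolute correlations among each construct's own indicators (monotrait)."""
    R = np.abs(np.asarray(R))
    hetero = R[np.ix_(block_i, block_j)].mean()

    def monotrait(block):
        sub = R[np.ix_(block, block)]
        off_diag = sub[~np.eye(len(block), dtype=bool)]
        return off_diag.mean()

    return hetero / np.sqrt(monotrait(block_i) * monotrait(block_j))

# Made-up indicator correlations for ASR1, ASR2, O1, O2
R = [[1.00, 0.70, 0.40, 0.38],
     [0.70, 1.00, 0.42, 0.41],
     [0.40, 0.42, 1.00, 0.65],
     [0.38, 0.41, 0.65, 1.00]]

ratio = htmt(R, block_i=[0, 1], block_j=[2, 3])
print(round(ratio, 3), ratio < 0.85)  # below the 0.85 cutoff, so the pair passes
```

SmartPLS reports this matrix directly; the point of the sketch is that the ratio shrinks as within-construct correlations dominate cross-construct ones.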
The second method is cross loadings. Here, each indicator should load highest on its own parent construct rather than on the other constructs. The transcript walks through indicator-level loadings: for ASR1, the loading is strongest under ASR (0.869) and drops when placed under O or OP, and the same pattern holds for the other ASR indicators. The O indicators load higher on O than on ASR or OP, and the OP indicators load highest on OP (e.g., 0.804). If discriminant validity is failing, for example when an indicator's loading on the wrong construct becomes comparable to its loading on its own construct, the indicator should be removed. The guideline given is to flag cases where the difference between the highest and second-highest loadings is less than 0.10; even when the difference is slightly above 0.10 (say 0.11 or 0.12), the transcript advises caution because the separation may still be weak.
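The indicator-level rule can be sketched as a small check over a loadings table. Only ASR1's loading of 0.869 and the OP loading of 0.804 are quoted in the transcript; the remaining values, and the helper name `flag_weak_indicators`, are illustrative.

```python
# Cross-loading check: each indicator should load highest on its own
# parent construct, with at least ~0.10 separation from the next-highest
# loading. Only 0.869 (ASR1 on ASR) and 0.804 (OP1 on OP) are quoted in
# the source; the other loadings are placeholders.
loadings = {
    "ASR1": {"ASR": 0.869, "O": 0.512, "OP": 0.498},
    "O1":   {"ASR": 0.470, "O": 0.810, "OP": 0.530},
    "OP1":  {"ASR": 0.455, "O": 0.560, "OP": 0.804},
}
parent = {"ASR1": "ASR", "O1": "O", "OP1": "OP"}

def flag_weak_indicators(loadings, parent, min_gap=0.10):
    """Flag indicators whose top loading is not on their parent construct,
    or whose gap to the second-highest loading is below min_gap."""
    flagged = []
    for ind, row in loadings.items():
        ranked = sorted(row, key=row.get, reverse=True)  # constructs by loading, descending
        gap = row[ranked[0]] - row[ranked[1]]
        if ranked[0] != parent[ind] or gap < min_gap:
            flagged.append(ind)
    return flagged

print(flag_weak_indicators(loadings, parent))  # empty list: all indicators separate cleanly
```

Raising a cross loading so that the gap falls under 0.10 (e.g., ASR1 loading 0.80 on O against 0.869 on ASR) would flag that indicator for possible deletion, matching the transcript's rule.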
Cornell Notes
Discriminant validity checks whether each latent construct in a SmartPLS measurement model is empirically distinct from the others. Using the Fornell–Larcker criterion, the square root of AVE for a construct must be higher than that construct’s correlations with every other construct—evidence that within-construct variance exceeds shared variance. The transcript then shows how the same logic extends when a third construct (ASR) is added, with pairwise comparisons in an AVE/correlation matrix. It also introduces HTMT (heterotrait–monotrait ratio), where values below 0.85 (or 0.90 more liberally) indicate discriminant validity. Finally, cross loading requires each indicator to load highest on its own parent construct; if an indicator loads almost as strongly on another construct (difference under ~0.10), it may need deletion.
How does the Fornell–Larcker criterion establish discriminant validity in SmartPLS?
What does “within-construct variance exceeds shared variance” mean in practice for discriminant validity?
Why does adding a third construct (ASR) change the discriminant validity checks?
What decision rule is used for HTMT, and what does a “green” result imply?
How do cross loadings reveal discriminant validity problems at the indicator level?
Review Questions
- In the Fornell–Larcker approach, what exact inequality must hold between √AVE and correlations for discriminant validity?
- What HTMT threshold values are used to judge discriminant validity, and what does HTMT measure conceptually?
- When would an indicator be removed based on cross loadings, and what loading difference rule is suggested?
Key Points
1. Discriminant validity ensures each latent construct is statistically distinct from every other construct in the measurement model.
2. Under Fornell–Larcker, the square root of AVE for a construct must exceed that construct’s correlations with all other constructs.
3. Adding more constructs increases the number of pairwise √AVE-versus-correlation comparisons required to confirm discriminant validity.
4. HTMT (heterotrait–monotrait ratio) is widely preferred; values below 0.85 (or 0.90 more liberally) indicate discriminant validity.
5. Cross loadings require each indicator to load highest on its own parent construct; near-equal loadings on another construct signal a problem.
6. A practical deletion rule is to remove indicators when the difference between the top loading and the next-best loading is less than ~0.10, with extra caution even when the gap is slightly larger (e.g., 0.11–0.12).