
Conceptualize, Analyze, and Interpret Discriminant Validity using #SmartPLS4

Research With Fawad · 4 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Run the PLS algorithm in SmartPLS 4 before checking discriminant validity results.

Briefing

Discriminant validity is the quality check that confirms conceptually overlapping constructs in social science research are empirically distinct. In SmartPLS 4, it is assessed after running the PLS-SEM algorithm, and the goal is straightforward: each construct should share more variance with its own indicators than it shares with any other construct. When discriminant validity holds, researchers can trust that measurement items are capturing different underlying concepts rather than repeating the same one under different labels.

The session walks through three common approaches used in SmartPLS 4: HTMT, the Fornell–Larcker criterion, and cross-loadings. For HTMT (heterotrait–monotrait ratio), the key decision rule is a threshold of 0.85 for a conservative test, with 0.90 sometimes used as a more liberal alternative. In the example model, all HTMT ratios appear below 0.85, so the constructs are treated as distinct. A special note addresses a blank entry for “development vs development op”: those two constructs are actually the same variable, so their correlation is effectively 1, leaving no meaningful ratio to report.
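The HTMT logic described above can be sketched in a few lines: the mean of the between-construct (heterotrait) item correlations is divided by the geometric mean of each construct's mean within-construct (monotrait) correlations. The item correlation matrix below is made up for illustration; the video only reports the final ratios.

```python
import numpy as np

def htmt(corr, items_a, items_b):
    """HTMT: mean heterotrait correlation divided by the geometric mean
    of the two constructs' mean monotrait correlations."""
    # Between-construct block: average absolute item correlation
    hetero = np.abs(corr[np.ix_(items_a, items_b)]).mean()

    def monotrait(items):
        # Average absolute correlation among distinct item pairs
        # within one construct (upper triangle, diagonal excluded)
        block = np.abs(corr[np.ix_(items, items)])
        return block[np.triu_indices(len(items), k=1)].mean()

    return hetero / np.sqrt(monotrait(items_a) * monotrait(items_b))

# Toy 4-item correlation matrix: items 0-1 measure construct A,
# items 2-3 measure construct B (hypothetical values).
corr = np.array([
    [1.0, 0.8, 0.4, 0.4],
    [0.8, 1.0, 0.4, 0.4],
    [0.4, 0.4, 1.0, 0.8],
    [0.4, 0.4, 0.8, 1.0],
])
print(htmt(corr, [0, 1], [2, 3]))  # 0.5, below the 0.85 threshold
```

With strong within-construct correlations (0.8) and weaker cross-construct ones (0.4), the ratio lands well under the 0.85 cutoff, which is the pattern a passing HTMT table reflects.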

Next, the Fornell–Larcker criterion is used as a more traditional benchmark. The method compares the square root of each construct’s AVE (average variance extracted) against that construct’s correlations with all other constructs. In the walkthrough, the square root of AVE for “development” is computed from 0.694, giving approximately 0.833, and this value exceeds correlations between development and the other constructs. The same logic is applied to “op,” “rewards,” and “vision”: each construct’s square root of AVE is larger than its correlations with the other constructs, including the relevant pairwise comparisons (for example, development vs op, and rewards vs op). With those inequalities satisfied across the set, discriminant validity is considered established under Fornell–Larcker.
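The Fornell–Larcker comparison reduces to a simple loop: for each construct, check that sqrt(AVE) exceeds every correlation in that construct's row. Only development's AVE of 0.694 comes from the walkthrough; the other AVEs and the construct correlation matrix below are hypothetical.

```python
import numpy as np

# Hypothetical values except development's AVE (0.694, from the video)
ave = {"development": 0.694, "op": 0.61, "rewards": 0.58, "vision": 0.65}
names = list(ave)
corr = np.array([  # latent construct correlations (illustrative)
    [1.00, 0.55, 0.48, 0.50],
    [0.55, 1.00, 0.52, 0.47],
    [0.48, 0.52, 1.00, 0.45],
    [0.50, 0.47, 0.45, 1.00],
])

def fornell_larcker_ok(ave, corr, names):
    """True if each construct's sqrt(AVE) exceeds its correlations
    with every other construct."""
    for i, name in enumerate(names):
        others = np.delete(np.abs(corr[i]), i)  # drop the diagonal 1.0
        if np.sqrt(ave[name]) <= others.max():
            return False
    return True

print(round(np.sqrt(ave["development"]), 3))  # 0.833, as in the walkthrough
print(fornell_larcker_ok(ave, corr, names))   # True
```

In SmartPLS 4's output table the diagonal holds these sqrt(AVE) values and the off-diagonals hold the construct correlations, so the check is a diagonal-versus-row comparison.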

Finally, cross-loadings provide an item-level view of discriminant validity. Each construct is measured with multiple items (development: 7 items; op: 5; rewards: 4; vision: 3). The rule is that an item should load highest on its own theoretical construct compared with its loadings on the other constructs. The example shows that items for development load strongly on development (e.g., dev1 at 0.841) and show lower loadings on op, rewards, and vision. The same pattern holds for op items (e.g., op1 at 0.783 on op versus 0.530 on development), rewards items, and vision items. Even when some items show notable secondary loadings, the analysis prioritizes the earlier HTMT and Fornell–Larcker results, using cross-loadings as supporting evidence.
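The item-level rule is easy to automate: flag any item whose largest loading is not on its assigned construct. Only dev1's 0.841 and op1's 0.783/0.530 loadings are quoted in the walkthrough; the remaining values are illustrative.

```python
# Cross-loading rows: item -> loading on each construct.
# Only dev1 (0.841) and op1 (0.783 / 0.530) are from the video.
loadings = {
    "dev1": {"development": 0.841, "op": 0.520, "rewards": 0.410, "vision": 0.380},
    "op1":  {"development": 0.530, "op": 0.783, "rewards": 0.450, "vision": 0.400},
}
assignment = {"dev1": "development", "op1": "op"}  # intended construct per item

def cross_loading_failures(loadings, assignment):
    """Return items whose highest loading is NOT on their own construct."""
    return [item for item, row in loadings.items()
            if max(row, key=row.get) != assignment[item]]

print(cross_loading_failures(loadings, assignment))  # [] -> pattern holds
```

An empty failure list mirrors a clean cross-loadings table; any flagged items would be the ones to examine alongside the HTMT and Fornell–Larcker results.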

Taken together, the workflow demonstrates how SmartPLS 4 can confirm construct distinctiveness using HTMT thresholds, AVE-based comparisons, and item-level loading patterns—so measurement models don’t accidentally conflate constructs that should remain separate.

Cornell Notes

Discriminant validity checks whether constructs that may overlap in social science research are actually distinct. In SmartPLS 4, it’s assessed after running the PLS algorithm using three methods: HTMT, Fornell–Larcker, and cross-loadings. HTMT uses heterotrait–monotrait ratios with a common conservative threshold of 0.85 (sometimes 0.90); in the example, all ratios fall below 0.85, indicating distinct constructs. Fornell–Larcker compares the square root of each construct’s AVE (e.g., sqrt(0.694) ≈ 0.833 for development) against that construct’s correlations with all other constructs; each construct’s value is larger, so discriminant validity is supported. Cross-loadings further confirm that items load highest on their own construct versus other constructs.

What does HTMT measure for discriminant validity, and what thresholds are used?

HTMT (heterotrait–monotrait ratio) measures how strongly indicators from different constructs relate compared with indicators within the same construct. A conservative discriminant validity threshold is 0.85; a more liberal threshold is 0.90. In the example, every HTMT ratio is below 0.85, so constructs are treated as distinct.

Why might an HTMT cell be empty for “development vs development op”?

That comparison is blank because “development” and “development op” are actually the same variable. With identical constructs, the correlation is effectively 1, so there is no meaningful HTMT ratio to report between them.

How does the Fornell–Larcker criterion establish discriminant validity?

Fornell–Larcker requires that the square root of each construct’s AVE be greater than the correlations between that construct and all other constructs. The walkthrough computes sqrt(0.694) ≈ 0.833 for development and checks that this exceeds development’s correlations with the other constructs. The same comparison is performed for op, rewards, and vision, and discriminant validity is accepted when the inequality holds across all constructs.

What does cross-loading analysis look for at the item level?

Cross-loadings check whether each item loads highest on its own theoretical construct. For example, dev1 loads strongly on development (0.841) and drops when compared against op, rewards, or vision. Similarly, op1 loads highest on op (0.783) and decreases when considered against development (0.530). The pattern repeats for rewards and vision items, supporting discriminant validity.

If an item loads on another construct too, does that automatically fail discriminant validity?

Not necessarily. The walkthrough notes that even if some items show secondary loadings (e.g., an op item loading well on two constructs), the assessment still relies on the earlier HTMT and Fornell–Larcker results. Cross-loadings act as supporting evidence rather than the sole decision rule.

Review Questions

  1. In HTMT, what does a ratio below 0.85 imply about the relationship between two constructs?
  2. Under Fornell–Larcker, which quantity must be larger: the square root of AVE or the construct’s correlations with other constructs?
  3. How do cross-loadings demonstrate discriminant validity differently from HTMT and Fornell–Larcker?

Key Points

  1. Run the PLS algorithm in SmartPLS 4 before checking discriminant validity results.
  2. Use HTMT to test construct distinctiveness, with 0.85 as a conservative threshold (0.90 as a more liberal one).
  3. Apply Fornell–Larcker by comparing each construct’s sqrt(AVE) against its correlations with every other construct.
  4. Compute sqrt(AVE) from the construct’s AVE (e.g., sqrt(0.694) ≈ 0.833 for development) and verify it exceeds all cross-construct correlations.
  5. Use cross-loadings to confirm that each item loads highest on its intended construct compared with other constructs.
  6. Treat secondary cross-loadings as supporting signals, especially when HTMT and Fornell–Larcker already support discriminant validity.

Highlights

HTMT discriminant validity is accepted when all heterotrait–monotrait ratios stay below 0.85 in the example model.
Fornell–Larcker passes when each construct’s sqrt(AVE) (e.g., development’s sqrt(0.694) ≈ 0.833) exceeds its correlations with all other constructs.
Cross-loadings reinforce discriminant validity by showing items load strongest on their own construct (e.g., dev1 at 0.841 on development).
