
#SmartPLS4 Series 14 - Step wise Demo | How to Resolve Discriminant Validity Problems?

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Compute standard deviation for indicators and delete items with near-zero variance or standard deviation below 0.25 to remove low-quality response patterns.

Briefing

Discriminant validity problems in SmartPLS can be fixed through a step-by-step cleanup process that targets the specific items driving HTMT failures—starting with data quality checks, then iteratively removing the most problematic indicators based on cross-loadings. In this walkthrough, multiple constructs initially show red-flag discriminant validity results, including HTMT ratios that exceed the recommended threshold (notably values described as “over 0.90”). The process matters because discriminant validity is what ensures constructs are empirically distinct; when it fails, structural model interpretations become unreliable.

The first practical move is “cleaning of data” using item-level standard deviation. The workflow checks the standard deviation of all indicators within each construct and flags cases where respondents answered identically across items (standard deviation effectively at or near zero). Indicators that fail the preferred cutoff (standard deviation should exceed 0.25) are deleted. After removing the low-variance items, the dataset is re-imported and the PLS algorithm is rerun. This reduces some HTMT issues but doesn’t fully resolve discriminant validity: results improve overall, yet the problematic pairing remains.
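The standard deviation screen described above can be sketched in a few lines of pandas. This is a minimal illustration, not the SmartPLS interface itself; the indicator names and response values below are invented for the example.

```python
# Sketch of the item-level standard deviation screen, assuming survey
# responses sit in a pandas DataFrame with one column per indicator.
# Column names (EC1, EC2, FP1) are illustrative, not from the video.
import pandas as pd

def drop_low_variance_items(df: pd.DataFrame, cutoff: float = 0.25) -> pd.DataFrame:
    """Remove indicators whose standard deviation does not exceed the cutoff."""
    sd = df.std()                  # per-indicator standard deviation
    keep = sd[sd > cutoff].index   # retain items with SD > 0.25
    return df[keep]

data = pd.DataFrame({
    "EC1": [3, 4, 5, 2, 4],
    "EC2": [4, 4, 4, 4, 4],   # identical answers -> SD = 0, flagged for deletion
    "FP1": [5, 3, 2, 4, 1],
})
cleaned = drop_low_variance_items(data)
print(list(cleaned.columns))   # ['EC1', 'FP1']
```

After dropping the flagged columns, the cleaned dataset would be exported and re-imported into SmartPLS before rerunning the algorithm, as the walkthrough does.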

Next, the focus shifts to the discriminant validity pair that still fails: EC with FP. The analysis uses discriminant validity cross-loadings to identify which indicators load too strongly on the “wrong” construct. For FP vs EC, the walkthrough compares loading differences against a 0.10 benchmark. Some FP items behave acceptably, but FP4 is singled out as loading substantially on EC as well as its own factor. After deleting FP4 and rerunning the model, the EC–FP issue persists, but the cross-loading scan reveals a new culprit: FP5. Deleting FP5 resolves the EC–FP HTMT problem.
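The 0.10 loading-difference rule applied to FP4 can be expressed as a small helper. The loading numbers here are illustrative assumptions, not values from the video, and the function name is hypothetical.

```python
# A minimal sketch of the 0.10 loading-difference rule, assuming cross-loadings
# are available as {indicator: {construct: loading}}. Values are illustrative.
def flag_problem_items(cross_loadings, own, rival, min_gap=0.10):
    """Return indicators whose own-construct loading does not exceed
    the rival-construct loading by at least min_gap."""
    flagged = []
    for item, loads in cross_loadings.items():
        gap = loads[own] - loads[rival]
        if gap < min_gap:          # too little separation -> candidate for deletion
            flagged.append(item)
    return flagged

loadings = {
    "FP3": {"FP": 0.81, "EC": 0.55},   # gap 0.26 -> acceptable
    "FP4": {"FP": 0.74, "EC": 0.70},   # gap 0.04 -> problematic
}
print(flag_problem_items(loadings, "FP", "EC"))   # ['FP4']
```

As in the walkthrough, only one flagged indicator would be deleted per pass before rerunning the model, since removing an item shifts all remaining loadings.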

With EC–FP fixed, attention turns to the next failing pair: EC with LC. The same standard deviation cleanup is repeated for the EC–LC indicators, followed by another discriminant validity check using HTMT and cross-loadings. Here, the walkthrough finds that LC2 is the problematic indicator because it loads strongly on EC (the “other” construct) rather than cleanly separating. Deleting LC2 and rerunning improves the situation, but the EC–LC pair still fails, traced to another indicator that cross-loads too closely.

The final resolution comes after removing LC1. After LC1 is deleted and the PLS algorithm is rerun, HTMT indicates that the EC–LC discriminant validity problem is resolved. The session closes with a key practical reminder: beyond discriminant validity metrics (HTMT and cross-loading differences), outer loadings must remain substantial so the measurement model still retains convergent validity while achieving construct separation. The overall takeaway is a “painstaking but systematic” loop: clean low-variance indicators, rerun, identify the failing construct pair, remove the specific cross-loading indicator(s), and repeat until HTMT and cross-loadings align without breaking the measurement model.
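The overall loop (clean, rerun, diagnose, delete one indicator, repeat) can be sketched generically. Here `run_pls`, `htmt_failures`, and `worst_cross_loader` are hypothetical stand-ins for the manual SmartPLS steps, not a real SmartPLS API; the demo stubs mimic the EC–FP stage of the walkthrough.

```python
# The "painstaking but systematic" loop as generic structure. The three
# callables are assumptions standing in for manual SmartPLS operations.
def resolve_discriminant_validity(items, run_pls, htmt_failures,
                                  worst_cross_loader, max_iter=10):
    """Delete one offending indicator per iteration until HTMT passes."""
    for _ in range(max_iter):
        model = run_pls(items)
        failing = htmt_failures(model)          # construct pairs with HTMT > 0.90
        if not failing:
            return items                        # discriminant validity achieved
        bad = worst_cross_loader(model, failing[0])
        items = [i for i in items if i != bad]  # remove exactly one item, rerun
    raise RuntimeError("HTMT still failing after max deletions")

# Toy demo: the "model" passes once FP4 and FP5 are gone, echoing the video.
demo_run = lambda items: items
demo_fail = lambda model: [("EC", "FP")] if {"FP4", "FP5"} & set(model) else []
demo_pick = lambda model, pair: "FP4" if "FP4" in model else "FP5"
result = resolve_discriminant_validity(
    ["FP1", "FP2", "FP3", "FP4", "FP5"], demo_run, demo_fail, demo_pick)
print(result)   # ['FP1', 'FP2', 'FP3']
```

The one-at-a-time deletion matters: each removal changes every remaining loading, so batch deletions can overshoot and damage convergent validity.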

Cornell Notes

The walkthrough fixes SmartPLS discriminant validity failures by repeatedly cleaning indicators and then deleting the specific items that cause cross-construct overlap. It starts with standard deviation checks, removing indicators with standard deviation below 0.25 (including cases where respondents gave identical answers, producing near-zero variance). After rerunning PLS and rechecking HTMT, the remaining failure is traced to particular indicator cross-loadings between construct pairs (first EC–FP, then EC–LC). Indicators are removed one at a time based on loading-difference thresholds (around 0.10) and whether an item loads strongly on both its own construct and the competing construct. The process ends when HTMT no longer flags the construct pairs and outer loadings remain substantial.

Why does the session begin with standard deviation, and what gets deleted?

It treats low-variance indicators as a data-quality driver of measurement problems. Standard deviation (STD) is computed for all items within each construct. Indicators with no meaningful variation, where standard deviation is effectively zero because respondents answered the same value across items, are deleted. The walkthrough also applies a cutoff: standard deviation should exceed 0.25, and items below that threshold (those appearing as less than 0.25 after sorting) are removed before rerunning the PLS algorithm.

How does the workflow identify which construct pair still fails after initial cleanup?

After deleting low-variance indicators and rerunning PLS, it rechecks discriminant validity using HTMT. Even when some HTMT values improve, the remaining red-flag pairing is pinpointed (first EC with FP, later EC with LC). The session repeatedly uses the discriminant validity report to see which pairing still violates the recommended threshold (described as around 0.90 for HTMT).
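For intuition about what the HTMT report is flagging, the ratio can be computed directly from item correlations: the mean between-construct item correlation divided by the geometric mean of the two mean within-construct item correlations. The matrix below and the item-to-construct assignment are illustrative assumptions, not numbers from the video.

```python
# Hedged sketch of the HTMT ratio from an item correlation matrix.
# Items 0-1 stand in for EC indicators, items 2-3 for FP indicators.
import numpy as np

def htmt(corr: np.ndarray, items_a: list, items_b: list) -> float:
    # heterotrait-heteromethod: correlations across the two constructs
    hetero = np.mean([abs(corr[i, j]) for i in items_a for j in items_b])
    # monotrait-heteromethod: correlations among each construct's own items
    mono_a = np.mean([abs(corr[i, j]) for i in items_a for j in items_a if i < j])
    mono_b = np.mean([abs(corr[i, j]) for i in items_b for j in items_b if i < j])
    return hetero / np.sqrt(mono_a * mono_b)

R = np.array([
    [1.00, 0.70, 0.66, 0.64],
    [0.70, 1.00, 0.65, 0.63],
    [0.66, 0.65, 1.00, 0.68],
    [0.64, 0.63, 0.68, 1.00],
])
value = htmt(R, [0, 1], [2, 3])
print(round(value, 3))   # ~0.935, above the 0.90 red-flag threshold
```

When between-construct item correlations approach the within-construct ones, the ratio climbs toward 1, which is exactly the overlap the walkthrough's deletions are designed to break.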

What is the decision rule for deleting indicators using cross-loadings in EC–FP?

It compares cross-loading differences against a 0.10 benchmark. For each indicator, the loading on its own parent factor is compared to its loading on the competing factor; if the difference is not large enough (difference less than 0.10), the indicator is treated as problematic. In the EC–FP case, FP4 is removed first because it loads substantially on EC as well as FP; after rerunning, FP5 is then removed when it remains problematic.

Why does the session delete items from the problematic construct rather than only adjusting the other construct?

The cross-loading evidence points to specific indicators that behave badly—loading too strongly on the “wrong” factor. The walkthrough explicitly avoids deleting only from one side without justification; it checks both directions (FP vs EC and EC vs FP) and then deletes the indicator(s) that fail the loading-difference criteria. For EC–FP, the problematic indicators are FP4 and FP5; for EC–LC, the problematic indicators are LC2 and then LC1.

What happens when EC–LC still fails even after standard deviation cleanup?

HTMT still flags EC–LC, so the workflow moves to cross-loadings. It identifies LC2 as loading too strongly on EC (cross-loading too close to its own-factor loading). Deleting LC2 and rerunning reduces the problem but doesn’t fully resolve it; a further cross-loading check then leads to deleting LC1 to finally clear the EC–LC HTMT issue.

What final measurement-model constraint is emphasized besides discriminant validity metrics?

Outer loadings must remain substantial. The session notes that discriminant validity fixes should not come at the cost of weak measurement quality; after deleting problematic indicators, the model should still show adequate outer loadings so convergent validity is preserved while discriminant validity is achieved.

Review Questions

  1. When standard deviation is used as a first-pass filter, what cutoff is applied and what does near-zero standard deviation indicate about respondent behavior?
  2. In the EC–FP stage, how does the loading-difference threshold (around 0.10) determine whether an indicator like FP4 or FP5 should be deleted?
  3. After EC–LC standard deviation cleanup, what specific cross-loading evidence leads to deleting LC2 and then LC1?

Key Points

  1. Compute standard deviation for indicators and delete items with near-zero variance or standard deviation below 0.25 to remove low-quality response patterns.

  2. Rerun the PLS algorithm after each cleanup step and recheck discriminant validity using HTMT to identify which construct pair still fails.

  3. Use discriminant validity cross-loadings to pinpoint the exact indicator(s) causing overlap between the failing construct pair.

  4. Apply a loading-difference criterion around 0.10: if an indicator loads too similarly on its own factor and the competing factor, treat it as problematic.

  5. Delete problematic indicators one at a time (e.g., FP4 then FP5 for EC–FP; LC2 then LC1 for EC–LC) and rerun after each deletion to confirm improvement.

  6. Maintain substantial outer loadings after indicator removal so the measurement model remains valid while discriminant validity is restored.

Highlights

  • Standard deviation cleanup can dramatically reduce HTMT discriminant validity failures by removing indicators with identical or nearly identical responses across items.
  • The EC–FP problem is resolved through targeted indicator deletions driven by cross-loading differences: FP4 first, then FP5 after rerunning.
  • The EC–LC issue persists until LC2 is removed and, after further cross-loading checks, LC1 is also deleted.
  • Fixing discriminant validity in SmartPLS is an iterative loop: clean → rerun → diagnose with HTMT/cross-loadings → delete the specific offending indicator(s) → verify outer loadings.

Mentioned

  • HTMT
  • PLS
  • STD
  • EC
  • FP
  • LC