# SmartPLS4 Series 14 - Stepwise Demo | How to Resolve Discriminant Validity Problems?
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
## Briefing
Discriminant validity problems in SmartPLS can be fixed through a step-by-step cleanup process that targets the specific items driving HTMT failures—starting with data quality checks, then iteratively removing the most problematic indicators based on cross-loadings. In this walkthrough, multiple constructs initially show red-flag discriminant validity results, including HTMT ratios that exceed the recommended threshold (notably values described as “over 0.90”). The process matters because discriminant validity is what ensures constructs are empirically distinct; when it fails, structural model interpretations become unreliable.
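SmartPLS reports HTMT directly, but the ratio itself is straightforward to compute from an indicator correlation matrix. The sketch below is illustrative, not the walkthrough's own code; the toy correlation values are hypothetical, and only the 0.90 red-flag threshold comes from the video.

```python
import numpy as np

def htmt(corr, items_a, items_b):
    """Heterotrait-monotrait ratio for two constructs.

    corr: square indicator correlation matrix (numpy array).
    items_a / items_b: row/column indices of each construct's indicators.
    """
    # Mean absolute heterotrait correlation: items of A against items of B.
    hetero = np.mean(np.abs(corr[np.ix_(items_a, items_b)]))

    def mono(items):
        # Mean absolute monotrait correlation: off-diagonal entries
        # within one construct's block (the diagonal of 1s is excluded).
        block = np.abs(corr[np.ix_(items, items)])
        n = len(items)
        return (block.sum() - n) / (n * (n - 1))

    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# Hypothetical correlation matrix: indicators 0-1 belong to construct A,
# indicators 2-3 to construct B.
corr = np.array([
    [1.0, 0.8, 0.5, 0.5],
    [0.8, 1.0, 0.5, 0.5],
    [0.5, 0.5, 1.0, 0.8],
    [0.5, 0.5, 0.8, 1.0],
])
ratio = htmt(corr, [0, 1], [2, 3])  # 0.625, safely below the 0.90 red flag
```

A pair whose ratio exceeds 0.90 is the kind of failure the walkthrough sets out to repair.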
The first practical move is “cleaning of data” using item-level standard deviation. The workflow checks the standard deviation of all indicators within each construct and flags cases where respondents answered identically across items (standard deviation effectively at or near zero). Indicators whose standard deviation falls below the 0.25 cutoff are deleted. After removing the low-variance items, the dataset is re-imported and the PLS algorithm is rerun. This reduces some HTMT issues but doesn’t fully resolve discriminant validity—new results show improvement overall, yet the problematic pairing remains.
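This cleanup step is done by hand in SmartPLS, but it can be sketched in a few lines of pandas. The column names and response values below are hypothetical; only the 0.25 cutoff comes from the walkthrough.

```python
import pandas as pd

# Hypothetical survey responses; column names mimic the video's indicators.
df = pd.DataFrame({
    "FP1": [4, 5, 3, 4, 5],
    "FP2": [3, 3, 3, 3, 3],   # identical answers -> standard deviation of 0
    "EC1": [2, 4, 5, 3, 4],
})

SD_CUTOFF = 0.25
sds = df.std()                      # sample standard deviation per indicator
keep = sds[sds >= SD_CUTOFF].index  # retain only sufficiently varied items
cleaned = df[keep]                  # FP2 is dropped; re-import into SmartPLS
```

A zero standard deviation means every respondent gave the same answer to that item, which is exactly the straight-lining pattern the cleanup targets.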
Next, the focus shifts to the discriminant validity pair that still fails: EC with FP. The analysis uses discriminant validity cross-loadings to identify which indicators load too strongly on the “wrong” construct. For FP vs EC, the walkthrough compares loading differences against a 0.10 benchmark. Some FP items behave acceptably, but FP4 is singled out as loading substantially on EC as well as its own factor. After deleting FP4 and rerunning the model, the EC–FP issue persists, but the cross-loading scan reveals a new culprit: FP5. Deleting FP5 resolves the EC–FP HTMT problem.
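The cross-loading decision rule can be expressed as a simple filter over SmartPLS's cross-loadings table. The loading values below are invented for illustration; only the indicator names (FP4, FP5) and the 0.10 benchmark come from the walkthrough.

```python
import pandas as pd

# Hypothetical cross-loadings (rows: indicators, columns: constructs),
# shaped like SmartPLS's discriminant-validity cross-loadings output.
loadings = pd.DataFrame(
    {"FP": [0.81, 0.78, 0.72], "EC": [0.55, 0.70, 0.69]},
    index=["FP1", "FP4", "FP5"],
)

DIFF_BENCHMARK = 0.10
own = loadings["FP"]    # loading on the indicator's own construct
other = loadings["EC"]  # loading on the competing construct

# Flag items whose own-construct loading beats the cross-loading
# by less than the 0.10 benchmark -> deletion candidates.
flagged = loadings[(own - other) < DIFF_BENCHMARK]
```

With these toy numbers, FP4 and FP5 fall under the benchmark while FP1 separates cleanly, mirroring the deletions made in the session.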
With EC–FP fixed, attention turns to the next failing pair: EC with LC. The same standard deviation cleanup is repeated for the EC–LC indicators, followed by another discriminant validity check using HTMT and cross-loadings. Here, the walkthrough finds that LC2 is the problematic indicator because it loads strongly on EC (the “other” construct) rather than cleanly separating. Deleting LC2 and rerunning improves the EC–LC situation, but a remaining issue appears in the EC–LC relationship, traced to another indicator that still cross-loads too closely.
The final resolution comes after removing LC1. After LC1 is deleted and the PLS algorithm is rerun, HTMT indicates that the EC–LC discriminant validity problem is resolved. The session closes with a key practical reminder: beyond discriminant validity metrics (HTMT and cross-loading differences), outer loadings must remain substantial so the measurement model still retains convergent validity while achieving construct separation. The overall takeaway is a “painstaking but systematic” loop: clean low-variance indicators, rerun, identify the failing construct pair, remove the specific cross-loading indicator(s), and repeat until HTMT and cross-loadings align without breaking the measurement model.
## Cornell Notes
The walkthrough fixes SmartPLS discriminant validity failures by repeatedly cleaning indicators and then deleting the specific items that cause cross-construct overlap. It starts with standard deviation checks, removing indicators with standard deviation below 0.25 (including cases where respondents gave identical answers, producing near-zero variance). After rerunning PLS and rechecking HTMT, the remaining failure is traced to particular indicator cross-loadings between construct pairs (first EC–FP, then EC–LC). Indicators are removed one at a time based on loading-difference thresholds (around 0.10) and whether an item loads strongly on both its own construct and the competing construct. The process ends when HTMT no longer flags the construct pairs and outer loadings remain substantial.
- Why does the session begin with standard deviation, and what gets deleted?
- How does the workflow identify which construct pair still fails after initial cleanup?
- What is the decision rule for deleting indicators using cross-loadings in EC–FP?
- Why does the session delete items from the problematic construct rather than only adjusting the other construct?
- What happens when EC–LC still fails even after standard deviation cleanup?
- What final measurement-model constraint is emphasized besides discriminant validity metrics?
## Review Questions
- When standard deviation is used as a first-pass filter, what cutoff is applied and what does near-zero standard deviation indicate about respondent behavior?
- In the EC–FP stage, how does the loading-difference threshold (around 0.10) determine whether an indicator like FP4 or FP5 should be deleted?
- After EC–LC standard deviation cleanup, what specific cross-loading evidence leads to deleting LC2 and then LC1?
## Key Points
1. Compute standard deviation for indicators and delete items with near-zero variance or standard deviation below 0.25 to remove low-quality response patterns.
2. Rerun the PLS algorithm after each cleanup step and recheck discriminant validity using HTMT to identify which construct pair still fails.
3. Use discriminant validity cross-loadings to pinpoint the exact indicator(s) causing overlap between the failing construct pair.
4. Apply a loading-difference criterion around 0.10: if an indicator loads too similarly on its own factor and the competing factor, treat it as problematic.
5. Delete problematic indicators one at a time (e.g., FP4 then FP5 for EC–FP; LC2 then LC1 for EC–LC) and rerun after each deletion to confirm improvement.
6. Maintain substantial outer loadings after indicator removal so the measurement model remains valid while discriminant validity is restored.