# SmartPLS4 Series 10 - How to Solve Discriminant Validity Problems?
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
## Briefing
Discriminant validity problems in SmartPLS are often fixable through a structured cleanup-and-revision workflow: tighten the measurement model before blaming the theory. The first move is preventive—design the survey so each construct has enough indicators and the items clearly belong to only one construct. A practical rule given here is to start with at least four to six items per construct, because items may need to be deleted later when they fail to load or show problematic cross-loadings. Item wording also matters: statements should be easy to understand and should avoid overlap across constructs. When two constructs’ items are too similar, respondents tend to answer in the same way, which shows up as high correlations and cross-loading—classic warning signs of discriminant validity failure.
Once data are collected, the workflow shifts to diagnostics and targeted model changes. Data cleaning comes first, to screen out problematic respondents or response patterns. A specific red flag is low or near-zero variability: if the standard deviation of a respondent's answers is below 0.25, the likelihood of misconduct or essentially identical responding across items rises, which inflates correlations among items and constructs and distorts discriminant validity conclusions. After cleaning, the next step is to inspect cross-loadings and remove the specific items that violate the measurement-separation rule.
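The low-variability screen can be automated before model estimation. A minimal sketch in Python follows; the response values are invented for illustration, and only the 0.25 threshold comes from the transcript:

```python
import numpy as np

# Each row is one respondent's answers across all Likert items (illustrative data).
responses = np.array([
    [4, 4, 4, 4, 4, 4],   # near-identical answers -> suspicious
    [2, 5, 3, 4, 1, 5],   # healthy variability
    [3, 3, 3, 4, 3, 3],   # low but acceptable variability
])

# Per-respondent standard deviation across items.
sds = responses.std(axis=1)

# Flag respondents whose SD falls below the 0.25 threshold from the transcript.
flagged = sds < 0.25

print(flagged)  # only the first (straight-lining) respondent is flagged
```

Flagged rows would then be reviewed and, if judged careless, dropped before re-running the measurement model.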
The transcript provides a concrete decision logic using the Fornell–Larcker criterion and cross-loading differences. Fornell–Larcker comparisons between two constructs should show that the diagonal (within-construct) relationship dominates the off-diagonal (between-construct) relationship. If the between-construct value is very high—described as above 0.90—discriminant validity is in trouble. Even then, cross-loadings determine which indicators to delete. The rule of thumb: if an item loads strongly on another construct and the difference between its loading on its own construct and its loading on the competing construct is less than 0.10, that item should be removed. An example given uses a cross-loading scenario where an item’s loading on its own factor is only slightly higher than its loading on another factor (e.g., a difference around 0.072), triggering deletion.
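The 0.10 cross-loading rule lends itself to a simple tabular check. The sketch below uses pandas; the construct names, item names, and loading values are invented (the A3 gap is set to 0.072 to mirror the transcript's example), and only the 0.10 cutoff is from the source:

```python
import pandas as pd

# Cross-loading matrix: rows = items, columns = constructs (illustrative values).
loadings = pd.DataFrame(
    {"TrustA": [0.82, 0.78, 0.750], "TrustB": [0.45, 0.40, 0.678]},
    index=["A1", "A2", "A3"],
)
# Which construct each item is supposed to measure.
own_construct = {"A1": "TrustA", "A2": "TrustA", "A3": "TrustA"}

to_delete = []
for item, own in own_construct.items():
    own_loading = loadings.loc[item, own]
    best_other = loadings.loc[item].drop(own).max()
    # Delete when the own-construct loading exceeds the best competing
    # loading by less than 0.10.
    if own_loading - best_other < 0.10:
        to_delete.append(item)

print(to_delete)  # ['A3'] -- its gap is 0.750 - 0.678 = 0.072
```

After deleting the flagged items, the model is re-estimated and the Fornell–Larcker and cross-loading tables are checked again.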
If problems persist after item removal, the guidance escalates. Check for low loadings: values under 0.5 (or 0.4 in the transcript's phrasing) suggest the indicator is too weak to represent its construct reliably. When discriminant validity still fails, options include collecting more data to stabilize estimates. If very high correlations remain, such as a very high HTMT ratio or very high shared variance, there may be no choice but to combine constructs, particularly when the constructs are multi-dimensional but conceptually represent the same higher-order idea. As a further fallback, dropping collinear independent variables that fail discriminant validity can help the model converge on clearer measurement separation. The session ends by previewing a move to assessing a complex model using all LOCs (lower-order constructs) included together for evaluation.
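SmartPLS reports HTMT directly, but the ratio can also be approximated outside the software from the indicator correlation matrix, which makes the "very high correlation" diagnosis concrete. A hedged sketch under the standard HTMT definition; the function, column names, and simulated data are my own, not from the transcript:

```python
import numpy as np
import pandas as pd

def htmt(data, items_i, items_j):
    """Approximate HTMT ratio for two constructs from raw indicator scores."""
    corr = data.corr().abs()
    # Mean correlation between indicators of *different* constructs.
    hetero = corr.loc[items_i, items_j].to_numpy().mean()
    # Mean within-construct correlation (off-diagonal entries only).
    def mono(items):
        sub = corr.loc[items, items].to_numpy()
        n = len(items)
        return (sub.sum() - n) / (n * (n - 1))
    return hetero / np.sqrt(mono(items_i) * mono(items_j))

# Simulated example: both constructs load on the same latent signal,
# so HTMT should land near 1 -- well above the trouble zone.
rng = np.random.default_rng(0)
latent = rng.normal(size=300)
df = pd.DataFrame({
    "sat1": latent + rng.normal(scale=0.4, size=300),
    "sat2": latent + rng.normal(scale=0.4, size=300),
    "loy1": latent + rng.normal(scale=0.4, size=300),
    "loy2": latent + rng.normal(scale=0.4, size=300),
})
ratio = htmt(df, ["sat1", "sat2"], ["loy1", "loy2"])
print(round(ratio, 3))  # close to 1 by construction
```

A ratio this high signals that the two constructs are empirically indistinguishable, which is exactly the situation where the transcript recommends merging them into one higher-order construct.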
## Cornell Notes
The session lays out a practical method for fixing discriminant validity problems in SmartPLS: prevent them in the questionnaire, then diagnose and repair them in the measurement model. Before collecting data, each construct should have at least four to six items, items should be easy to understand, and statements should not overlap across constructs to avoid high correlations and cross-loadings. After collecting data, clean the dataset and check response variability—standard deviation below 0.25 is treated as a serious warning sign. Then use cross-loadings and the Fornell–Larcker logic: if an item’s loading on its own construct is not at least 0.10 higher than its loading on another construct, remove that item. If issues persist, consider low loadings, more data, combining constructs when correlations/shared variance are extremely high, or dropping collinear variables.
- What should be done before collecting data to reduce discriminant validity failures?
- How does response cleaning relate to discriminant validity diagnostics?
- How do Fornell–Larcker values and cross-loadings jointly guide item deletion?
- What thresholds are used for cross-loading differences and low loadings?
- What if item removal and checks don’t fix discriminant validity?
- What does “all the LOCs” imply for the next modeling step?
## Review Questions
- If two constructs have a Fornell–Larcker between-construct value above 0.90, what cross-loading rule determines which indicators to delete?
- Why does the transcript treat response standard deviation below 0.25 as a major concern before trusting discriminant validity results?
- When would combining constructs be considered an acceptable remedy for discriminant validity problems, according to the guidance given?
## Key Points
1. Start with at least four to six items per construct to avoid later loss of measurement coverage when deleting problematic indicators.
2. Prevent discriminant validity issues by writing item statements that do not overlap across constructs and are easy for respondents to interpret consistently.
3. Clean the dataset and check response standard deviation; values below 0.25 suggest suspiciously uniform responding that can distort validity checks.
4. Use Fornell–Larcker comparisons to flag construct pairs with very high between-construct values (notably above 0.90) as likely discriminant validity failures.
5. Inspect cross-loadings and delete any item whose loading on its own construct is less than 0.10 higher than its loading on another construct.
6. If discriminant validity still fails, check for low loadings (below 0.5/0.4), then consider collecting more data or combining constructs when HTMT/shared variance are extremely high.
7. As a last resort, drop collinear independent variables that continue to show insufficient discriminant validity in the model.