
#SmartPLS4 Series 10 - How to Solve Discriminant Validity Problems?

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Start with at least four to six items per construct to avoid later loss of measurement coverage when deleting problematic indicators.

Briefing

Discriminant validity problems in SmartPLS are often fixable through a structured cleanup-and-revision workflow: tighten the measurement model before blaming the theory. The first move is preventive—design the survey so each construct has enough indicators and the items clearly belong to only one construct. A practical rule given here is to start with at least four to six items per construct, because items may need to be deleted later when they fail to load or show problematic cross-loadings. Item wording also matters: statements should be easy to understand and should avoid overlap across constructs. When two constructs’ items are too similar, respondents tend to answer in the same way, which shows up as high correlations and cross-loading—classic warning signs of discriminant validity failure.

Once data are collected, the workflow shifts to diagnostics and targeted model changes. Data cleaning comes first to rule out problematic respondents or response patterns. A specific red flag is low or near-zero variability: if the standard deviation of a respondent's answers is below 0.25, the likelihood of misconduct or essentially identical responding across items rises, which can inflate inter-construct correlations and distort discriminant validity conclusions. After cleaning, the next step is to inspect cross-loadings and remove the specific items that violate the measurement separation rule.
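As a quick illustration of the variability check, here is a minimal Python sketch (the file name and column layout are assumptions, not from the video) that flags respondents whose per-row standard deviation falls below the 0.25 cutoff:

```python
import pandas as pd

# Hypothetical survey data: one row per respondent, one numeric column
# per Likert item (file name is an assumption for illustration).
responses = pd.read_csv("survey_responses.csv")

# Per-respondent standard deviation across all items. Values below 0.25
# signal near-identical answering (straight-lining), the transcript's red flag.
row_sd = responses.std(axis=1)
flagged = responses[row_sd < 0.25]

print(f"Flagged {len(flagged)} of {len(responses)} respondents for review")
```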

The transcript provides a concrete decision logic using the Fornell–Larcker criterion and cross-loading differences. Fornell–Larcker comparisons between two constructs should show that the diagonal (within-construct) relationship dominates the off-diagonal (between-construct) relationship. If the between-construct value is very high—described as above 0.90—discriminant validity is in trouble. Even then, cross-loadings determine which indicators to delete. The rule of thumb: if an item loads strongly on another construct and the difference between its loading on its own construct and its loading on the competing construct is less than 0.10, that item should be removed. An example given uses a cross-loading scenario where an item’s loading on its own factor is only slightly higher than its loading on another factor (e.g., a difference around 0.072), triggering deletion.
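To make the deletion rule concrete, here is a small sketch with hypothetical construct and item names; the loadings table mimics the shape of a SmartPLS cross-loadings export, and the first item reproduces the transcript's 0.072-gap example:

```python
import pandas as pd

# Hypothetical cross-loading matrix: rows = items, columns = constructs.
loadings = pd.DataFrame(
    {"TRUST": [0.812, 0.798, 0.701], "SAT": [0.740, 0.455, 0.380]},
    index=["trust1", "trust2", "trust3"],
)
own_construct = {"trust1": "TRUST", "trust2": "TRUST", "trust3": "TRUST"}

# Deletion rule from the video: drop an item when its own-construct loading
# exceeds its highest competing loading by less than 0.10.
for item, construct in own_construct.items():
    own = loadings.loc[item, construct]
    competing = loadings.loc[item].drop(construct).max()
    if own - competing < 0.10:
        print(f"delete {item}: own {own:.3f} vs competing {competing:.3f} "
              f"(gap {own - competing:.3f})")
```

Running this flags only `trust1` (gap 0.072 < 0.10); the other items clear the threshold comfortably.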

If problems persist after item removal, the guidance escalates. Check for low loadings: values under 0.5 (or 0.4 in the transcript's looser phrasing) suggest the indicator is too weak to represent its construct reliably. When discriminant validity still fails, options include collecting more data to stabilize estimates. If very high correlations remain, such as an extreme HTMT ratio or substantial shared variance, there may be no choice but to combine constructs, particularly when the constructs are multi-dimensional but conceptually represent the same higher-order idea. As a further fallback, dropping collinear independent variables that fail discriminant validity can help the model converge on clearer measurement separation. The session ends by previewing a move to assessing a complex model using all LOCs, with the lower-order constructs included together for evaluation.

Cornell Notes

The session lays out a practical method for fixing discriminant validity problems in SmartPLS: prevent them in the questionnaire, then diagnose and repair them in the measurement model. Before collecting data, each construct should have at least four to six items, items should be easy to understand, and statements should not overlap across constructs to avoid high correlations and cross-loadings. After collecting data, clean the dataset and check response variability—standard deviation below 0.25 is treated as a serious warning sign. Then use cross-loadings and the Fornell–Larcker logic: if an item’s loading on its own construct is not at least 0.10 higher than its loading on another construct, remove that item. If issues persist, consider low loadings, more data, combining constructs when correlations/shared variance are extremely high, or dropping collinear variables.

What should be done before collecting data to reduce discriminant validity failures?

Use adequate indicators per construct—at least four to six items—because items may later be deleted if they fail to load or show cross-loading. Ensure item statements are easy to understand and avoid overlap in wording between different constructs. Overlapping statements tend to produce similar response patterns, which increases correlations and cross-loadings, signaling discriminant validity problems.

How does response cleaning relate to discriminant validity diagnostics?

Cleaning helps detect problematic respondents or response misconduct. A specific check mentioned is standard deviation: if the standard deviation of responses is less than 0.25, responses are too similar across items, raising the probability of misconduct and making discriminant validity results unreliable. The transcript notes that a cleaning approach is available via a linked reference (not reproduced here).

How do Fornell–Larcker values and cross-loadings jointly guide item deletion?

Fornell–Larcker comparisons between two constructs should show the within-construct relationship dominating the between-construct relationship. If the between-construct value is very high (described as over 0.90), discriminant validity is likely failing. Then cross-loadings identify the offending indicators: remove an item when its loading on its own construct is not at least 0.10 higher than its loading on the competing construct (difference < 0.10).
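A minimal sketch of the Fornell–Larcker comparison, assuming hypothetical AVE values and a latent correlation matrix of the kind SmartPLS reports: the square root of each construct's AVE should exceed its correlations with other constructs, and correlations above 0.90 are treated as serious:

```python
from itertools import combinations

import numpy as np
import pandas as pd

# Hypothetical SmartPLS outputs: AVE per construct and the latent-variable
# correlation matrix (construct names and numbers are illustrative only).
ave = pd.Series({"TRUST": 0.62, "SAT": 0.58})
corr = pd.DataFrame([[1.00, 0.91], [0.91, 1.00]],
                    index=["TRUST", "SAT"], columns=["TRUST", "SAT"])

sqrt_ave = np.sqrt(ave)  # the Fornell–Larcker diagonal values
for a, b in combinations(corr.index, 2):
    r = corr.loc[a, b]
    passes = r < min(sqrt_ave[a], sqrt_ave[b])
    note = "  <- above 0.90, serious trouble" if r > 0.90 else ""
    print(f"{a}-{b}: r={r:.2f}, sqrt(AVE)={sqrt_ave[a]:.2f}/{sqrt_ave[b]:.2f}, "
          f"pass={passes}{note}")
```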

What thresholds are used for cross-loading differences and low loadings?

For cross-loadings, the key rule is the difference between an item’s loading on its own construct and its loading on another construct: if that difference is less than 0.10, the item should be removed. For low loadings, the transcript flags values below 0.5 (and also mentions 0.4) as a sign of measurement weakness that can contribute to discriminant validity issues.
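Both thresholds reduce to simple filters once loadings are exported; a tiny sketch with hypothetical outer loadings:

```python
# Hypothetical outer loadings (item -> loading on its own construct).
outer_loadings = {"trust1": 0.81, "trust2": 0.47, "trust3": 0.38}

CUTOFF = 0.5  # strict threshold; 0.4 is the transcript's looser alternative
weak = [item for item, loading in outer_loadings.items() if loading < CUTOFF]
print("Indicators to review or delete:", weak)  # -> ['trust2', 'trust3']
```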

What if item removal and checks don’t fix discriminant validity?

Escalate to model-level remedies. Collect more data to stabilize estimates. If discriminant validity still fails due to extremely high correlation—such as a very high HTMT ratio or very high shared variance—the guidance is to combine constructs when they represent the same underlying concept (especially for multi-dimensional constructs). Another fallback is dropping one or more collinear independent variables that demonstrate insufficient discriminant validity.
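For reference, the HTMT ratio that SmartPLS reports can be reproduced from raw item scores. Below is a minimal sketch (hypothetical item names, assuming a pandas DataFrame of responses) implementing the standard definition: the mean between-construct item correlation divided by the geometric mean of the within-construct item correlations:

```python
import numpy as np
import pandas as pd

def htmt(data: pd.DataFrame, items_a: list[str], items_b: list[str]) -> float:
    """Heterotrait-monotrait ratio for two constructs from raw item scores."""
    corr = data[items_a + items_b].corr().abs()
    # Heterotrait: mean correlation between items of the two constructs.
    hetero = corr.loc[items_a, items_b].to_numpy().mean()
    # Monotrait: mean correlation among items of one construct
    # (upper triangle only, excluding the 1.0 diagonal).
    def monotrait(items):
        block = corr.loc[items, items].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()
    return hetero / np.sqrt(monotrait(items_a) * monotrait(items_b))

# Hypothetical usage; ratios near or above 0.90 point toward combining
# the constructs, as the guidance suggests:
# ratio = htmt(responses, ["trust1", "trust2", "trust3"], ["sat1", "sat2", "sat3"])
```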

What does “all the LOCs” imply for the next modeling step?

The transcript previews a shift to assessing a complex model by including all lower-order constructs (LOCs) together in the model. Instead of evaluating constructs in a simpler, more separated way, the lower-order constructs are assessed collectively as part of the complex structure.

Review Questions

  1. If two constructs have a Fornell–Larcker between-construct value above 0.90, what cross-loading rule determines which indicators to delete?
  2. Why does the transcript treat response standard deviation below 0.25 as a major concern before trusting discriminant validity results?
  3. When would combining constructs be considered an acceptable remedy for discriminant validity problems, according to the guidance given?

Key Points

  1. Start with at least four to six items per construct to avoid later loss of measurement coverage when deleting problematic indicators.

  2. Prevent discriminant validity issues by writing item statements that do not overlap across constructs and are easy for respondents to interpret consistently.

  3. Clean the dataset and check response standard deviation; values below 0.25 suggest suspiciously uniform responding that can distort validity checks.

  4. Use Fornell–Larcker comparisons to flag construct pairs with very high between-construct values (notably above 0.90) as likely discriminant validity failures.

  5. Inspect cross-loadings and delete any item whose loading on its own construct is less than 0.10 higher than its loading on another construct.

  6. If discriminant validity still fails, check for low loadings (below 0.5, or 0.4 in the transcript's looser phrasing), then consider collecting more data or combining constructs when HTMT/shared variance are extremely high.

  7. As a last resort, drop collinear independent variables that continue to show insufficient discriminant validity in the model.

Highlights

  • A practical item-deletion rule is central: remove indicators when the loading gap between the intended construct and a competing construct is under 0.10.
  • Standard deviation below 0.25 is treated as a strong warning sign for response misconduct or overly similar responding.
  • When HTMT ratios or shared variance are extremely high, combining constructs may be the only workable fix, especially for multi-dimensional constructs representing the same idea.
  • The workflow moves from prevention (item design) to diagnosis (cleaning, Fornell–Larcker, cross-loadings) to escalation (more data, combine constructs, drop collinear variables).

Mentioned

  • CM
  • HTMT
  • LOCs