
How to Solve Discriminant Validity Issues in SmartPLS using Standard Deviation Function

Research With Fawad · 4 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Run the PLS algorithm and check HTMT; unusually high HTMT values can signal discriminant validity problems that won’t always be fixed by item deletion alone.

Briefing

Discriminant validity problems in SmartPLS—especially when HTMT values stay unusually high—can persist even after checking cross-loadings or deleting items. A practical fix is to audit the raw responses for “straight-lining” behavior: cases where respondents show near-zero variability across items, suggesting they didn’t answer thoughtfully. When those low-variation records remain in the dataset, similarity between constructs can inflate HTMT and keep discriminant validity from improving.

The workflow starts with running the PLS algorithm and then inspecting HTMT. In the example, HTMT values were high (around 0.921, 0.926, and 0.934), indicating discriminant validity issues. Cross-loadings were also checked, but simply removing problematic items didn't fully resolve the problem: even where loading differences exceeded the common 0.10 threshold, the discriminant validity concern remained.
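
The summary doesn't show SmartPLS's internal computation, but as an illustration, a common correlation-based HTMT formulation (average heterotrait correlation over the geometric mean of the average monotrait correlations) can be sketched in Python; the item names in the usage example are hypothetical:

```python
import numpy as np
import pandas as pd

def htmt(df: pd.DataFrame, items_a: list, items_b: list) -> float:
    """Heterotrait-monotrait ratio for two constructs measured by item columns.

    items_a / items_b: indicator columns of the two constructs.
    """
    corr = df[items_a + items_b].corr().abs()
    # Heterotrait-heteromethod: correlations between the two item blocks.
    hetero = corr.loc[items_a, items_b].to_numpy().mean()

    def mono(items):
        # Average off-diagonal correlation within one construct's block.
        block = corr.loc[items, items].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()

    return hetero / np.sqrt(mono(items_a) * mono(items_b))
```

With this sketch, `htmt(df, ["HC1", "HC2"], ["SC1", "SC2"])` would return a single ratio; values approaching 0.90 and above are the warning sign discussed here.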

To investigate whether the data itself was undermining the measurement model, the analysis shifts to response quality using a standard deviation rule. The method flags respondents whose per-item standard deviation falls below the required limit (the transcript's rule is that standard deviation should be greater than 0.25). The focus lands on the specific constructs where the issue concentrates (LG, HC, RC, and SC) along with their associated items. The dataset is augmented with a standard deviation column (computed across the relevant items), then sorted from smallest to largest.
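
The video performs this step in a spreadsheet; a minimal pandas equivalent might look like the following (the item column names are hypothetical stand-ins for the LG/HC/RC/SC indicators):

```python
import pandas as pd

# Hypothetical indicator columns for the constructs named in the video.
ITEMS = ["LG1", "LG2", "LG3", "HC1", "HC2", "RC1", "RC2", "SC1", "SC2"]

def add_sd_column(df: pd.DataFrame, items=ITEMS) -> pd.DataFrame:
    out = df.copy()
    # Row-wise (per-respondent) standard deviation across the items;
    # ddof=1 matches the spreadsheet STDEV function.
    out["sd"] = out[items].std(axis=1, ddof=1)
    # Sort smallest-first so straight-liners surface at the top.
    return out.sort_values("sd")
```

A respondent who marked the same answer for every item gets `sd == 0` and sorts to the top of the frame.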

The key diagnostic result is stark: many records show no meaningful standard deviation (response patterns such as a straight run of 1s, or a repeated 0.555 value), indicating respondents effectively marked answers without variation. These are treated as invalid response patterns and deleted. The transcript notes that deleting a portion of the sample (up to a cutoff such as 51 responses in the example) can improve measurement quality, though it also flags that a larger initial sample size helps absorb such deletions.
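
The deletion step itself can be sketched as a simple filter on the SD > 0.25 rule; the construct prefixes follow the video's naming, but the exact column layout is an assumption:

```python
import pandas as pd

def drop_straightliners(df: pd.DataFrame,
                        prefixes=("LG", "HC", "RC", "SC"),
                        threshold=0.25) -> pd.DataFrame:
    """Remove respondents whose answers barely vary across the target items."""
    items = [c for c in df.columns if c.startswith(prefixes)]
    sd = df[items].std(axis=1, ddof=1)
    # Keep only rows that clear the SD > 0.25 rule from the video.
    return df[sd > threshold]
```

The cleaned frame can then be exported (e.g. with `to_csv`) and re-imported into SmartPLS before re-running the algorithm.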

After removing the low-variation respondents, the model is re-run and HTMT is recalculated. The HTMT values drop substantially and move close to the acceptable range (near 0.90 in the example). Cross-loadings are reviewed again, and while some differences still exceed 0.10 (e.g., HC vs SC), the pattern is no longer severe enough to block discriminant validity.

Finally, bootstrapping is used to confirm the HTMT outcome under sampling variability. With bootstrapping (500 resamples in the example), the corrected HTMT results land around 0.9996, with no intermediate value between 0.90 and 1. That outcome supports the conclusion that discriminant validity is established. The takeaway is that discriminant validity fixes in SmartPLS aren’t only about model specification; cleaning low-variance respondents using a standard deviation function can be decisive when HTMT remains stubbornly high.
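
SmartPLS handles the resampling internally; purely as an illustration of the idea, a generic percentile bootstrap for any per-sample statistic (an HTMT value, for instance) could be sketched as:

```python
import numpy as np
import pandas as pd

def bootstrap_ci(df: pd.DataFrame, stat, n_boot=500, seed=1):
    """Percentile bootstrap CI for a per-sample statistic.

    `stat` takes a resampled DataFrame and returns a float;
    500 resamples mirror the example in the video.
    """
    rng = np.random.default_rng(seed)
    n = len(df)
    # Resample respondents with replacement and recompute the statistic.
    draws = [stat(df.iloc[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return np.percentile(draws, [2.5, 97.5])
```

Passing an HTMT function as `stat` yields a confidence interval; a common decision rule then checks whether the upper bound stays below the chosen cutoff (0.90, or 1, depending on the convention used).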

Cornell Notes

High HTMT values in SmartPLS can remain even after checking cross-loadings and deleting items. A reliable remedy is to inspect the raw data for respondents who show near-zero variability across items, which can inflate similarity between constructs. Using a standard deviation function, the analysis computes standard deviation per respondent for the problematic constructs (LG, HC, RC, SC) and applies a threshold (standard deviation should be > 0.25). Records with no meaningful standard deviation are deleted, the model is re-estimated, and HTMT is recalculated. After cleaning, HTMT drops close to the acceptable range, and bootstrapping (e.g., 500 samples) confirms discriminant validity.

Why can discriminant validity issues persist even when cross-loadings look acceptable or items are removed?

Because the measurement problem may come from the data quality, not only from the model. If respondents “straight-line” their answers—producing near-zero standard deviation across items—construct indicators can become artificially similar. That can keep HTMT values high even after item-level adjustments.

How does the standard deviation method diagnose problematic responses?

It computes the standard deviation of each respondent’s answers across items tied to specific constructs. The transcript uses a rule that standard deviation should be greater than 0.25. Respondents with extremely low or effectively zero standard deviation (e.g., repeated values like 1s or a repeated 0.555 pattern) are flagged as invalid response patterns.

Which constructs were targeted in the example when checking standard deviation?

The issue concentrates on four constructs: LG, HC, RC, and SC (with items under LG and the other constructs included in the standard deviation calculation). The analysis checks standard deviation for these constructs to find respondents who did not provide true variation in responses.

What happens after deleting low-standard-deviation respondents?

The dataset is saved and re-imported, the PLS algorithm is run again, and HTMT is recalculated. In the example, HTMT decreases significantly—from values around 0.921–0.934 down to values close to 0.90—indicating improved discriminant validity.

How is the final discriminant validity decision validated after cleaning?

Bootstrapping is run for HTMT (500 resamples in the transcript). The corrected bootstrap HTMT results move into an acceptable pattern (around 0.9996 with no intermediate value between 0.90 and 1), supporting the conclusion that discriminant validity is established.

If cross-loadings still show differences above 0.10, does that automatically mean discriminant validity fails?

Not necessarily. The transcript notes that some cross-loading differences remain above 0.10 (e.g., HC vs SC), but after response-quality cleaning, the overall HTMT outcome improves and bootstrapping supports discriminant validity. The HTMT-based assessment becomes the deciding factor.

Review Questions

  1. What threshold for standard deviation is used to flag invalid respondents, and why does that matter for HTMT?
  2. Describe the sequence of steps from HTMT inspection to standard deviation calculation, respondent deletion, and re-running the model.
  3. Why might deleting items based on cross-loadings be insufficient when the dataset contains low-variance response patterns?

Key Points

  1. Run the PLS algorithm and check HTMT; unusually high HTMT values can signal discriminant validity problems that won’t always be fixed by item deletion alone.

  2. Audit response quality by computing standard deviation per respondent across indicators for the constructs involved (e.g., LG, HC, RC, SC).

  3. Use the standard deviation threshold referenced in the workflow: standard deviation should be greater than 0.25.

  4. Delete respondents with near-zero standard deviation (straight-lining), then re-import the cleaned dataset and re-run the model.

  5. Recalculate HTMT after cleaning; a substantial drop toward the acceptable range indicates the issue was driven by data quality.

  6. Confirm the HTMT result with bootstrapping (the example uses 500 resamples) to ensure discriminant validity holds under sampling variability.

  7. Maintain a sufficiently large initial sample size because deleting low-variance respondents can reduce N and may require more data to preserve statistical stability.

Highlights

HTMT can stay high even after cross-loading checks—low-variance (straight-lined) responses can be the real culprit.
A standard deviation rule (threshold > 0.25) helps identify respondents whose answers lack meaningful variability.
Deleting those low-standard-deviation records can drive HTMT down close to 0.90 in the example.
Bootstrapping HTMT after cleaning provides the final confirmation that discriminant validity is established.

Topics

Mentioned

  • PLS
  • HTMT