
Reflective-Reflective HOC using #SmartPLS4: Reliability, Validity, and Hypotheses Testing

Research With Fawad · 4 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Validate each lower-order reflective dimension (outer loadings, reliability > 0.70, convergent validity > 0.50) before constructing the higher-order construct.

Briefing

Reflective–reflective higher order constructs in SmartPLS4 can be handled cleanly by validating the lower-order dimensions first, then treating those validated dimensions as “items” for the higher-order construct, and only after that running hypothesis tests. The practical payoff is that reliability, convergent validity, and discriminant validity get checked at both levels—so any mediation or structural claims rest on measurement quality rather than assumptions.

The workflow starts with a model where an independent variable is a higher order construct built from multiple reflective dimensions. In the example, “Internal Service Quality” is measured through four reflective dimensions—reliability, assurance, empathy, and responsiveness—and it influences “Organizational Performance,” with “Organizational Commitment” (or collaborative culture) acting as a mediator. Instead of combining the four dimensions into a single block immediately, each dimension is first modeled as its own lower-order construct and validated on its own.

After the lower-order constructs are specified, the next step is to run the PLS algorithm and inspect outer loadings, reliability, and validity. No items need to be deleted when reliability and convergent validity are acceptable (reliability above 0.70 and convergent validity, i.e. AVE, above 0.50 in the example). Discriminant validity is also checked; values slightly below 0.90 are treated as acceptable when the constructs belong to the same higher-order structure. Once all four lower-order dimensions meet the criteria, the analysis moves up one level.
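
The two thresholds are simple functions of the standardized outer loadings, so they can be checked by hand. A minimal sketch, using hypothetical loadings (not numbers from the video):

```python
# Composite reliability (CR) and average variance extracted (AVE)
# from standardized outer loadings, illustrating the > 0.70 and
# > 0.50 cut-offs applied to each lower-order dimension.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)  # standardized items: error = 1 - loading^2
    return s ** 2 / (s ** 2 + errors)

def ave(loadings):
    """AVE = mean of the squared loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical outer loadings for one lower-order dimension (e.g. "empathy")
lam = [0.78, 0.82, 0.75, 0.80]
cr, v = composite_reliability(lam), ave(lam)
print(f"CR = {cr:.3f}, AVE = {v:.3f}")
print("reliability OK:", cr > 0.70, "| convergent validity OK:", v > 0.50)
```

With these loadings both criteria pass, so no item deletion would be needed, matching the decision rule above.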

To validate the higher-order construct, the model is duplicated and the construct scores are exported, after which the measurement model is re-specified. The four validated lower-order constructs are removed from the structural links and reintroduced as the indicators of the higher-order construct "Internal Service Quality." Because the specification is reflective–reflective, the arrows point from the higher-order construct out to its four dimensions, just as each dimension's arrows point to its items, and the PLS algorithm is run again.

At this stage, the higher-order construct's measurement quality is assessed using the same reliability and validity logic. Because the lower-order constructs were already validated, reporting focuses on the higher-order construct's reliability, convergent validity, and discriminant validity rather than re-listing every lower-order result.

With measurement quality confirmed at both levels, hypothesis testing proceeds via bootstrapping. The example uses 10,000 bootstrap samples with bias-corrected confidence intervals, one-tailed because the direction of each hypothesized relationship is known. Path coefficients are checked for significance, and mediation is evaluated through specific indirect effects. Significant path coefficients together with significant indirect effects indicate that the hypothesized relationships, including mediation, are supported. The result is a step-by-step, measurement-first approach for reflective–reflective higher-order constructs in SmartPLS4 that culminates in reliable hypothesis testing.

Cornell Notes

Reflective–reflective higher order constructs in SmartPLS4 are best handled in three stages: validate lower-order dimensions, validate the higher-order construct built from those dimensions, then test hypotheses. First, each dimension (e.g., reliability, assurance, empathy, responsiveness) is modeled as a lower-order reflective construct and assessed using outer loadings, reliability (above 0.70), convergent validity (above 0.50), and discriminant validity (with some flexibility when dimensions belong to the same higher-order construct). Next, the validated lower-order constructs are treated as indicators for the higher-order construct “Internal Service Quality,” and reliability/validity are checked again at the higher-order level. Finally, bootstrapping (10,000; bias-corrected) tests path significance and mediation via indirect effects.

Why validate lower-order dimensions before building the higher-order construct in a reflective–reflective model?

Because the higher-order construct is formed from the lower-order dimensions, measurement problems at the dimension level can contaminate the higher-order construct. In the example, reliability, assurance, empathy, and responsiveness are validated first using outer loadings plus reliability (target > 0.70) and convergent validity (target > 0.50). Discriminant validity is also checked; values slightly under 0.90 can be acceptable when the dimensions are part of the same higher-order construct. Only after these checks pass does the analysis treat the dimensions as indicators for the higher-order construct.

How does SmartPLS4 treat validated lower-order constructs when forming a reflective–reflective higher-order construct?

After lower-order validation, the four lower-order constructs are re-specified as the indicators of the higher-order construct. Practically, the workflow duplicates the model, exports the construct scores, removes the original four constructs from the structural portion, and models "Internal Service Quality" as the higher-order latent variable with the four dimensions as its indicators. Because the setup is reflective–reflective, the arrows run from the higher-order construct to the four dimensions, matching the reflective measurement direction.
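
The "export scores, re-specify" step can be pictured as collapsing each validated dimension into a single score column per respondent, which then serves as one indicator of the higher-order construct. SmartPLS 4 handles this internally; the hand-rolled sketch below uses hypothetical survey values and equal weights purely to show the data reshaping involved.

```python
# Collapse each lower-order dimension into one score, producing the
# four indicator columns of "Internal Service Quality" used at the
# higher-order stage. Weights and responses are hypothetical.

def dimension_score(items, weights=None):
    """Weighted average of one respondent's item values for a dimension."""
    w = weights or [1.0] * len(items)
    return sum(wi * xi for wi, xi in zip(w, items)) / sum(w)

# One hypothetical survey row: {dimension: [item responses]}
respondent = {
    "reliability":    [4, 5, 4],
    "assurance":      [3, 4, 4],
    "empathy":        [5, 5, 4],
    "responsiveness": [4, 3, 4],
}

# Stage-two indicators for the higher-order construct: one score per dimension
hoc_indicators = {dim: dimension_score(items) for dim, items in respondent.items()}
print(hoc_indicators)
```

In the real workflow the scores come from the PLS estimation of the validated lower-order model rather than from simple averages, but the reshaping is the same: four constructs become four indicators.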

What validity checks matter at the higher-order level, and what gets reported?

At the higher-order level, the analysis again checks outer loadings, reliability, convergent validity, and discriminant validity against the same thresholds. Since the lower-order constructs were already validated, the workflow emphasizes reporting the higher-order construct's reliability and validity results rather than repeating every lower-order statistic.

How are hypotheses and mediation tested once measurement quality is confirmed?

Bootstrapping is used to test significance. The example runs bootstrapping with 10,000 samples and bias-corrected intervals, using a one-tailed approach because the direction of relationships is known. Path coefficients are examined for significance, and mediation is assessed through specific indirect effects; significant indirect effects indicate that the mediator (e.g., organizational commitment/collaborative culture) carries a meaningful portion of the effect from internal service quality to organizational performance.

What does “slightly low discriminant validity” mean in this workflow?

The example notes that discriminant validity values slightly below 0.90 can be acceptable when the compared constructs belong to the same higher-order construct. That matters because dimensions within the same higher-order construct are expected to correlate strongly; the 0.90 threshold is interpreted in that structural context rather than treated as a universal hard stop.
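
The 0.90 guideline corresponds to the HTMT ratio commonly used for discriminant validity in PLS-SEM: the average cross-construct item correlation divided by the geometric mean of the average within-construct item correlations. A sketch with hypothetical correlations between two dimensions of the same higher-order construct:

```python
# HTMT (heterotrait-monotrait ratio) for two constructs, given a
# dictionary of absolute item correlations. Values are hypothetical.
import math
from itertools import combinations

def mean_within(corr, items):
    """Average correlation among the items of one construct."""
    pairs = list(combinations(items, 2))
    return sum(abs(corr[(a, b)]) for a, b in pairs) / len(pairs)

def htmt(corr, items_a, items_b):
    hetero = sum(abs(corr[(a, b)]) for a in items_a for b in items_b) / (len(items_a) * len(items_b))
    return hetero / math.sqrt(mean_within(corr, items_a) * mean_within(corr, items_b))

corr = {
    ("e1", "e2"): 0.62, ("e1", "e3"): 0.58, ("e2", "e3"): 0.60,   # empathy items
    ("r1", "r2"): 0.64, ("r1", "r3"): 0.59, ("r2", "r3"): 0.61,   # responsiveness items
    # cross-construct (heterotrait) correlations
    ("e1", "r1"): 0.50, ("e1", "r2"): 0.48, ("e1", "r3"): 0.52,
    ("e2", "r1"): 0.49, ("e2", "r2"): 0.51, ("e2", "r3"): 0.47,
    ("e3", "r1"): 0.53, ("e3", "r2"): 0.50, ("e3", "r3"): 0.46,
}
value = htmt(corr, ["e1", "e2", "e3"], ["r1", "r2", "r3"])
print(f"HTMT = {value:.3f}")  # below 0.90, so discriminant validity is supported
```

Because sibling dimensions share variance through their common higher-order construct, their HTMT values naturally sit closer to the 0.90 ceiling than those of unrelated constructs would, which is why the threshold is read with some flexibility here.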

Review Questions

  1. In a reflective–reflective higher order model, what are the three major stages of analysis in SmartPLS4, and what is checked at each stage?
  2. Why might discriminant validity be interpreted differently for dimensions that belong to the same higher-order construct?
  3. How do bootstrapping settings (10,000 samples, bias-corrected, one-tailed) influence how mediation and path significance are evaluated?

Key Points

  1. Validate each lower-order reflective dimension (outer loadings, reliability > 0.70, convergent validity > 0.50) before constructing the higher-order construct.
  2. Treat the validated lower-order constructs as indicators for the higher-order reflective–reflective latent variable, ensuring arrow direction matches the reflective–reflective specification.
  3. Run the PLS algorithm again after re-specifying the higher-order construct to confirm higher-order reliability and validity.
  4. Report measurement results at the higher-order level once lower-order constructs are already validated, avoiding redundant reporting.
  5. Use bootstrapping with 10,000 samples and bias-corrected intervals to test path coefficients and mediation via specific indirect effects.
  6. Significant path coefficients and significant indirect effects together support both direct hypotheses and mediation hypotheses.

Highlights

The measurement-first sequence—lower-order validation, then higher-order validation, then bootstrapped hypothesis testing—prevents weak dimensions from undermining higher-order conclusions.
In reflective–reflective higher order modeling, the four validated dimensions become the indicators of the higher-order construct, with arrow direction set to match reflective–reflective logic.
Mediation is confirmed through significant specific indirect effects, not just by observing significant direct paths.
