
#SmartPLS4 Series 16 - How to Assess Reflective-Reflective Higher Order Construct?

Research With Fawad
5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Validate all lower-order reflective constructs first by checking factor loadings, reliability, and validity before moving up the hierarchy.

Briefing

Validating a reflective–reflective higher order construct in SmartPLS hinges on a two-stage workflow: first validate the lower-order dimensions, then generate latent variable scores for those dimensions and use the scores as indicators for the higher-order construct. In the example, “internal service quality” is treated as a reflective–reflective higher order construct built from lower-order dimensions—reliability assurance, empathy, and responsiveness—each measured by multiple items. The practical payoff is that researchers can keep the same reliability/validity logic used for lower-order reflective constructs while correctly modeling the higher-level abstraction.

The process starts by assessing the measurement model for all lower-order constructs. With a hierarchical component model in mind, the model includes both lower-order constructs (the concrete dimensions) and higher-order constructs (the more general constructs). After running the PLS algorithm, the workflow checks factor loadings, reliability, and validity for the lower-order constructs. Once those dimensions are confirmed, the next step is specific to reflective–reflective higher order constructs: scores must be created for each lower-order dimension so they can serve as single indicators at the higher level.
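SmartPLS computes these reliability and convergent-validity diagnostics automatically; as a rough illustration of what the numbers mean, the two standard quantities can be sketched in plain Python from standardized outer loadings (the loadings below are hypothetical, not taken from the video):

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (rho_c) from standardized outer loadings,
    assuming standardized indicators with error variance 1 - loading**2."""
    l = np.asarray(loadings, dtype=float)
    return l.sum() ** 2 / (l.sum() ** 2 + (1 - l ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: the mean squared standardized loading of the construct's items."""
    l = np.asarray(loadings, dtype=float)
    return (l ** 2).mean()

# Hypothetical loadings for one lower-order dimension such as "empathy"
loadings = [0.82, 0.78, 0.85, 0.80]
print(round(composite_reliability(loadings), 3))       # should clear the 0.70 threshold
print(round(average_variance_extracted(loadings), 3))  # should clear the 0.50 threshold
```

The usual cut-offs are composite reliability above 0.70 and AVE above 0.50; both hold for these illustrative loadings.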

SmartPLS implements this through a disjoint two-stage approach. In stage one, the analyst runs the PLS algorithm and then uses the “latent variable scores” report to export dimension scores (e.g., reliability assurance, empathy, responsiveness). These scores are copied into a new dataset aligned with the original respondents. The key move is transforming each multi-item lower-order dimension into a single latent variable score—effectively collapsing the item set into one indicator per dimension.
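Outside SmartPLS, the "copy scores into a new dataset" step amounts to appending one score column per dimension to the respondent-level data, keeping row order intact. A minimal pandas sketch with hypothetical column names and values:

```python
import pandas as pd

# Stage-one latent variable scores as exported from the "latent variable
# scores" report (hypothetical values; one row per respondent, in the same
# order as the original dataset).
scores = pd.DataFrame({
    "reliability_assurance": [0.41, -1.10, 0.62],
    "empathy": [0.05, -0.77, 1.21],
    "responsiveness": [-0.33, -0.95, 0.88],
})

# Original indicator-level dataset (hypothetical item columns).
original = pd.DataFrame({
    "emp1": [4, 2, 5],
    "emp2": [4, 1, 5],
    "resp1": [3, 2, 5],
})

# Append the score columns; identical row order preserves respondent alignment.
stage_two = pd.concat([original.reset_index(drop=True),
                       scores.reset_index(drop=True)], axis=1)
print(list(stage_two.columns))
```

In stage two, the three appended score columns (not the original items) serve as the higher-order construct's indicators.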

Stage two rebuilds the measurement model for the higher-order construct using those latent variable scores as indicators. For “internal service quality,” the model is updated so that the latent variable score indicators—one per lower-order dimension—point into the higher-order construct, preserving the reflective–reflective arrow direction. The analyst then runs the PLS algorithm again and evaluates measurement quality at the higher level using the same outputs as for lower-order reflective constructs: outer loadings, reliability metrics (including alpha), and validity checks such as discriminant validity (the higher-order construct’s within-construct variance should exceed the variance it shares with other constructs). If these checks pass, “internal service quality” is considered properly measured as a reflective–reflective higher order construct.
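The within-versus-shared-variance check corresponds to the Fornell–Larcker criterion: each construct's AVE should exceed its squared correlation with every other construct. A small sketch with hypothetical AVEs and construct correlations (not from the video):

```python
import numpy as np

def fornell_larcker_ok(ave, corr):
    """Fornell-Larcker criterion: each construct's AVE must exceed its
    squared correlation with every other construct in the model."""
    ave = np.asarray(ave, dtype=float)
    shared = np.asarray(corr, dtype=float) ** 2
    np.fill_diagonal(shared, 0.0)  # ignore each construct's self-correlation
    # Compare construct i's AVE against row i of squared correlations.
    return bool(np.all(ave[:, None] > shared))

# Hypothetical AVEs and correlation matrix for three constructs
ave = [0.66, 0.58, 0.61]
corr = np.array([[1.00, 0.54, 0.48],
                 [0.54, 1.00, 0.51],
                 [0.48, 0.51, 1.00]])
print(fornell_larcker_ok(ave, corr))  # True: every AVE exceeds every squared correlation
```

Equivalently, SmartPLS presents this as the square root of each AVE on the diagonal of the correlation table, which must exceed the off-diagonal correlations.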

The transcript also clarifies how higher-order constructs differ by type. Reflective–formative higher order constructs behave differently because the lower-order components “form” the higher-level construct; removing a component can eliminate the higher-order concept. Reflective–reflective higher order constructs, by contrast, keep the higher-order construct intact even if a lower-order dimension is removed, reflecting the interchangeable nature of the dimensions. Although reflective–formative validation is deferred to later sessions, the immediate guidance is clear: reflective–reflective higher-order constructs should be validated with the disjoint two-stage approach, using latent variable scores as indicators and then applying standard reflective measurement diagnostics at the higher level.

Cornell Notes

Reflective–reflective higher order constructs in SmartPLS are validated using a disjoint two-stage approach. First, the lower-order dimensions are validated as reflective constructs by checking factor loadings, reliability, and validity. Next, latent variable scores for each validated lower-order dimension are generated (via the latent variable scores report) and exported into a new dataset. In the second stage, those scores become the indicators of the higher-order construct, and the higher-order measurement model is assessed using the same reflective diagnostics: outer loadings, reliability (e.g., alpha), and validity including discriminant validity (within-construct variance greater than shared variance). This keeps the measurement logic consistent while correctly modeling the hierarchy.

Why does reflective–reflective higher order validation require a two-stage workflow in SmartPLS?

Because the higher-order construct is measured at a higher level of abstraction, while its lower-order dimensions are originally measured with multiple items. The disjoint two-stage approach converts each validated lower-order dimension into a single latent variable score, then uses those scores as indicators for the higher-order construct. This ensures the higher-level model can be assessed with reflective measurement checks (outer loadings, reliability, validity) using indicators that represent the lower-order dimensions.

How are latent variable scores generated and used for the higher-order construct?

After running the PLS algorithm in stage one, the analyst opens the “latent variable scores” report, exports/copies the scores to CSV, and appends them to the dataset used for modeling. The original multi-item indicators for each lower-order dimension are replaced by the corresponding latent variable score columns. In stage two, those score columns are added as indicators pointing to the higher-order construct (e.g., reliability assurance, empathy, responsiveness pointing into internal service quality).

What does the disjoint two-stage approach mean in practice?

“Disjoint” refers to how stage one is estimated: the lower-order constructs are modeled on their own, without the higher-order construct in the model (in contrast to the embedded two-stage approach, which keeps the higher-order construct in stage one via repeated indicators). For the reflective–reflective higher-order construct, the lower-order dimension scores are then used as indicators at the higher level, while other parts of the model remain built at the indicator level as usual. The transcript emphasizes duplicating the measurement model, deleting unnecessary parts, and constructing the reflective–reflective higher-order measurement block from the latent variable scores.

Which measurement-model checks are performed for the higher-order reflective–reflective construct?

The same reflective diagnostics used for lower-order reflective constructs: outer loadings are inspected for adequacy, reliability is checked (including alpha and related reliability outputs), and validity is evaluated. Discriminant validity is specifically assessed, with the transcript noting that within-construct variance for the higher-order construct should be higher than shared variance.
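For intuition about the alpha check, Cronbach's alpha can be computed directly from raw item responses; SmartPLS reports it for you, and the data below are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from a respondents-by-items array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)       # per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses for one construct's three items
items = [[4, 4, 5], [2, 1, 2], [5, 5, 4], [3, 3, 3], [4, 5, 4]]
print(round(cronbach_alpha(items), 3))
```

Values above 0.70 are conventionally taken as acceptable reliability, the same rule applied at both levels of the hierarchy.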

How does reflective–formative differ from reflective–reflective in higher-order constructs?

In reflective–formative higher order constructs, the lower-order components form the higher-order construct—so removing a component can eliminate the higher-order concept. In reflective–reflective higher order constructs, the lower-order dimensions are interchangeable and the higher-order construct remains identifiable even if one dimension is removed. The transcript uses internal marketing as the reflective–formative example and internal service quality as the reflective–reflective example, reserving reflective–formative validation for later videos.

Review Questions

  1. What steps are required to convert validated lower-order dimensions into indicators for a reflective–reflective higher order construct in SmartPLS?
  2. Which specific outputs (e.g., outer loadings, reliability, discriminant validity) must be checked after building the higher-order measurement model with latent variable scores?
  3. How would the validation logic change if the higher-order construct were reflective–formative instead of reflective–reflective?

Key Points

  1. Validate all lower-order reflective constructs first by checking factor loadings, reliability, and validity before moving up the hierarchy.
  2. For reflective–reflective higher order constructs, use a disjoint two-stage approach rather than directly modeling items at the higher level.
  3. Generate latent variable scores for each validated lower-order dimension and export them into a dataset aligned with the original respondents.
  4. Rebuild the measurement model so the latent variable scores serve as indicators pointing into the higher-order construct (preserving reflective–reflective direction).
  5. After stage two, assess the higher-order construct with reflective measurement diagnostics: outer loadings, reliability (including alpha), and validity such as discriminant validity.
  6. Discriminant validity should show within-construct variance exceeding shared variance for the higher-order construct.
  7. Reflective–formative higher order constructs require different validation logic because the lower-order components form the higher-level construct.

Highlights

Reflective–reflective higher order constructs are validated by generating latent variable scores for lower-order dimensions and using those scores as indicators at the higher level.
The disjoint two-stage approach keeps the reflective measurement-model assessment consistent: outer loadings, reliability, and discriminant validity are checked again for the higher-order construct.
In the example, internal service quality is modeled as reflective–reflective using reliability assurance, empathy, and responsiveness as lower-order dimensions.
Discriminant validity is treated as a pass/fail criterion at the higher-order level, with within-construct variance needing to exceed shared variance.
