
#SmartPLS4 Webinar Day 3: Higher Order Construct Assessment

Research With Fawad · 6 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use arrow direction and the interchangeability test to distinguish reflective (items interchangeable) from formative (items not interchangeable) constructs.

Briefing

Higher-order constructs in PLS-SEM can be treated as either reflective or formative—but the choice changes how researchers validate the measurement model and how they interpret structural results. The session lays out practical decision rules for distinguishing reflective versus formative constructs, then walks through a step-by-step workflow for validating higher-order constructs (hierarchical component models) and testing hypotheses, including moderation effects.

The core differentiation starts with arrow direction in the measurement model: when arrows point from the latent variable out to its indicators, the construct is reflective; when the indicators point toward the latent variable, the construct is formative. The session also adds a functional test: if removing one item from a multi-item construct does not meaningfully change its conceptualization and operationalization, the items behave as interchangeable—typical of reflective measurement. If items cannot substitute for one another, the construct is formative. Concrete examples contrast “diet” (reflective: other items can cover for one missing item) with “health” (formative: missing elements like exercise or sleep cannot be compensated by the remaining items).

Higher-order constructs—common in hierarchical component modeling—often appear when concepts are too abstract to measure directly and instead are represented through subdimensions. Servant leadership is used as an example of a higher-order construct measured through seven lower-order dimensions (behaving ethically, emotional healing, empowerment, pioneering, relationship building, wisdom, and development). The session emphasizes that higher-order constructs can be reflective or formative depending on whether subdimensions are replaceable or distinct. A key nuance: even if one lower-order dimension is removed due to weak loadings, the higher-order construct itself may still exist conceptually, which makes the reflective-versus-formative classification more interpretive and sometimes more subjective. To reduce subjectivity, researchers can (1) use theoretical logic about whether dimensions are interchangeable, and (2) check how the same construct was operationalized in prior studies.

Once the reflective-versus-formative decision is made, validation proceeds in two layers: first validate all lower-order constructs (reliability, factor loadings, and validity), then validate the higher-order construct itself. For reflective–reflective higher-order models, the workflow stays similar: run PLS-SEM, check outer loadings, composite reliability, and validity, then address discriminant validity issues (e.g., when the higher-order construct correlates too strongly with another construct). Validation of the higher-order construct relies on SmartPLS latent variable scores: create a new dataset using latent variable scores, re-specify the higher-order model where the lower-order constructs become “items,” and re-run measurement validation.
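The two-stage idea can be sketched in a few lines. Everything below is a stand-in: the simulated scores take the place of the latent variable scores SmartPLS exports, and the simple mean-score proxy and correlation-based loadings only illustrate the structure of the re-specified model, not SmartPLS's actual estimation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stage 1 stand-in: pretend these are SmartPLS latent variable scores for
# three lower-order dimensions (one row per respondent). Column names are
# illustrative (servant leadership subdimensions from the example).
lv_scores = pd.DataFrame({
    "empowerment": rng.normal(size=100),
    "emotional_healing": rng.normal(size=100),
    "wisdom": rng.normal(size=100),
})

# Stage 2: the lower-order scores become the "items" of the higher-order
# construct, so the new dataset is simply the score matrix itself.
stage2_data = lv_scores.copy()

# Rough reflective check: correlate each "item" with a construct proxy
# (here, the unweighted mean of standardized scores) as loading stand-ins.
z = (stage2_data - stage2_data.mean()) / stage2_data.std(ddof=0)
proxy = z.mean(axis=1)
loadings = z.apply(lambda col: np.corrcoef(col, proxy)[0, 1])
print(loadings.round(2))
```

In practice the re-specified model is run in SmartPLS itself; the point of the sketch is only that the stage-2 dataset contains one column per lower-order construct, one row per respondent.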

Structural testing follows after measurement validation. Moderation is handled in the structural model (not by building moderators into the measurement model). The session reports a case where one moderator (role ambiguity) weakens the relationship between collaborative culture and organizational performance as expected, while another (role conflict) strengthens it—prompting a discussion strategy that treats unexpected results as explainable rather than as errors to be hidden. The session argues that ethical research practice requires keeping the data and building logical explanations for counterintuitive findings.
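The mechanics of a structural-model moderation test can be sketched with simulated data. The variable names (collaborative culture, role conflict, organizational performance), the true coefficients, and the OLS fit are all illustrative; SmartPLS estimates the interaction path with PLS and bootstrapping, not ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical standardized latent variable scores for predictor and moderator.
collab_culture = rng.normal(size=n)
role_conflict = rng.normal(size=n)

# Two-stage moderation: the interaction term is the product of the
# standardized predictor and moderator scores, restandardized.
interaction = collab_culture * role_conflict
interaction = (interaction - interaction.mean()) / interaction.std(ddof=0)

# Simulated outcome with a true moderation effect of 0.3 (illustrative only).
org_performance = (0.4 * collab_culture + 0.2 * role_conflict
                   + 0.3 * interaction + 0.5 * rng.normal(size=n))

# Structural step: regress the outcome on predictor, moderator, interaction.
# The interaction's coefficient is the moderation effect.
X = np.column_stack([np.ones(n), collab_culture, role_conflict, interaction])
coefs, *_ = np.linalg.lstsq(X, org_performance, rcond=None)
moderation_effect = coefs[3]
print(round(moderation_effect, 2))
```

A positive interaction coefficient (as with role conflict in the session's example) means the moderator strengthens the focal relationship; a negative one (role ambiguity) weakens it.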

The session then extends the workflow to reflective–formative higher-order constructs. Here, validation focuses on formative-specific diagnostics: multicollinearity via VIF (values below 5 are treated as acceptable), significance of outer weights via bootstrapping, and outer loadings (used to judge whether indicators contribute meaningfully). For formative higher-order constructs, redundancy analysis is also used when a global measure exists, linking the formative construct to an overall item to establish convergent validity.
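The VIF diagnostic can be reproduced by hand: regress each indicator on the remaining indicators and compute 1/(1 − R²). The data below are simulated, with the second indicator deliberately built to overlap with the first so the collinearity shows up as an inflated VIF.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 150

# Hypothetical formative indicators; x2 overlaps heavily with x1 on purpose.
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
names = ["x1", "x2", "x3"]

def vif(X, j):
    """VIF_j = 1 / (1 - R²) from regressing indicator j on the others."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    fitted = A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    resid = X[:, j] - fitted
    r2 = 1 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

vifs = {name: vif(X, j) for j, name in enumerate(names)}
flagged = [name for name, v in vifs.items() if v >= 5]  # VIF < 5 acceptable
print({k: round(v, 1) for k, v in vifs.items()}, flagged)
```

Indicators flagged here would prompt the researcher to reconsider overlapping formative indicators before interpreting outer weights.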

Finally, model evaluation metrics are covered: R² for in-sample explanatory power, f² for effect size (small/medium/large thresholds), and Q² for predictive relevance (values above zero indicate predictive relevance; higher thresholds indicate stronger prediction). The session concludes with guidance on reporting these statistics alongside hypothesis results and previews multi-group analysis for later sessions.

Cornell Notes

Higher-order constructs in PLS-SEM can be reflective or formative, and that classification determines how measurement models are validated. Reflective constructs rely on interchangeable indicators (supported by arrow direction and the idea that removing an item doesn’t break the construct), while formative constructs treat subdimensions as distinct components that cannot substitute for one another. For reflective–reflective higher-order models, researchers validate lower-order constructs first, then validate the higher-order construct using SmartPLS latent variable scores by re-specifying the higher-order model where lower-order constructs become “items.” For reflective–formative higher-order models, validation shifts to formative diagnostics: check VIF for multicollinearity, test outer weights with bootstrapping, use outer loadings to judge indicator contribution, and run redundancy analysis when a global measure exists. After measurement validation, structural paths (including moderation) are bootstrapped and reported with R², f², and Q².

What practical rules help decide whether a construct is reflective or formative in PLS-SEM?

Arrow direction is the first cue: arrows pointing from the latent variable to its indicators signal reflective measurement, while indicators pointing toward the latent variable signal formative measurement. A second, functional rule checks interchangeability: if deleting one item from a multi-item construct does not significantly change the conceptualization and operationalization, the items can be treated as interchangeable—typical of reflective constructs. If items cannot cover for each other (e.g., “health” missing exercise or sleep cannot be compensated by the remaining items), the construct behaves as formative.

Why do higher-order constructs require validation at two levels in SmartPLS?

Lower-order constructs must be validated first (reliability, factor loadings, and validity). Then the higher-order construct itself must be validated, because it is also a latent construct with its own measurement properties. The session stresses that validating only the lower-order constructs is not enough; the higher-order construct must pass the same measurement checks (or formative-specific checks, depending on the model type).

How does SmartPLS latent variable scoring support higher-order construct validation?

After validating lower-order constructs, SmartPLS can generate latent variable scores for each respondent for each lower-order construct. Researchers then create a new dataset using these latent variable scores as inputs. In the higher-order measurement model, the lower-order constructs become the “items” for the higher-order construct, and the higher-order construct is validated using outer loadings/weights, reliability, and validity (plus discriminant validity checks when needed).

What changes when moving from reflective–reflective to reflective–formative higher-order constructs?

The session keeps the initial lower-order validation step the same, but changes the higher-order validation logic. For formative higher-order constructs, researchers assess multicollinearity using VIF (acceptable when VIF < 5), test outer weights for significance via bootstrapping, and evaluate outer loadings to judge indicator contribution. If a global measure exists, redundancy analysis is used to confirm convergent validity by checking that the formative construct relates strongly to the global item.
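The bootstrapped significance test for outer weights can be illustrated with a simplified stand-in: resample respondents with replacement, re-estimate the weights on each resample, and form t-values from the bootstrap standard errors. OLS weights substitute here for PLS outer weight estimation, and the data and true weights are invented; the third indicator is given a true weight of zero so its t-value should typically land below the significance cutoff.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120

# Hypothetical data: three formative indicators predicting a construct proxy.
X = rng.normal(size=(n, 3))
y = X @ np.array([0.5, 0.3, 0.0]) + 0.4 * rng.normal(size=n)

def outer_weights(Xb, yb):
    # OLS weights as a simple stand-in for PLS outer weight estimation.
    A = np.column_stack([np.ones(len(yb)), Xb])
    return np.linalg.lstsq(A, yb, rcond=None)[0][1:]

# Bootstrap: resample rows with replacement, re-estimate weights each time.
boot = np.array([outer_weights(X[idx], y[idx])
                 for idx in [rng.integers(0, n, n) for _ in range(1000)]])

est = outer_weights(X, y)
t_values = est / boot.std(axis=0, ddof=1)
significant = np.abs(t_values) > 1.96  # two-tailed 5% rule of thumb
print(np.round(t_values, 2), significant)
```

When a weight is non-significant, the session's guidance is to fall back on its outer loading before deciding whether the indicator still contributes meaningfully.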

How should unexpected moderation results be handled in the write-up?

The session argues against deleting moderators or altering results after data collection. Instead, researchers should keep the findings and build logical explanations for why the unexpected direction or strength occurred. The example given shows role ambiguity weakening the collaborative culture → organizational performance link as expected, while role conflict strengthens it; the discussion rationale is that role conflict may push employees to communicate and collaborate to reduce conflict, which then improves performance.

What do R², f², and Q² mean for evaluating a PLS-SEM model?

R² measures in-sample explanatory power: how much variance in endogenous constructs is explained by predictors. f² assesses effect size: how much R² changes when a specific predictor is removed (small/medium/large thresholds are given). Q² measures predictive relevance: values above zero indicate predictive relevance, with higher thresholds indicating stronger predictive ability. The session recommends reporting these statistics alongside hypothesis results.
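The f² calculation is simple arithmetic over two model runs. The R² and Q² numbers below are illustrative, not taken from the webinar; the thresholds are the Cohen-style cutoffs commonly used in PLS-SEM reporting (0.02 small, 0.15 medium, 0.35 large).

```python
# R² from the model with and without the focal predictor (illustrative).
r2_full = 0.56
r2_reduced = 0.48

# Effect size: change in R² scaled by unexplained variance of the full model.
f2 = (r2_full - r2_reduced) / (1 - r2_full)

size = ("large" if f2 >= 0.35 else
        "medium" if f2 >= 0.15 else
        "small" if f2 >= 0.02 else "negligible")

# Predictive relevance: any Q² above zero counts (illustrative value).
q2 = 0.31
has_predictive_relevance = q2 > 0
print(round(f2, 3), size, has_predictive_relevance)
```

Here f² = 0.08 / 0.44 ≈ 0.18, a medium effect, which would be reported alongside the path coefficient and its bootstrap significance.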

Review Questions

  1. When would removing an item from a construct be considered evidence for reflective measurement, and when would it point toward formative measurement?
  2. In a reflective–reflective higher-order model, what dataset transformation is used to validate the higher-order construct in SmartPLS?
  3. For a reflective–formative higher-order construct, which diagnostics address multicollinearity and indicator contribution, and what additional test is used when a global measure exists?

Key Points

  1. Use arrow direction and the interchangeability test to distinguish reflective (items interchangeable) from formative (items not interchangeable) constructs.

  2. Higher-order constructs require validating both lower-order constructs and the higher-order construct itself; measurement validation cannot stop at the subdimensions.

  3. For reflective–reflective higher-order models, validate the higher-order construct using SmartPLS latent variable scores by re-specifying lower-order constructs as items.

  4. For reflective–formative higher-order models, validate using formative diagnostics: VIF for multicollinearity, bootstrapped significance of outer weights, and outer loadings to confirm indicator contribution.

  5. Address discriminant validity problems by checking correlations and cross-loadings/standard deviations, then revising indicators if needed rather than ignoring the issue.

  6. Moderation effects belong in the structural model; unexpected moderation directions should be explained logically in the discussion, not removed to “fix” results.

  7. Report R² (explanatory power), f² (effect size), and Q² (predictive relevance) alongside hypothesis testing outcomes.

Highlights

Reflective vs formative classification hinges on both arrow direction and whether items can substitute for each other—diet behaves reflectively, while health behaves formatively.
Higher-order construct validation in SmartPLS relies on latent variable scores: lower-order constructs become “items” for the higher-order measurement model.
Formative higher-order validation requires VIF checks (VIF < 5 as acceptable), bootstrapped outer weight significance, and redundancy analysis when a global measure exists.
Unexpected moderation effects (e.g., role conflict strengthening a relationship) should be retained and explained with logical mechanisms rather than corrected by deleting variables.
Model quality is evaluated with R², f², and Q², with Q² above zero indicating predictive relevance.

Topics

Mentioned

  • PLS-SEM
  • PLS
  • SCML
  • XCM
  • IV
  • DV
  • CSR
  • IM
  • OC
  • CC
  • RC
  • OP
  • VIF
  • Q²
  • R²
  • f²
  • LV
  • PLS predict