# SmartPLS4 Webinar Day 3: Higher-Order Construct Assessment
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Higher-order constructs in PLS-SEM can be treated as either reflective or formative—but the choice changes how researchers validate the measurement model and how they interpret structural results. The session lays out practical decision rules for distinguishing reflective versus formative constructs, then walks through a step-by-step workflow for validating higher-order constructs (hierarchical component models) and testing hypotheses, including moderation effects.
The core differentiation starts with arrow direction in the measurement model: when arrows point from the latent variable toward its indicators, the construct is reflective; when the indicators point toward the latent variable, the construct is formative. The session also adds a functional test: if removing one item from a multi-item construct does not meaningfully change the conceptualization and operationalization, the items behave as interchangeable, which is typical of reflective measurement. If items cannot substitute for one another, the construct is formative. Concrete examples contrast “diet” (reflective: other items can cover for one missing item) with “health” (formative: missing elements like exercise or sleep cannot be compensated for by the remaining items).
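The two decision rules above can be sketched as a tiny helper. This is purely illustrative, not part of any SmartPLS API; the function name and boolean inputs are hypothetical.

```python
# Hedged sketch: encode the arrow-direction and interchangeability tests
# from the session as a small decision helper (illustrative names only).

def classify_construct(arrows_point_to_indicators: bool,
                       items_interchangeable: bool) -> str:
    """Apply the two practical tests for construct type."""
    if arrows_point_to_indicators and items_interchangeable:
        return "reflective"   # e.g., diet: items can cover for a missing item
    if not arrows_point_to_indicators and not items_interchangeable:
        return "formative"    # e.g., health: exercise/sleep not substitutable
    return "ambiguous: consult theory and prior operationalizations"

print(classify_construct(True, True))    # reflective
print(classify_construct(False, False))  # formative
```

When the two tests disagree, the session's advice applies: fall back on theoretical logic and prior operationalizations of the same construct.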
Higher-order constructs, common in hierarchical component modeling, often appear when concepts are too abstract to measure directly and instead are represented through subdimensions. Servant leadership is used as an example of a higher-order construct measured through seven lower-order dimensions (behaving ethically, emotional healing, empowerment, pioneering, relationship building, wisdom, and development). The session emphasizes that higher-order constructs can be reflective or formative depending on whether subdimensions are replaceable or distinct. A key nuance: even if one lower-order dimension is removed due to weak loadings, the higher-order construct may still exist, which makes the reflective-versus-formative classification more interpretive and sometimes more subjective. To reduce subjectivity, researchers can (1) use theoretical logic about whether dimensions are interchangeable, and (2) check how the same construct was operationalized in prior studies.
Once the reflective-versus-formative decision is made, validation proceeds in two layers: first validate all lower-order constructs (reliability, factor loadings, and validity), then validate the higher-order construct itself. For reflective–reflective higher-order models, the workflow stays similar: run PLS-SEM, check outer loadings, composite reliability, and validity, then address discriminant validity issues (e.g., when the higher-order construct correlates too strongly with another construct). Validation of the higher-order construct relies on SmartPLS latent variable scores: create a new dataset using latent variable scores, re-specify the higher-order model where the lower-order constructs become “items,” and re-run measurement validation.
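The latent-variable-score step can be illustrated with a minimal sketch. The sketch assumes equal-weight standardized means as a stand-in for the scores SmartPLS estimates (SmartPLS uses its own weighting); the item names and data are hypothetical.

```python
import statistics

# Illustrative two-stage sketch (not the SmartPLS implementation):
# stage 1 approximates each lower-order construct's latent variable score
# as the mean of its standardized items; stage 2 treats those scores as
# the "items" of the higher-order construct. Column names are hypothetical.

def lv_score(rows, items):
    """Equal-weight proxy for a latent variable score."""
    means = {i: statistics.mean(r[i] for r in rows) for i in items}
    sds = {i: statistics.stdev(r[i] for r in rows) for i in items}
    return [statistics.mean((r[i] - means[i]) / sds[i] for i in items)
            for r in rows]

# Stage 1: raw item data for two lower-order constructs (hypothetical).
data = [{"eh1": 4, "eh2": 5, "emp1": 3, "emp2": 4},
        {"eh1": 2, "eh2": 3, "emp1": 5, "emp2": 5},
        {"eh1": 5, "eh2": 4, "emp1": 2, "emp2": 3}]

# Stage 2 dataset: lower-order constructs become the HOC's indicators.
stage2 = list(zip(lv_score(data, ["eh1", "eh2"]),
                  lv_score(data, ["emp1", "emp2"])))
print(stage2)  # each row now holds one score per lower-order construct
```

In SmartPLS itself this corresponds to exporting the latent variable scores, building a new dataset from them, and re-running measurement validation on the re-specified higher-order model.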
Structural testing follows after measurement validation. Moderation is handled in the structural model (not by building moderators into the measurement model). The session reports a case where one moderator (role ambiguity) weakens the relationship between collaborative culture and organizational performance as expected, while another (role conflict) strengthens it—prompting a discussion strategy that treats unexpected results as explainable rather than as errors to be hidden. The session argues that ethical research practice requires keeping the data and building logical explanations for counterintuitive findings.
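The mechanics of a structural-model moderation test can be sketched as a mean-centered interaction term. The variable names and scores below are hypothetical; SmartPLS constructs and bootstraps this term internally when a moderator is added.

```python
import statistics

# Minimal sketch of a mean-centered interaction term for moderation,
# assuming latent variable scores are already available (hypothetical data).

collab = [3.2, 4.1, 2.8, 3.9]     # collaborative culture scores
ambiguity = [2.5, 1.9, 3.4, 2.2]  # role ambiguity (moderator) scores

def center(xs):
    """Subtract the mean so the interaction is interpretable."""
    m = statistics.mean(xs)
    return [x - m for x in xs]

# Interaction = centered predictor x centered moderator; the sign and
# bootstrap significance of its path indicate weakening vs. strengthening.
interaction = [c * a for c, a in zip(center(collab), center(ambiguity))]
print(interaction)
```

A negative, significant interaction path would match the expected weakening effect of role ambiguity; a positive one, as with role conflict in the session, is the kind of counterintuitive result the discussion section should explain rather than delete.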
The session then extends the workflow to reflective–formative higher-order constructs. Here, validation focuses on formative-specific diagnostics: multicollinearity via VIF (values below 5 are treated as acceptable), significance of outer weights via bootstrapping, and outer loadings (used to judge whether indicators contribute meaningfully). For formative higher-order constructs, redundancy analysis is also used when a global measure exists, linking the formative construct to an overall item to establish convergent validity.
Finally, model evaluation metrics are covered: R² for in-sample explanatory power, f² for effect size (small/medium/large thresholds), and Q² for predictive relevance (values above zero indicate predictive relevance; higher thresholds indicate stronger prediction). The session concludes with guidance on reporting these statistics alongside hypothesis results and previews multi-group analysis for later sessions.
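The f² calculation and the conventional small/medium/large cutoffs (0.02 / 0.15 / 0.35) can be sketched directly. The R² values below are hypothetical; SmartPLS computes R² and Q² itself.

```python
# Sketch of the reported evaluation metrics, using the conventional
# f^2 thresholds (0.02 small / 0.15 medium / 0.35 large).

def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Effect size of a predictor: change in R^2 when it is removed."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def f2_label(f2: float) -> str:
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "medium"
    if f2 >= 0.02:
        return "small"
    return "negligible"

f2 = f_squared(0.55, 0.48)  # hypothetical R^2 with and without a predictor
print(round(f2, 3), f2_label(f2))

q2 = 0.21  # hypothetical blindfolding result
print("predictive relevance" if q2 > 0 else "no predictive relevance")
```

Reporting all three alongside path coefficients and p-values gives the reader explanatory power (R²), the contribution of each predictor (f²), and out-of-sample relevance (Q²) in one table.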
Cornell Notes
Higher-order constructs in PLS-SEM can be reflective or formative, and that classification determines how measurement models are validated. Reflective constructs rely on interchangeable indicators (supported by arrow direction and the idea that removing an item doesn’t break the construct), while formative constructs treat subdimensions as distinct components that cannot substitute for one another. For reflective–reflective higher-order models, researchers validate lower-order constructs first, then validate the higher-order construct using SmartPLS latent variable scores by re-specifying the higher-order model where lower-order constructs become “items.” For reflective–formative higher-order models, validation shifts to formative diagnostics: check VIF for multicollinearity, test outer weights with bootstrapping, use outer loadings to judge indicator contribution, and run redundancy analysis when a global measure exists. After measurement validation, structural paths (including moderation) are bootstrapped and reported with R², f², and Q².
- What practical rules help decide whether a construct is reflective or formative in PLS-SEM?
- Why do higher-order constructs require validation at two levels in SmartPLS?
- How does SmartPLS latent variable scoring support higher-order construct validation?
- What changes when moving from reflective–reflective to reflective–formative higher-order constructs?
- How should unexpected moderation results be handled in the write-up?
- What do R², f², and Q² mean for evaluating a PLS-SEM model?
Review Questions
- When would removing an item from a construct be considered evidence for reflective measurement, and when would it point toward formative measurement?
- In a reflective–reflective higher-order model, what dataset transformation is used to validate the higher-order construct in SmartPLS?
- For a reflective–formative higher-order construct, which diagnostics address multicollinearity and indicator contribution, and what additional test is used when a global measure exists?
Key Points
1. Use arrow direction and the interchangeability test to distinguish reflective (items interchangeable) from formative (items not interchangeable) constructs.
2. Higher-order constructs require validating both lower-order constructs and the higher-order construct itself; measurement validation cannot stop at the subdimensions.
3. For reflective–reflective higher-order models, validate the higher-order construct using SmartPLS latent variable scores by re-specifying lower-order constructs as items.
4. For reflective–formative higher-order models, validate using formative diagnostics: VIF for multicollinearity, bootstrapped significance of outer weights, and outer loadings to confirm indicator contribution.
5. Address discriminant validity problems by checking correlations and cross-loadings/standard deviations, then revising indicators if needed rather than ignoring the issue.
6. Moderation effects belong in the structural model; unexpected moderation directions should be explained logically in the discussion, not removed to “fix” results.
7. Report R² (explanatory power), f² (effect size), and Q² (predictive relevance) alongside hypothesis testing outcomes.