
#SmartPLS4 Series 18 - How to Analyze Higher Order Reflective Formative Construct?

Research With Fawad
5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Validate every lower-order construct first; higher-order validation depends on those components being reliable and valid.

Briefing

Validating a higher-order construct that behaves differently across levels—reflective at the lower level but formative at the higher level—requires more than running one set of checks. The core requirement is to validate every lower-order component first, generate their latent variable scores, and then use those scores to build and test the higher-order model under formative rules.

The session focuses on a “disjoint two-stage approach” for SmartPLS4 as an alternative to the repeated indicators method, which can run into complications when higher-order constructs are involved. In the disjoint version, stage one estimates the measurement model using only the lower-order constructs (no higher-order construct is placed in the path model). Researchers then save the latent variable scores for just the lower-order components that feed the higher-order construct. Stage two uses those saved scores as inputs to form the higher-order construct, while constructs that do not belong to the higher-order layer remain modeled at the indicator level.
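SmartPLS4 is a point-and-click tool, but the data handling in the disjoint two-stage approach can be sketched in code. The following Python fragment (column names and values are hypothetical, not SmartPLS defaults) shows the stage-two dataset being assembled: latent variable scores for the lower-order components of Internal Marketing sit alongside the untouched indicators of constructs outside the higher-order layer.

```python
import pandas as pd

# Hypothetical stage-one export from SmartPLS4: latent variable scores
# for the lower-order components that will form Internal Marketing.
stage_one_scores = pd.DataFrame({
    "Vision":      [0.41, -0.22, 0.87, -1.10],
    "Development": [0.35, -0.40, 0.92, -0.95],
    "Rewards":     [0.28, -0.15, 0.75, -1.20],
})

# Original indicator-level data for a construct that stays outside the
# higher-order layer (kept at the indicator level in stage two).
other_indicators = pd.DataFrame({
    "ISQ1": [4, 2, 5, 1],
    "ISQ2": [5, 2, 4, 2],
})

# Stage-two dataset: lower-order scores (the future formative indicators
# of Internal Marketing) joined column-wise with the untouched indicators.
stage_two_data = pd.concat([stage_one_scores, other_indicators], axis=1)
stage_two_data.to_csv("stage_two_model_data.csv", index=False)
```

The re-imported CSV is what stage two models: the score columns become formative indicators of the higher-order construct, and the indicator columns are used as before.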

A concrete example is used: “Internal Marketing” is treated as a higher-order construct that is reflective at the lower level but formative at the higher level. Three subdimensions—Vision, Development, and Rewards—are reflective at the lower level and together form Internal Marketing at the higher level. Meanwhile, other constructs in the model (including Internal Service Quality) are handled as reflective–reflective higher-order constructs, but the practical walkthrough centers on the reflective–formative case.

The workflow starts by validating the lower-order measurement models: reliability and validity checks are performed first, including outer loadings, reliability metrics, and discriminant validity. Only after those lower-level constructs pass are latent variable scores exported (e.g., to CSV) and re-imported into SmartPLS4 to create the stage-two model.

For the higher-order reflective–formative construct in stage two, the evaluation follows formative measurement-model criteria. Convergent validity is assessed by linking the higher-order construct to a global measure—an overall item or scale capturing the entire construct (for Internal Marketing, an example global statement is about whether the organization provides proper rewards and development initiatives). Next comes collinearity diagnostics: VIF values for the formative lower-order components must stay below a threshold (the session uses < 5). Then the model checks formative indicator behavior through bootstrapping: outer weights must be significant; if an outer weight is insignificant, the decision pivots to outer loadings and their significance (indicators with low, non-significant loadings are candidates for removal).

The takeaway is procedural and strict: lower-order validation is not optional, latent variable scores are the bridge between stages, and formative higher-order constructs demand checks for convergent validity, multicollinearity, and the statistical contribution of each formative component. The result is a defensible higher-order measurement model that matches how the construct functions across levels—reflective below, formative above—without relying on repeated indicators.

Cornell Notes

Higher-order constructs that are reflective at the lower level but formative at the higher level should be validated in two stages using SmartPLS4’s disjoint two-stage approach. Stage one estimates and validates all lower-order measurement models, then exports latent variable scores for only the lower-order components that will form the higher-order construct. Stage two rebuilds the higher-order model using those scores as formative indicators, while unrelated constructs remain at the indicator level. Higher-order validation then follows formative rules: assess convergent validity via a global measure, check VIF values for collinearity (target < 5), and use bootstrapping to test outer weights and, when needed, outer loadings to decide whether indicators should be retained. This prevents skipping critical lower-level checks and ensures the higher-order construct is statistically sound.

Why does the disjoint two-stage approach matter for reflective–formative higher-order constructs?

It separates estimation and validation into two clean steps. Stage one models only lower-order constructs (no higher-order construct in the path model), so reliability and validity can be established for each component. Stage two then uses the saved latent variable scores from stage one to form the higher-order construct as formative indicators. This avoids issues that can arise with repeated indicators when higher-order constructs are involved, and it matches the logic of reflective-at-lower-level components feeding a formative-at-higher-level construct.

What exactly changes between stage one and stage two in SmartPLS4 for the disjoint approach?

In stage one, all lower-order constructs are included with their original indicators, and the model is estimated without the higher-order construct. After validating those lower-order models, latent variable scores for the relevant lower-order components are exported (e.g., to CSV) and re-imported. In stage two, the higher-order construct is created using those latent variable scores as formative indicators, and the arrows are updated so the higher-order construct is formed from the three lower-order components (e.g., Vision, Development, and Rewards). Constructs not part of the higher-order layer remain modeled using their indicators rather than their latent scores.

How is convergent validity assessed for a reflective–formative higher-order construct?

Convergent validity at the higher-order level is assessed by linking the higher-order construct to a global measure of the construct—an overall item or summary scale that captures the entire higher-order concept. For example, for Internal Marketing, a global statement can reflect whether the organization provides proper rewards and development initiatives for employees. This global measure must exist at questionnaire design time; otherwise, the session notes that other criteria would have to be used.
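This check is often called a redundancy analysis: the higher-order construct's scores are related to the global single item, and the relationship should be strong. A minimal sketch, assuming the scores have already been exported (the 0.70 cutoff is a commonly cited rule of thumb, not a value stated in the session):

```python
import numpy as np

def redundancy_check(higher_order_scores, global_item, threshold=0.70):
    """Approximate convergent-validity (redundancy) check: correlate the
    higher-order construct's scores with a global single-item measure.
    Returns the correlation and whether it clears the threshold."""
    r = np.corrcoef(higher_order_scores, global_item)[0, 1]
    return r, r >= threshold

# Hypothetical example: construct scores vs. responses to the global item.
r, passes = redundancy_check([0.4, -0.2, 0.9, -1.1], [4, 2, 5, 1])
```

In SmartPLS4 itself this is done by drawing a path from the higher-order construct to the global measure and inspecting the path coefficient; the function above only illustrates the underlying idea.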

What collinearity check is required for formative indicators at the higher order?

The session uses VIF values for the lower-order components (formative indicators of the higher-order construct). The rule of thumb applied is that VIF values should be less than 5. If VIF exceeds the threshold, the formative measurement model is considered negatively affected by multicollinearity and would require remediation.
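SmartPLS4 reports these VIF values directly, but the computation is simple enough to sketch: each formative indicator is regressed on the others, and VIF = 1 / (1 − R²). The data below is hypothetical.

```python
import numpy as np

def vif(X):
    """VIF for each column of X (n_samples x n_indicators): regress each
    indicator on the remaining indicators; VIF_j = 1 / (1 - R_j^2)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

# Two moderately correlated lower-order score columns (made-up values):
scores = [[1, 2], [2, 1], [3, 4], [4, 3], [5, 6]]
assert all(v < 5 for v in vif(scores))  # below the session's threshold
```

Values at or above 5 signal problematic collinearity among the lower-order components, which would need remediation before the formative model can be trusted.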

What happens if a formative indicator’s outer weight is not significant?

The procedure shifts to outer loadings. If an outer weight is insignificant, the indicator is not automatically removed. Instead, the outer loading is examined: if the outer loading is sufficiently high and significant (the session references keeping indicators when loadings are above a practical cutoff such as 0.5 and significant), the indicator can remain. If the outer loading is low (e.g., below 0.5) and not significant, the indicator becomes a candidate for removal because it contributes weakly to the formative higher-order construct.
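The decision rule described above can be written out explicitly. This is a sketch of the logic, using the commonly cited thresholds (α = 0.05 for significance, 0.5 for the loading cutoff); SmartPLS4 supplies the bootstrapped p-values, it does not apply this rule for you.

```python
def keep_formative_indicator(weight_p, loading, loading_p,
                             alpha=0.05, loading_cutoff=0.5):
    """Retention rule for a formative indicator of a higher-order
    construct, given bootstrapped p-values from SmartPLS4."""
    if weight_p < alpha:
        return True   # outer weight significant: keep
    if loading >= loading_cutoff and loading_p < alpha:
        return True   # weight insignificant, but loading high and significant: keep
    return False      # weak, non-significant contribution: candidate for removal
```

For example, an indicator with an insignificant weight but a significant loading of 0.6 is retained, while one with an insignificant weight and a loading of 0.3 becomes a removal candidate.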

What is the minimum order of operations to avoid invalid results?

First validate all lower-order constructs (outer loadings, reliability, convergent validity, and discriminant validity). Only after those pass should latent variable scores be exported and used to build the higher-order model. Then validate the higher-order construct using formative criteria: convergent validity via a global measure, VIF for collinearity, and bootstrapped tests of outer weights and outer loadings.

Review Questions

  1. In the disjoint two-stage approach, what information is exported from stage one, and how is it used to construct the higher-order model in stage two?
  2. For a reflective–formative higher-order construct, which three higher-order checks are emphasized (and what thresholds or decision rules are applied)?
  3. If an indicator’s outer weight is insignificant in the higher-order formative model, what is the next diagnostic step and what outcome leads to removal?

Key Points

  1. Validate every lower-order construct first; higher-order validation depends on those components being reliable and valid.
  2. Use the disjoint two-stage approach: stage one estimates only lower-order constructs, stage two builds the higher-order construct from exported latent variable scores.
  3. In stage two, formative indicators of the higher-order construct come from latent variable scores, while constructs not in the higher-order layer remain indicator-based.
  4. Assess higher-order convergent validity using a global measure that summarizes the entire higher-order construct.
  5. Check formative collinearity using VIF values for the higher-order formative indicators; keep VIF below 5.
  6. Use bootstrapping to test outer weights; if outer weights are insignificant, evaluate outer loadings to decide whether to retain or remove indicators.

Highlights

The disjoint two-stage approach prevents skipping lower-level validation by forcing a full reliability/validity check before latent scores feed the higher-order model.
Higher-order reflective–formative convergent validity hinges on having a global measure for the entire higher-order construct, designed before data collection.
Formative indicator quality is judged through outer weights and, when needed, outer loadings—paired with VIF checks to rule out multicollinearity.
