SmartPLS 3 - Analyze, Interpret, and Report a Higher-Order Reflective-Formative Construct (Updated)
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Validating a higher-order reflective–formative construct in SmartPLS requires a two-stage workflow: first prove that the lower-order reflective measures are sound, then test whether the higher-order formative composite (and its indicators) reliably and validly represents the construct. In the example, “internal marketing” is modeled as a higher-order construct that is formative at the second level, built from lower-order dimensions—vision, development, and rewards—each measured reflectively, with organizational performance as the dependent variable. The process starts by validating every lower-order reflective construct, because the higher-order model cannot be assessed until measurement quality is established at the indicator level.
For the lower-order reflective blocks, the workflow checks outer loadings, reliability, and validity. Outer loadings are expected to be acceptable (in the example, they were all “good”), followed by reliability via Cronbach’s alpha and composite reliability, both reported as above the 0.70 threshold. Convergent validity is assessed through average variance extracted (AVE), using the usual > 0.50 rule, and discriminant validity is evaluated primarily with HTMT (all values below the conservative 0.85 threshold; a more liberal 0.90 cutoff is also mentioned). The Fornell–Larcker criterion is referenced as a cross-check: the square root of each construct’s AVE should exceed its correlations with the other constructs.
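SmartPLS reports these reliability and convergent-validity statistics directly, but the arithmetic behind them is simple. A minimal sketch with hypothetical outer loadings for one reflective block (the loadings are illustrative, not from the video):

```python
import numpy as np

# Hypothetical standardized outer loadings for one reflective block
loadings = np.array([0.78, 0.82, 0.85, 0.74])

# Composite reliability (rho_c): (sum L)^2 / ((sum L)^2 + sum(1 - L^2))
sum_l = loadings.sum()
error_var = (1 - loadings**2).sum()
cr = sum_l**2 / (sum_l**2 + error_var)

# Average variance extracted: mean of the squared loadings
ave = (loadings**2).mean()

print(f"CR  = {cr:.3f}")   # should exceed 0.70
print(f"AVE = {ave:.3f}")  # should exceed 0.50
```

Both checks pass here (CR ≈ 0.875, AVE ≈ 0.638), which mirrors the “all green” result reported in the example.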
Once the lower-order constructs pass, the higher-order formative construct is validated using a disjoint two-stage approach. First, latent variable scores for the lower-order constructs are generated in stage one (the example notes a sample size of 341). These scores are exported to Excel and re-imported into SmartPLS as indicators for the higher-order construct in stage two. Because the higher-order construct is formative, the arrow directions are set so that the lower-order latent variable scores form “internal marketing.”
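The export/re-import step in stage two amounts to treating the stage-one latent variable scores as an ordinary data matrix. A rough sketch of the idea with simulated scores (the column names and the outcome proxy are hypothetical; in practice the scores come from SmartPLS, not from a random generator):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in for stage-one latent variable scores exported from SmartPLS
# (341 cases, as in the example; one score per lower-order construct)
n = 341
scores = pd.DataFrame({
    "vision":      rng.normal(size=n),
    "development": rng.normal(size=n),
    "rewards":     rng.normal(size=n),
})
# scores.to_excel("lv_scores.xlsx", index=False)  # re-import this in SmartPLS

# Stage two: these columns become the formative indicators of the
# higher-order construct. As a rough analogue of formative weighting,
# regress an outcome proxy on the scores and normalize the coefficients.
outcome = scores.sum(axis=1) + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), scores.values])
coefs, *_ = np.linalg.lstsq(X, outcome.values, rcond=None)
weights = coefs[1:] / np.abs(coefs[1:]).sum()
print(dict(zip(scores.columns, weights.round(3))))
```

The normalized coefficients play the role of outer weights: each lower-order dimension contributes a distinct slice of the formative composite rather than reflecting a common factor.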
Higher-order convergent validity is then tested via redundancy analysis, a method associated with Chin (1998). This requires a global single-item measure capturing the overall essence of internal marketing—here, an item like “the organization has proper vision, has development initiatives and rewards its employees.” The correlation between the formative higher-order construct (internal marketing) and this global reflective measure is assessed through a path coefficient; the example reports a value of 0.8, comfortably above the 0.708 benchmark, indicating convergent validity.
Next come collinearity diagnostics for the formative indicators using VIF values; the example reports VIFs around 2.65, well below 5, indicating no collinearity concerns. The model then tests the significance and relevance of the formative outer weights through bootstrapping (the example uses 500 resamples for demonstration, noting 5,000–10,000 as typical). Two of the formative indicators show significant outer weights, while “rewards” is insignificant. Crucially, insignificance does not automatically mean deletion: the decision should also consider outer loadings, and the example keeps the indicator because its outer loading remains above 0.5 and is significant. Finally, the relationship between internal marketing (higher-order) and the dependent variable (organizational performance) is assessed using bootstrapped path coefficients, with internal marketing showing a significant influence.
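The VIF for each formative indicator is 1/(1 − R²), where R² comes from regressing that indicator on the remaining indicators. A self-contained sketch on simulated indicators with moderate intercorrelation (all data hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 341

# Hypothetical formative indicators (lower-order LV scores) sharing a
# common component, so they are correlated but not collinear
common = rng.normal(size=n)
X = np.column_stack([
    0.8 * common + 0.6 * rng.normal(size=n),
    0.8 * common + 0.6 * rng.normal(size=n),
    0.8 * common + 0.6 * rng.normal(size=n),
])

def vif(X, j):
    """VIF_j = 1 / (1 - R^2) from regressing column j on the rest."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

for j in range(X.shape[1]):
    print(f"VIF indicator {j}: {vif(X, j):.2f}")  # all below 5
```

Values near 2 (and, in the video's data, around 2.65) sit comfortably under the 5 cutoff, so the formative indicators are retained.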
In short: validate reflective lower-order blocks first (loadings, reliability, convergent and discriminant validity), then validate the formative higher-order construct with redundancy analysis, VIF checks, bootstrapped outer weights, and outer loadings—before interpreting the structural link to organizational performance.
Cornell Notes
The workflow for a higher-order reflective–formative construct in SmartPLS starts by validating the lower-order reflective dimensions (outer loadings, reliability via Cronbach’s alpha and composite reliability > 0.70, convergent validity via thresholds like > 0.5, and discriminant validity via HTMT—typically < 0.85, with 0.90 as a more liberal cutoff). After those checks pass, the higher-order formative construct is validated using a disjoint two-stage approach: generate latent variable scores for the lower-order constructs, import them as indicators for the higher-order construct, and set formative arrow directions. Convergent validity for the formative higher-order construct is tested through redundancy analysis against a global single-item measure; a path coefficient above 0.708 supports validity. Collinearity is checked with VIF (< 5), then bootstrapping tests outer weights and outer loadings to confirm the formative measurement model before assessing the structural path to organizational performance.
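Bootstrapping, which underpins both the outer-weight tests and the final structural path, resamples cases with replacement and re-estimates the coefficient each time. A minimal percentile-bootstrap sketch for a single standardized path (data and effect size are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 341

# Hypothetical higher-order construct score and outcome variable
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# Percentile bootstrap of the standardized path coefficient
# (the example uses 500 resamples; 5,000-10,000 is more typical)
B = 5000
paths = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    paths[b] = np.corrcoef(x[idx], y[idx])[0, 1]

lo, hi = np.percentile(paths, [2.5, 97.5])
print(f"path 95% CI: [{lo:.3f}, {hi:.3f}]")
# significant if the confidence interval excludes zero
```

A confidence interval that excludes zero is the bootstrap analogue of the significant p-values SmartPLS reports for outer weights and structural paths.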
Why can’t the higher-order construct be validated immediately, and what does validating “the lower level” first mean in practice?
What are the key checks for lower-order reflective constructs in this workflow?
How does redundancy analysis validate a formative higher-order construct’s convergent validity?
What does the disjoint two-stage approach do when building a higher-order formative construct?
If a formative indicator’s outer weight is insignificant, should it be deleted?
What final step confirms whether higher-order internal marketing matters for the outcome?
Review Questions
- What thresholds are used for reliability, convergent validity, and discriminant validity when validating the lower-order reflective constructs?
- Describe the sequence of steps used to validate the higher-order formative construct, including redundancy analysis and the role of a global single-item measure.
- When an outer weight is insignificant in a formative model, what additional evidence is used to decide whether to keep or remove the indicator?
Key Points
1. Validate every lower-order reflective construct first using outer loadings, reliability (Cronbach’s alpha and composite reliability > 0.70), convergent validity (e.g., AVE > 0.5), and discriminant validity (HTMT, typically < 0.85).
2. Use a disjoint two-stage approach for higher-order reflective–formative models: generate latent variable scores for lower-order constructs, export/import them, then use them as indicators for the higher-order formative construct.
3. Test convergent validity of the formative higher-order construct with redundancy analysis by correlating it with a global single-item reflective measure; use the 0.708 path-coefficient benchmark.
4. Check formative indicator collinearity using VIF values; keep indicators when VIFs are below 5 (the example reports ~2.65).
5. Assess formative indicator relevance through bootstrapping: evaluate outer weights for significance and confirm measurement quality using outer loadings (e.g., > 0.5).
6. Do not delete a formative indicator solely because its outer weight is insignificant; outer loadings and their significance should guide the decision.
7. After measurement validation, interpret the structural relationship by bootstrapping path coefficients from the higher-order construct to the dependent variable.