2. Reflective-Reflective Higher Order Construct/Second Order Analysis and Reporting in SmartPLS
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Reflective–reflective higher order constructs can be modeled in SmartPLS using the repeated indicators approach, despite conceptual critiques about unidimensionality.
Briefing
Reflective–reflective higher order constructs (HOCs) in SmartPLS are treated as a workable, literature-supported configuration, even though critics argue they may be conceptually “meaningless.” In this setup, the HOC is measured reflectively by indicators that are themselves the indicators of multiple lower order constructs (LOCs). The practical implication is that researchers can model a single overarching concept (the HOC) while still retaining distinct subdimensions (the LOCs), then assess measurement quality and structural relationships in a disciplined two-stage workflow.
The core specification relies on the repeated indicators approach. All items belonging to each reflective lower order construct are simultaneously assigned to the reflective measurement model of the higher order construct. In SmartPLS terms, indicators from the LOCs are “loaded onto” the HOC, with arrows reflecting the reflective–reflective logic: the HOC points to its indicators (which are the LOC items), and the LOCs are treated as subdimensions contributing to the HOC. For structural modeling, antecedent constructs connect directly to the HOC rather than to the LOCs, and the HOC connects directly to the criterion variable.
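In SmartPLS this assignment is done by dragging items in the GUI, but the repeated indicators idea can be shown as a data-layout sketch (the array names and simulated items below are illustrative, not a SmartPLS API): every LOC item is reused as an indicator of the HOC.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# illustrative item matrices for three reflective LOCs (3 items each)
loc_items = {name: rng.normal(size=(n, 3)) for name in ("loc1", "loc2", "loc3")}

# repeated indicators: the HOC's measurement model reuses every LOC item,
# so its indicator block is simply all LOC items side by side
hoc_indicators = np.hstack(list(loc_items.values()))
print(hoc_indicators.shape)  # (50, 9): 3 LOCs x 3 items each
```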
Because reflective–reflective HOCs are debated, the workflow emphasizes measurement validity rather than assuming it. Quality assessment starts with the LOCs: reliability and convergent validity are checked using standard reflective criteria such as Cronbach’s alpha, composite reliability (CR), and average variance extracted (AVE), alongside discriminant validity checks (including HTMT). After LOCs pass these checks, the analysis moves to the HOC itself.
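These reflective quality criteria are simple formulas over standardized outer loadings and raw item scores. A minimal sketch (the loading values are illustrative, not taken from the video; SmartPLS reports these statistics directly):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha from an (n_obs, k_items) raw score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability (rho_c) for standardized reflective loadings."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean squared standardized loading."""
    return float((loadings**2).mean())

lam = np.array([0.80, 0.75, 0.85])           # illustrative outer loadings
print(round(composite_reliability(lam), 3))  # 0.843 (> 0.70 threshold)
print(round(ave(lam), 3))                    # 0.642 (> 0.50 threshold)
```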
SmartPLS estimation for reflective–reflective HOCs uses the disjoint two-stage approach. Stage 1 estimates a model containing only the LOCs (including exogenous and endogenous constructs) without specifying the HOC. Stage 2 then uses latent variable scores (LVS) generated from Stage 1 as the indicators for the HOC, while the LOCs remain as constructs in the structural model. This separation is crucial: it ensures the HOC’s measurement properties are evaluated using the LVS-based representation rather than mixing estimation steps.
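The two stages can be sketched in Python under loud assumptions: the item data are simulated, and a standardized equal-weight composite stands in for the outer weights that SmartPLS estimates iteratively, so this shows the workflow's shape rather than reproducing PLS estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
f = rng.normal(size=n)  # common factor driving three LOCs (simulated)
blocks = [f[:, None] * 0.8 + rng.normal(scale=0.6, size=(n, 4)) for _ in range(3)]

def lv_scores(items: np.ndarray) -> np.ndarray:
    """Stage-1 latent variable scores: standardized equal-weight composite
    (a stand-in for the PLS-estimated outer weights)."""
    z = (items - items.mean(0)) / items.std(0, ddof=1)
    comp = z.mean(axis=1)
    return (comp - comp.mean()) / comp.std(ddof=1)

# Stage 1: estimate each LOC on its own and export its latent variable scores
lvs = np.column_stack([lv_scores(b) for b in blocks])

# Stage 2: the three LVS columns become the reflective indicators of the HOC
print(lvs.shape)  # (200, 3): one LVS indicator per LOC
```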
For reporting, the transcript lays out a results-chapter structure: introduce the analysis technique and hypotheses, report the measurement model, then report structural model outcomes. Measurement reporting begins with factor (outer) loadings, then multicollinearity via VIF (acceptable when below 5), followed by reliability (Cronbach’s alpha and CR) and convergent validity (AVE, with the guidance that CR above 0.70 can support convergent validity even when AVE falls slightly below the usual 0.50 threshold). Discriminant validity is reported using the Fornell–Larcker criterion, cross-loadings, and/or HTMT (with HTMT serving as a key check).
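The VIF check is not specific to SmartPLS and can be reproduced by hand: each indicator is regressed on the other indicators in its block, and VIF_j = 1/(1 - R²_j). A sketch with illustrative data containing one deliberately collinear indicator:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on all remaining columns of X."""
    Z = (X - X.mean(0)) / X.std(0, ddof=1)
    out = []
    for j in range(Z.shape[1]):
        y = Z[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(Z, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / (y @ y)  # y is centered, so y@y = SS_total
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=300)
x2 = x1 + 0.1 * rng.normal(size=300)  # nearly redundant with x1
x3 = rng.normal(size=300)
print(vif(np.column_stack([x1, x2, x3])))  # first two far above 5, third near 1
```

Indicators with VIF above 5 (here, the first two) would flag a multicollinearity problem under the transcript's rule of thumb.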
Finally, structural evaluation uses path coefficients and significance testing via bootstrapping (with 5,000 subsamples recommended). In the worked example, the higher order construct CSR shows a significant positive effect on organizational performance (the criterion variable). Mediation through team identity is assessed via indirect effects: no mediation is concluded when the indirect effect’s p-value exceeds 0.05, though partial mediation can appear under a looser significance threshold such as 0.10. The overall takeaway is a repeatable method: validate LOCs first, generate LVS for the HOC through disjoint two-stage estimation, then report measurement and structural results consistently in SmartPLS.
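The bootstrap test of the indirect effect can be sketched as follows, under stated assumptions: the CSR, team identity, and performance scores are simulated; OLS path estimates stand in for PLS estimates; and a 95% percentile confidence interval that excludes zero is read as a significant indirect effect (the CI view of the p < 0.05 rule).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
csr = rng.normal(size=n)                       # HOC score (simulated)
team_id = 0.5 * csr + rng.normal(size=n)       # mediator
perf = 0.3 * csr + 0.4 * team_id + rng.normal(size=n)

def paths(x, m, y):
    """Path a (x -> m) and path b (m -> y, controlling for x) via OLS."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a, b

boot = []
for _ in range(2000):  # SmartPLS recommends 5,000 subsamples; fewer here for speed
    i = rng.integers(0, n, n)          # resample cases with replacement
    a, b = paths(csr[i], team_id[i], perf[i])
    boot.append(a * b)                 # indirect effect per resample

lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo > 0 or hi < 0)  # True when the 95% CI excludes zero (mediation supported)
```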
Cornell Notes
Reflective–reflective higher order constructs in SmartPLS are modeled using the repeated indicators approach and estimated with a disjoint two-stage procedure. All indicators from reflective lower order constructs are assigned to the higher order construct’s reflective measurement model, while structural paths connect the higher order construct directly to antecedents/criteria. Measurement quality is assessed in two layers: first the lower order constructs (outer loadings, Cronbach’s alpha, composite reliability, AVE, and discriminant validity via Fornell–Larcker and HTMT), then the higher order construct using latent variable scores generated in stage 1. Structural relationships are tested with bootstrapping, and mediation is evaluated through the significance of indirect effects. This workflow matters because it provides a defensible way to validate an HOC despite ongoing conceptual debates about reflective–reflective configurations.
- Why do reflective–reflective higher order constructs face criticism, and how does the repeated indicators approach respond to that concern?
- What exactly happens in SmartPLS during the disjoint two-stage approach for reflective–reflective HOCs?
- How are reliability, convergent validity, and discriminant validity assessed for lower order constructs before validating the HOC?
- How does the transcript recommend reporting measurement results in a thesis or paper?
- How is mediation tested when the higher order construct affects a criterion through a mediator (team identity)?
Review Questions
- In reflective–reflective HOCs, what indicators are used for the higher order construct in stage 2, and where do those indicators come from?
- Which discriminant validity checks are recommended (and how do they differ) when reporting lower order and higher order constructs?
- What decision rule based on p-values determines whether mediation is supported in the transcript’s example?
Key Points
- 1
Reflective–reflective higher order constructs can be modeled in SmartPLS using the repeated indicators approach, despite conceptual critiques about unidimensionality.
- 2
Repeated indicators means all items from reflective lower order constructs are simultaneously assigned to the higher order construct’s reflective measurement model.
- 3
Use the disjoint two-stage approach: estimate lower order constructs first (stage 1), then use latent variable scores as indicators for the higher order construct (stage 2).
- 4
Validate measurement quality in sequence: reliability and validity for lower order constructs first, then reliability and discriminant validity for the higher order construct.
- 5
Report factor loadings, VIF (multicollinearity), reliability (Cronbach’s alpha and CR), convergent validity (AVE with CR interpretation), and discriminant validity (Fornell–Larcker and HTMT).
- 6
Evaluate structural relationships with bootstrapping and report path coefficients with t-values and p-values.
- 7
Test mediation using indirect effects and p-values, distinguishing total, direct, and indirect effects to determine partial vs. no mediation.