
2. Reflective-Reflective Higher Order Construct/Second Order Analysis and Reporting in SmartPLS

Research With Fawad
5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Reflective–reflective higher order constructs can be modeled in SmartPLS using the repeated indicators approach, despite conceptual critiques about unidimensionality.

Briefing

Reflective–reflective higher order constructs (HOCs) in SmartPLS are treated as a workable, literature-supported configuration, even though critics argue they may be conceptually “meaningless.” In this setup, the HOC is measured reflectively by indicators that are themselves the indicators of multiple lower order constructs (LOCs). The practical implication is that researchers can model a single overarching concept (the HOC) while still retaining distinct subdimensions (the LOCs), then assess measurement quality and structural relationships in a disciplined two-stage workflow.

The core specification relies on the repeated indicators approach. All items belonging to each reflective lower order construct are simultaneously assigned to the reflective measurement model of the higher order construct. In SmartPLS terms, indicators from the LOCs are “loaded onto” the HOC, with arrows reflecting the reflective–reflective logic: the HOC points to its indicators (which are the LOC items), and the LOCs are treated as subdimensions contributing to the HOC. For structural modeling, antecedent constructs connect directly to the HOC rather than to the LOCs, and the HOC connects directly to the criterion variable.

Because reflective–reflective HOCs are debated, the workflow emphasizes measurement validity rather than assuming it. Quality assessment starts with the LOCs: reliability and convergent validity are checked using standard reflective criteria such as Cronbach’s alpha, composite reliability (CR), and average variance extracted (AVE), alongside discriminant validity checks (including HTMT). After LOCs pass these checks, the analysis moves to the HOC itself.
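The CR and AVE checks above follow directly from the standardized outer loadings SmartPLS reports. As a minimal sketch (the loadings below are hypothetical illustration values, not output from the video's model):

```python
# Hedged sketch: composite reliability (CR) and average variance extracted
# (AVE) for one reflective lower order construct, computed from its
# standardized outer loadings. Loadings here are illustrative only.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    sum_l = sum(loadings)
    # For standardized indicators, each error variance is 1 - loading^2.
    error_var = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + error_var)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.82, 0.76, 0.88, 0.71]  # hypothetical outer loadings
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR  = {cr:.3f}")   # should exceed 0.70
print(f"AVE = {ave:.3f}")  # should exceed 0.50
```

Cronbach’s alpha, which SmartPLS also reports, needs the raw item covariances rather than loadings alone, so it is omitted here.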

SmartPLS estimation for reflective–reflective HOCs uses the disjoint two-stage approach. Stage 1 estimates a model containing only the LOCs (including exogenous and endogenous constructs) without specifying the HOC. Stage 2 then uses latent variable scores (LVS) generated from Stage 1 as the indicators for the HOC, while the LOCs remain as constructs in the structural model. This separation is crucial: it ensures the HOC’s measurement properties are evaluated using the LVS-based representation rather than mixing estimation steps.
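The mechanics of the disjoint two-stage approach can be sketched outside SmartPLS. The construct names and data below are hypothetical, and equally weighted standardized items stand in for SmartPLS’s iteratively estimated outer weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical item data for three reflective LOCs (4 items each).
items = {loc: rng.normal(size=(n, 4)) for loc in ["LOC1", "LOC2", "LOC3"]}

# Stage 1 (sketch): SmartPLS estimates outer weights iteratively; here we
# approximate each LOC's latent variable scores with the mean of its
# standardized items.
def lv_scores(X):
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z.mean(axis=1)

stage1_scores = np.column_stack([lv_scores(X) for X in items.values()])

# Stage 2: the three LVS columns now serve as the reflective indicators of
# the higher order construct in a new SmartPLS model.
print("Stage 2 HOC indicator matrix shape:", stage1_scores.shape)  # (200, 3)
```

The key property the sketch illustrates is that the HOC never shares raw items with the LOCs in stage 2; it is measured entirely by the stage-1 scores.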

For reporting, the transcript lays out a results chapter structure: introduce the analysis technique and hypotheses, report the measurement model, then report structural model outcomes. Measurement reporting begins with factor (outer) loadings, then multicollinearity via VIF (acceptable when below 5), followed by reliability (Cronbach’s alpha and CR) and convergent validity (AVE, with the guidance that CR above 0.70 can support validity even when AVE is slightly low). Discriminant validity is reported using the Fornell–Larcker criterion, cross-loadings, and/or HTMT (with HTMT used as a key check).
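The VIF-below-5 rule can be checked by hand: each indicator’s VIF is 1/(1 − R²) from regressing it on the other indicators. A minimal sketch with hypothetical data:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2) from
    regressing it on all other columns (plus an intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 4))   # hypothetical indicator data
X[:, 3] += 0.6 * X[:, 0]        # induce mild collinearity between two items
vifs = [vif(X, j) for j in range(X.shape[1])]
print([f"{v:.2f}" for v in vifs])  # all values stay well below the 5 cutoff
```

SmartPLS reports these values directly under collinearity statistics; the sketch only shows where the numbers come from.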

Finally, structural evaluation uses path coefficients and significance testing via bootstrapping (with 5,000 subsamples recommended). In the worked example, the higher order construct CSR shows a significant positive effect on the criterion variable, organizational performance. Mediation through team identity is assessed via indirect effects: the transcript notes that mediation depends on p-values (e.g., no mediation when p > 0.05, but partial mediation can be claimed under a looser significance threshold such as 0.10). The overall takeaway is a repeatable method: validate LOCs first, generate LVS for the HOC through disjoint two-stage estimation, then report measurement and structural results consistently in SmartPLS.
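The bootstrapped indirect-effect test can be sketched on simulated construct scores. The data, effect sizes, and variable names below are hypothetical stand-ins for the CSR → team identity → performance example:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
# Hypothetical construct scores (illustrative, not the video's data):
# CSR -> team identity (mediator) -> organizational performance.
csr = rng.normal(size=n)
team_id = 0.5 * csr + rng.normal(size=n)
perf = 0.3 * csr + 0.4 * team_id + rng.normal(size=n)

def coefs(y, *preds):
    """OLS slopes (intercept dropped) of y on the given predictors."""
    A = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

boot = []
for _ in range(5000):                          # 5,000 resamples, as recommended
    i = rng.integers(0, n, n)                  # resample cases with replacement
    a = coefs(team_id[i], csr[i])[0]           # path a: CSR -> mediator
    b = coefs(perf[i], csr[i], team_id[i])[1]  # path b: mediator -> outcome
    boot.append(a * b)                         # indirect effect = a * b

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
# A CI excluding zero corresponds to p < 0.05, i.e. mediation is supported.
```

SmartPLS performs the equivalent resampling internally and reports the specific indirect effects with their t- and p-values.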

Cornell Notes

Reflective–reflective higher order constructs in SmartPLS are modeled using the repeated indicators approach and estimated with a disjoint two-stage procedure. All indicators from reflective lower order constructs are assigned to the higher order construct’s reflective measurement model, while structural paths connect the higher order construct directly to antecedents/criteria. Measurement quality is assessed in two layers: first the lower order constructs (outer loadings, Cronbach’s alpha, composite reliability, AVE, and discriminant validity via Fornell–Larcker and HTMT), then the higher order construct using latent variable scores generated in stage 1. Structural relationships are tested with bootstrapping, and mediation is evaluated through indirect effects and their p-values. This workflow matters because it provides a defensible way to validate an HOC despite ongoing conceptual debates about reflective–reflective configurations.

Why do reflective–reflective higher order constructs face criticism, and how does the repeated indicators approach respond to that concern?

Critics argue that reflective measures should be unidimensional and interchangeable, so a reflective–reflective HOC might be conceptually “meaningless.” Others counter that multiple underlying dimensions can be distinct in nature. The repeated indicators approach operationalizes the configuration by assigning all items from each reflective lower order construct to the higher order construct’s reflective measurement model. That lets the HOC represent the shared higher-level concept while still preserving the distinct subdimension item sets as its indicators.

What exactly happens in SmartPLS during the disjoint two-stage approach for reflective–reflective HOCs?

Stage 1 estimates a model containing only the lower order constructs (including exogenous and endogenous constructs) without specifying the higher order construct. Stage 2 then takes the latent variable scores (LVS) produced for the lower order constructs from stage 1 and uses those LVS as indicators for the higher order construct. The structural model is then evaluated with the higher order construct represented by LVS, while the lower order constructs remain part of the model structure.

How are reliability, convergent validity, and discriminant validity assessed for lower order constructs before validating the HOC?

Lower order constructs are evaluated using reflective measurement criteria: outer loadings (factor loadings), reliability via Cronbach’s alpha and composite reliability (CR), and convergent validity via AVE (with the transcript noting CR > 0.70 as support even when AVE is slightly below 0.50). Discriminant validity is checked using Fornell–Larcker (square root of AVE greater than correlations with other constructs) and HTMT (with HTMT thresholds discussed as acceptable when below about 0.85).
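The HTMT ratio behind the 0.85 threshold can be computed directly from the item correlation matrix. A minimal sketch with two hypothetical constructs whose latent correlation is set to 0.3:

```python
import numpy as np

def htmt(X1, X2):
    """HTMT: mean heterotrait (between-block) item correlation divided by
    the geometric mean of the two mean monotrait (within-block) correlations."""
    R = np.corrcoef(np.column_stack([X1, X2]), rowvar=False)
    k1, k2 = X1.shape[1], X2.shape[1]
    hetero = R[:k1, k1:].mean()
    mono1 = R[:k1, :k1][np.triu_indices(k1, 1)].mean()
    mono2 = R[k1:, k1:][np.triu_indices(k2, 1)].mean()
    return hetero / np.sqrt(mono1 * mono2)

rng = np.random.default_rng(7)
n = 300
# Hypothetical data: two constructs with latent correlation 0.3,
# each measured by four items (loading 0.8, noise sd 0.6).
f = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=n)
X1 = 0.8 * f[:, [0]] + 0.6 * rng.normal(size=(n, 4))
X2 = 0.8 * f[:, [1]] + 0.6 * rng.normal(size=(n, 4))
print(f"HTMT = {htmt(X1, X2):.3f}")  # below 0.85 supports discriminant validity
```

The Fornell–Larcker check works on the same correlation matrix: the square root of each construct’s AVE must exceed its correlations with the other constructs.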

How does the transcript recommend reporting measurement results in a thesis or paper?

It recommends a structured results chapter: (1) brief introduction of the analysis technique and hypotheses, (2) measurement model reporting starting with factor loadings (outer loadings), (3) multicollinearity via VIF (acceptable when VIF < 5), (4) reliability reporting (Cronbach’s alpha and CR, typically all above 0.70), (5) convergent validity reporting (AVE, with interpretation tied to CR), and (6) discriminant validity reporting using Fornell–Larcker and HTMT (cross-loadings as an additional check).

How is mediation tested when the higher order construct affects a criterion through a mediator (team identity)?

Mediation is assessed via indirect effects and their p-values. The transcript’s example reports that if the indirect effect p-value is greater than 0.05, mediation is not supported; with a more lenient significance level (e.g., 0.10), a partial mediating role can be claimed. It also distinguishes total effect, direct effect (with mediator included), and indirect effect (through the mediator) to show whether the mediator carries part of the relationship.

Review Questions

  1. In reflective–reflective HOCs, what indicators are used for the higher order construct in stage 2, and where do those indicators come from?
  2. Which discriminant validity checks are recommended (and how do they differ) when reporting lower order and higher order constructs?
  3. What decision rule based on p-values determines whether mediation is supported in the transcript’s example?

Key Points

  1. Reflective–reflective higher order constructs can be modeled in SmartPLS using the repeated indicators approach, despite conceptual critiques about unidimensionality.

  2. Repeated indicators means all items from reflective lower order constructs are simultaneously assigned to the higher order construct’s reflective measurement model.

  3. Use the disjoint two-stage approach: estimate lower order constructs first (stage 1), then use latent variable scores as indicators for the higher order construct (stage 2).

  4. Validate measurement quality in sequence: reliability and validity for lower order constructs first, then reliability and discriminant validity for the higher order construct.

  5. Report factor loadings, VIF (multicollinearity), reliability (Cronbach’s alpha and CR), convergent validity (AVE with CR interpretation), and discriminant validity (Fornell–Larcker and HTMT).

  6. Evaluate structural relationships with bootstrapping and report path coefficients with t-values and p-values.

  7. Test mediation using indirect effects and p-values, distinguishing total, direct, and indirect effects to determine partial vs. no mediation.

Highlights

The repeated indicators approach assigns every lower order item to the higher order construct’s reflective measurement model, enabling a reflective–reflective HOC in SmartPLS.
Disjoint two-stage estimation prevents mixing steps: stage 1 estimates LOCs only, and stage 2 builds the HOC from latent variable scores.
Measurement reporting is treated as a two-layer task—LOC validity first, then HOC validity—because many studies omit HOC validity checks.
Mediation conclusions hinge on indirect-effect p-values; the transcript notes “no mediation” at p > 0.05 but possible partial mediation at p < 0.10.

Topics

  • Reflective–Reflective HOCs
  • Repeated Indicators
  • Disjoint Two-Stage Estimation
  • Measurement Model Reporting
  • Mediation in SmartPLS

Mentioned

  • HCM
  • HOC
  • HTMT
  • H1
  • H2
  • H3
  • CR
  • AVE
  • LVS
  • LOC
  • PLS
  • VIF
  • TI
  • DC
  • EC
  • ECC
  • MS Word
  • SPSS
  • CSV
  • outer loadings
  • bootstrapping
  • p-value
  • t-value
  • q^2
  • Q^2
  • PLSpredict