#SmartPLS4 Webinar Day 2: Structural Model Assessment

Research With Fawad · 6 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Confirm measurement-model quality (factor loadings, reliability, validity) before interpreting structural paths.

Briefing

Structural model assessment in SmartPLS hinges on two linked tasks: checking whether the measurement quality holds up, and then testing whether hypothesized relationships among latent variables are statistically supported. After confirming measurement-model quality (factor loadings, reliability, and validity), the workflow also includes a common-method-bias check using collinearity statistics (VIF). In the webinar’s example, VIF values are all below 3, which—per the cited guidance—signals no serious common method bias.
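The VIF logic behind this check can be sketched outside SmartPLS. The example below is a minimal NumPy illustration with simulated construct scores (the variable names and data are hypothetical, not the webinar's dataset): each predictor's VIF is computed as 1/(1 − R²), where R² comes from regressing that predictor on the remaining ones.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n_samples x n_predictors).

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Hypothetical construct scores mimicking the three predictors
rng = np.random.default_rng(0)
n = 200
vision = rng.normal(size=n)
development = 0.4 * vision + rng.normal(size=n)
rewards = 0.3 * vision + rng.normal(size=n)
X = np.column_stack([vision, development, rewards])
print(vif(X))  # values below 3 would pass the common-method-bias check
```

With the modest correlations simulated here, all three VIFs stay well under the 3.0 cutoff the webinar cites.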

With the measurement model cleared, the focus shifts to structural model assessment: determining how constructs relate to one another and whether proposed hypotheses are substantiated. The example simplifies a complex model into a straightforward setup with three predictors—Vision, Development, and Rewards—and one dependent variable, Organizational Performance. To test whether these predictors significantly influence the outcome, the analysis uses bootstrapping, a nonparametric technique suited for non-normal data. The webinar emphasizes practical bootstrapping settings: using a large number of resamples (noting older guidance of 5,000 and newer guidance of 10,000), enabling parallel processing for speed, selecting “complete/slow” only when model-fit and effect-size details are needed, and using bias-corrected and accelerated (BCa) confidence intervals for stability. For hypothesis testing, it recommends one-tailed tests when the direction of effects is specified in advance (e.g., expecting positive impacts), and it uses a 0.05 significance threshold.
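The bootstrapping procedure itself is straightforward to sketch. This hedged example uses simulated scores (names and data are illustrative, not the webinar's) and a plain percentile confidence interval rather than the BCa interval SmartPLS offers, to keep the code short; the resampling logic is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical latent variable scores: one predictor, one outcome
n = 300
vision = rng.normal(size=n)
performance = 0.25 * vision + rng.normal(size=n)

def path_coefficient(x, y):
    """Standardized simple-regression slope (equals the correlation here)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(x @ y) / len(x)

# 10,000 resamples, per the newer guidance mentioned in the webinar
B = 10_000
estimates = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)              # sample rows with replacement
    estimates[b] = path_coefficient(vision[idx], performance[idx])

lo, hi = np.percentile(estimates, [2.5, 97.5])    # plain percentile CI
p_one_tailed = np.mean(estimates <= 0)            # H1: effect is positive
print(f"beta={path_coefficient(vision, performance):.3f}, "
      f"95% CI=({lo:.3f}, {hi:.3f}), one-tailed p={p_one_tailed:.4f}")
```

The one-tailed p-value here is simply the share of bootstrap estimates on the "wrong" side of zero, which matches the webinar's advice to use one-tailed tests when the direction is hypothesized in advance.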

Bootstrapping results then determine which paths are significant. For Vision → Organizational Performance, the beta coefficient is positive (0.229) and the p-value is below 0.05, so the relationship is supported. Development → Organizational Performance is also significant. Rewards → Organizational Performance, however, shows a very small beta (below 0.1) with a p-value of 0.335, leading to rejection of a significant effect in this study. The model’s explanatory power is summarized by R²: 58.9% of variance in Organizational Performance is accounted for by Vision, Development, and Rewards.
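The R² reported above is ordinary explained variance. This sketch, again on simulated data with hypothetical coefficients (not the webinar's values), shows how a three-predictor model's betas and R² are obtained from a single least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250
# Hypothetical construct scores mirroring the three-predictor model
vision = rng.normal(size=n)
development = rng.normal(size=n)
rewards = rng.normal(size=n)
performance = (0.23 * vision + 0.50 * development + 0.05 * rewards
               + rng.normal(scale=0.7, size=n))

X = np.column_stack([np.ones(n), vision, development, rewards])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
resid = performance - X @ beta
r2 = 1 - (resid @ resid) / ((performance - performance.mean()) ** 2).sum()
print(f"betas={beta[1:].round(3)}, R^2={r2:.3f}")
```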

The webinar then expands from direct effects to more complex causal mechanisms using mediation and moderation. Mediation is framed as a third variable that transmits the effect of an independent variable to a dependent variable, requiring theoretical justification that the mediator is influenced by the IV and in turn influences the DV. The webinar distinguishes full versus partial mediation using significance patterns: full mediation occurs when the direct effect becomes insignificant while the indirect effect remains significant; partial mediation occurs when both direct and indirect effects are significant. It also clarifies how to report mediation in SmartPLS terms—total effects, direct effects, specific indirect effects, and confidence intervals—plus how to interpret mediation tables.

Next comes moderation, where a moderator changes the strength or direction of a relationship rather than transmitting an effect along a causal chain, as a mediator does. The example tests whether Role Conflict moderates the link between Collaborative Culture and Organizational Performance; it does not (p > 0.05). Role Ambiguity does moderate the relationship, with a negative interaction effect, meaning higher role ambiguity weakens the positive impact of collaborative culture. Because moderation is assessed through interaction effects, the webinar recommends slope analysis after a significant interaction to interpret how the relationship differs at low, mean, and high levels of the moderator.

Finally, the webinar addresses modeling mechanics in SmartPLS: when moderators are included, measurement-model assessment should be run without the interaction term, then duplicated to build the structural model with the moderation effect. It also introduces the conceptual groundwork for later material on higher-order constructs, distinguishing reflective versus formative measurement logic based on arrow direction and the interchangeability (reflective) versus unique contribution (formative) of indicators.

Cornell Notes

The webinar lays out a practical SmartPLS workflow for structural model assessment: after verifying measurement quality and checking common method bias via VIF (all < 3 in the example), it tests hypothesized relationships using bootstrapping. In the structural example, Vision and Development significantly predict Organizational Performance, while Rewards does not (p = 0.335). The explanatory power is summarized by R² (58.9% variance explained). It then moves beyond direct effects to mediation (mechanism through a third variable) and moderation (a variable that changes the strength/direction of a relationship), including how to decide full vs partial mediation and how to interpret significant moderation using slope analysis. It also notes key SmartPLS modeling steps for moderators: run measurement model without interaction, then duplicate for the structural model.

How does SmartPLS assess common method bias in this workflow, and what threshold is used?

Common method bias is checked using collinearity statistics (VIF) in the inner model. The webinar instructs users to click “VIF” under collinearity statistics, then select the “inner model” option. If all VIF values are less than 3, it indicates no serious common method bias, so the analysis proceeds without concern that method variance is driving the results.

Why use bootstrapping for structural model significance testing, and what settings matter most?

Bootstrapping is used for nonparametric significance testing when data may not be normally distributed. It generates subsamples via random number generation from the original sample. The webinar recommends using 10,000 resamples (with parallel processing to speed computation), selecting BCa confidence intervals for stability, and using one-tailed tests when the hypothesized direction is specified (positive in the example). The significance level is set at 0.05.

How do the results distinguish significant from non-significant paths in the structural model example?

Significance is determined using bootstrapped p-values (and corresponding t-statistics). Vision → Organizational Performance has beta = 0.229 with p < 0.05, so it is significant. Development → Organizational Performance is also significant. Rewards → Organizational Performance has a very small beta (below 0.1) and p = 0.335, so it is not significant. The webinar also uses R² to report explained variance: 58.9% of Organizational Performance variance is explained by Vision, Development, and Rewards.

What logic determines full vs partial mediation in SmartPLS mediation results?

Full mediation occurs when the direct effect (IV → DV, with the mediator included) is insignificant while the indirect effect is significant. Partial mediation occurs when both direct and indirect effects are significant, meaning some influence passes through the mediator and some remains direct. The webinar also emphasizes using specific indirect effects (e.g., IV → Mediator → DV) and confidence intervals: if the interval excludes zero, the indirect effect is significant.
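The confidence-interval rule for a specific indirect effect can be illustrated with a small bootstrap. This is a hedged sketch on simulated data (the IV/mediator/DV names and coefficients are hypothetical): the indirect effect is the product of the IV → Mediator slope and the Mediator → DV slope (controlling for the IV), and it is significant if the bootstrap interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
# Hypothetical scores: IV -> Mediator -> DV with a genuine indirect path
iv = rng.normal(size=n)
mediator = 0.5 * iv + rng.normal(size=n)
dv = 0.4 * mediator + 0.1 * iv + rng.normal(size=n)

def slope(x, y, control=None):
    """OLS slope of y on x, optionally controlling for a second variable."""
    cols = [np.ones(len(x)), x] + ([control] if control is not None else [])
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1]

B = 5_000
indirect = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, size=n)
    a = slope(iv[i], mediator[i])                  # IV -> Mediator
    m = slope(mediator[i], dv[i], control=iv[i])   # Mediator -> DV, IV held
    indirect[b] = a * m

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect 95% CI = ({lo:.3f}, {hi:.3f})")  # significant if it excludes zero
```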

How is moderation interpreted, and why is slope analysis recommended after a significant interaction?

Moderation is tested through an interaction effect between the independent variable and the moderator; a significant interaction means the moderator changes the strength or direction of the IV → DV relationship. In the example, Role Ambiguity moderates the link between Collaborative Culture and Organizational Performance with a negative interaction effect, weakening the positive relationship. Slope analysis clarifies the pattern by showing how the IV → DV relationship differs at low, mean, and high moderator levels—steeper slopes at low ambiguity versus flatter slopes at high ambiguity indicate the dampening effect.
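Simple slopes follow directly from the fitted interaction model. In this illustrative sketch (simulated standardized scores with a hypothetical negative interaction, echoing the Role Ambiguity example), the slope of performance on culture at a given moderator level m is b_culture + b_interaction × m, evaluated at −1 SD, the mean, and +1 SD.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
# Hypothetical standardized scores; the negative interaction mimics Role Ambiguity
culture = rng.normal(size=n)
ambiguity = rng.normal(size=n)
performance = 0.5 * culture - 0.2 * culture * ambiguity + rng.normal(size=n)

# Regression with main effects and the interaction term
X = np.column_stack([np.ones(n), culture, ambiguity, culture * ambiguity])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
b_culture, b_interaction = beta[1], beta[3]

# Simple slopes: d(performance)/d(culture) at low / mean / high moderator levels
for label, m in [("low (-1 SD)", -1.0), ("mean", 0.0), ("high (+1 SD)", 1.0)]:
    print(f"{label}: slope = {b_culture + b_interaction * m:.3f}")
```

Because the interaction is negative, the slope shrinks as the moderator rises—exactly the flattening pattern the slope plot makes visible.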

What SmartPLS modeling step is required when moderators are included in the measurement model?

When moderators are present, the measurement model should be run without the interaction term. The webinar instructs users to connect moderators to the dependent variable of the relationship they moderate, run the measurement model for reliability/validity, then duplicate the model to build the structural model with the interaction effect included.

Review Questions

  1. In the structural model example, which paths were significant and what were the p-values used to reach those conclusions?
  2. What pattern of direct and indirect effects indicates full mediation versus partial mediation?
  3. Why does moderation require an interaction term, and how does slope analysis help interpret a negative moderation effect?

Key Points

  1. Confirm measurement-model quality (factor loadings, reliability, validity) before interpreting structural paths.
  2. Check common method bias in SmartPLS using VIF; VIF values below 3 indicate no serious common method bias.
  3. Use bootstrapping for structural significance testing, typically with 10,000 resamples, BCa confidence intervals, and one-tailed tests when effect direction is hypothesized.
  4. Interpret structural results using beta coefficients, p-values (or t-statistics with the correct one-tailed/two-tailed cutoff), and R² for explained variance.
  5. Report mediation by distinguishing direct effects from specific indirect effects and classify full vs partial mediation based on significance patterns.
  6. Test moderation through interaction effects; if significant, use slope analysis to interpret how the IV → DV relationship changes at low/mean/high moderator levels.
  7. When moderators are included, run the measurement model without the interaction term, then duplicate to create the structural model with moderation effects.

Highlights

VIF-based common method bias check: all VIF values below 3 means no serious common method bias, allowing structural testing to proceed.
In the simplified structural model, Vision and Development significantly predict Organizational Performance, while Rewards does not (p = 0.335).
Mediation classification rule of thumb: full mediation appears when the direct effect becomes insignificant but the indirect effect remains significant; partial mediation appears when both are significant.
Moderation is about relationships, not variables: a negative interaction effect for Role Ambiguity means higher ambiguity weakens the positive effect of Collaborative Culture on Organizational Performance.
SmartPLS modeling mechanics matter: measure reliability/validity without interaction terms, then add the interaction in the structural model.

Topics

Mentioned

  • VIF
  • SEM
  • PLS
  • DV
  • IV
  • BCa confidence intervals