
#SmartPLS Workshop - Basic and Advance use of SmartPLS3 (See Description for #SmartPLS4)

Research With Fawad · 6 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Model latent constructs by mapping each unobserved variable to its questionnaire indicators, then import the dataset into SmartPLS using CSV format.

Briefing

SmartPLS analysis starts by treating each questionnaire item set as indicators of an unobserved (latent) construct, then validating those constructs before testing relationships among them. In the workshop example, organizational learning is measured indirectly through eight indicators (a latent variable), organizational culture is measured through multiple items (another latent variable), and the model also includes mediators and moderators. After collecting responses from 341 participants and exporting the dataset as a CSV file, the workflow moves into SmartPLS where the model is drawn on a canvas and run in two major phases: measurement-model quality checks and structural-model hypothesis testing.

The workshop first presents structural equation modeling (SEM) as the conceptual foundation for SmartPLS. SEM is described as a family of techniques that assesses relationships among many variables simultaneously, including latent constructs, mediators, moderators, and error terms. It is positioned as more flexible than basic regression because it can handle multiple dependent variables and complex paths in one framework. Crucially, SEM is framed as theory-driven and correlation-based rather than a tool that automatically proves causation; experimental design is still required for causal claims. The workshop also distinguishes variables (directly measured scores such as age or income) from constructs (hypothetical concepts such as job satisfaction measured through multiple items).

Once the model is specified in SmartPLS, the measurement model is evaluated using reliability and validity criteria. Reliability is checked via Cronbach’s alpha and composite reliability, with acceptable thresholds typically above 0.70. Convergent validity is assessed using Average Variance Extracted (AVE), where values above 0.50 indicate that indicators sufficiently “converge” to represent their latent construct. Discriminant validity then verifies that constructs are distinct from one another. The workshop demonstrates multiple discriminant-validity checks: the Fornell–Larcker criterion (comparing the square root of AVE to inter-construct correlations), HTMT (reported as a ratio with common cutoffs like 0.85 or 0.90), and cross-loadings (each indicator should load highest on its own construct). Items are generally not deleted unless discriminant validity or convergent validity fails in a meaningful way.
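SmartPLS reports these statistics directly, but the underlying formulas are simple. As a rough illustration, the following Python sketch computes composite reliability and AVE from a set of standardized outer loadings (the eight-indicator construct echoes the organizational-learning example, but the loading values themselves are made up):

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (rho_c) from standardized outer loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical loadings for an eight-indicator construct (illustrative values)
loadings = [0.72, 0.75, 0.78, 0.80, 0.74, 0.77, 0.79, 0.76]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR = {cr:.3f} (threshold > 0.70), AVE = {ave:.3f} (threshold > 0.50)")
```

With these loadings both thresholds pass comfortably, which is the pattern the workshop treats as acceptable convergent validity.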

After the measurement model passes, the structural model is tested using bootstrapping. Path coefficients (beta weights) indicate the strength of relationships, while bootstrapped t-statistics and p-values determine significance (with t-values above 1.96 and p-values below 0.05 treated as significant in the examples). For mediation, the workshop emphasizes testing the indirect effect first (the chain from IV to mediator to DV). If the direct effect remains significant alongside a significant indirect effect, the result is partial mediation; if the direct effect becomes insignificant while the indirect effect stays significant, it is full mediation.

Moderation is handled by adding an interaction term using SmartPLS’s product-indicator approach (standardized product terms). The example shows role ambiguity weakening the relationship between collaborative culture and organizational performance, then probes the interaction using simple slope logic (low vs. high moderator levels). The session also extends to higher-order constructs using hierarchical component modeling (HCM), including reflective–reflective and reflective–formative cases. For reflective–reflective higher-order constructs, a disjoint two-stage approach is used: validate lower-order dimensions first, generate latent variable scores, then validate and test the higher-order construct. For reflective–formative higher-order constructs, the workshop highlights collinearity diagnostics (VIF < 5) and the need to validate formative indicators through outer weights and outer loadings. Finally, it outlines how to report results: measurement model tables first (loadings, reliability, AVE, discriminant validity), then structural results (path coefficients, significance), with mediation and moderation presented separately when needed.

Cornell Notes

The workshop lays out a complete SmartPLS workflow for models with latent constructs, mediators, and moderators. It begins by validating the measurement model—checking reliability (Cronbach’s alpha and composite reliability), convergent validity (AVE > 0.50), and discriminant validity (Fornell–Larcker, HTMT, and cross-loadings). Only after those checks pass does it test the structural model using bootstrapping to obtain path coefficients, t-statistics, and p-values. Mediation is evaluated through specific indirect effects, distinguishing full vs. partial mediation based on whether the direct effect remains significant. Moderation is tested by adding interaction terms (product indicators) and probing the interaction with simple slope logic. The session also explains higher-order constructs via hierarchical component modeling using a two-stage approach for reflective–reflective and additional steps for reflective–formative constructs.

Why does SmartPLS require a measurement-model assessment before testing relationships among constructs?

Because latent variables (unobserved constructs) are only represented through questionnaire indicators. SmartPLS first checks whether those indicators reliably and validly measure each construct. Reliability is assessed using Cronbach’s alpha and composite reliability (commonly > 0.70). Convergent validity is assessed using AVE, where AVE should exceed 0.50. Discriminant validity then confirms constructs are distinct, using Fornell–Larcker (square root of AVE greater than inter-construct correlations), HTMT (ratio typically < 0.85 or < 0.90), and cross-loadings (each indicator loads highest on its own construct). If these quality checks fail, structural path results would be misleading.
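The Fornell–Larcker comparison reduces to a small check: given each construct's AVE and the inter-construct correlation matrix, the square root of every AVE must exceed that construct's correlations with all other constructs. A minimal Python sketch with made-up values (not the workshop's data):

```python
import numpy as np

def fornell_larcker_ok(ave, corr):
    """Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed
    its absolute correlations with every other construct."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.asarray(corr, dtype=float)
    n = len(sqrt_ave)
    for i in range(n):
        for j in range(n):
            if i != j and sqrt_ave[i] <= abs(corr[i, j]):
                return False
    return True

# Hypothetical: three constructs with their AVEs and correlations
ave = [0.58, 0.62, 0.55]
corr = np.array([[1.00, 0.46, 0.51],
                 [0.46, 1.00, 0.39],
                 [0.51, 0.39, 1.00]])
ok = fornell_larcker_ok(ave, corr)
print(ok)  # sqrt(AVE) values (~0.74-0.79) all exceed the correlations
```

In SmartPLS output this is the familiar table whose diagonal (square roots of AVE) must dominate each row and column.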

How does the workshop determine whether a relationship is significant in the structural model?

Significance comes from bootstrapping results. After running bootstrapping (e.g., 500 subsamples in the session; commonly 5,000 recommended), SmartPLS provides path coefficients (beta weights), standard errors, t-statistics, and p-values. The workshop uses rules of thumb such as t-statistics > 1.96 and p-values < 0.05 to treat a path as significant. The beta coefficient indicates direction and strength (higher absolute weight implies stronger standardized impact).
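SmartPLS performs the resampling internally; the logic can be sketched in Python using an OLS slope as a stand-in for a PLS path coefficient. The simulated data, seed, and true effect of 0.4 are assumptions for illustration only (the n = 341 mirrors the workshop's sample size):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated standardized predictor and outcome (illustrative data)
n = 341
x = rng.standard_normal(n)
y = 0.4 * x + rng.standard_normal(n)

def path_coef(x, y):
    """OLS slope as a simple stand-in for a PLS path coefficient."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Bootstrap: resample cases with replacement, re-estimate, collect the coefficient
B = 500  # the session uses 500 subsamples; 5,000 is commonly recommended
boots = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boots[b] = path_coef(x[idx], y[idx])

beta_hat = path_coef(x, y)
se = boots.std(ddof=1)          # bootstrap standard error
t_stat = beta_hat / se
print(f"beta = {beta_hat:.3f}, t = {t_stat:.2f}, "
      f"significant: {abs(t_stat) > 1.96}")
```

The decision rule is the same one the workshop applies to SmartPLS's bootstrapping report: |t| > 1.96 (equivalently p < 0.05) marks the path as significant.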

What’s the difference between full and partial mediation in SmartPLS?

Mediation is assessed through the specific indirect effect (IV → mediator → DV). If the indirect effect is significant, mediation exists. Full mediation occurs when the direct effect (IV → DV while the mediator is in the model) is insignificant but the indirect effect is significant. Partial mediation occurs when both the direct effect and the indirect effect are significant. The workshop emphasizes checking the indirect effect first, then comparing direct vs. indirect significance to classify the mediation type.
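The classification reduces to a small piece of decision logic. This Python sketch (the helper name and boolean flags are hypothetical) mirrors the workshop's rule:

```python
def classify_mediation(indirect_sig, direct_sig):
    """Classify mediation from bootstrapped significance results:
    check the indirect effect first, then use the direct effect to
    distinguish full from partial mediation."""
    if not indirect_sig:
        return "no mediation"
    return "partial mediation" if direct_sig else "full mediation"

print(classify_mediation(indirect_sig=True, direct_sig=False))  # full mediation
print(classify_mediation(indirect_sig=True, direct_sig=True))   # partial mediation
print(classify_mediation(indirect_sig=False, direct_sig=True))  # no mediation
```

In SmartPLS terms, `indirect_sig` would come from the specific indirect effects table and `direct_sig` from the direct path coefficient, both after bootstrapping.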

How does moderation work in the workshop’s SmartPLS approach?

Moderation means a third variable changes the strength of an existing relationship (it can strengthen, weaken, or even reverse it). In SmartPLS, the workshop adds a moderating effect by linking the moderator to the dependent variable of the moderated path and using a product-indicator method (standardized product terms). After bootstrapping confirms the interaction is significant, the workshop probes it using simple slope logic: compare predicted effects at low (−1 SD), mean, and high (+1 SD) moderator values. The example shows role ambiguity negatively moderating the CC → OP relationship, meaning higher role ambiguity weakens the positive link between collaborative culture and organizational performance.
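Simple slope probing amounts to evaluating the conditional effect b_x + b_int·M at chosen moderator values. A Python sketch with illustrative coefficients (the values are assumptions, not the workshop's estimates):

```python
def simple_slopes(b_x, b_interaction, moderator_sd=1.0):
    """Predicted slope of X -> Y at low (-1 SD), mean, and high (+1 SD)
    levels of a standardized moderator: slope = b_x + b_interaction * M."""
    levels = {"-1 SD": -moderator_sd, "mean": 0.0, "+1 SD": moderator_sd}
    return {name: b_x + b_interaction * m for name, m in levels.items()}

# Hypothetical: positive main path, negative interaction (as with role ambiguity)
slopes = simple_slopes(b_x=0.45, b_interaction=-0.15)
for level, s in slopes.items():
    print(f"{level}: slope = {s:.2f}")
```

A negative interaction term produces exactly the pattern described above: the slope shrinks as the moderator rises (0.60 at −1 SD, 0.45 at the mean, 0.30 at +1 SD here).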

What does discriminant validity mean, and how is it checked?

Discriminant validity ensures each latent construct is statistically distinct from other constructs—important in social science where constructs can overlap conceptually. The workshop checks it using three methods: (1) Fornell–Larcker: the square root of AVE for a construct should exceed its correlations with other constructs; (2) HTMT: the HTMT ratio should be below common thresholds (e.g., < 0.85 or < 0.90); and (3) cross-loadings: each indicator should load more strongly on its own construct than on other constructs. If discriminant validity fails, the workshop suggests deleting problematic indicators based on cross-loading patterns and threshold differences (e.g., differences less than 0.10 are treated as problematic).
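The HTMT ratio itself can be computed from the indicator correlation matrix: the mean of the heterotrait correlations (between indicators of different constructs) divided by the geometric mean of each construct's average monotrait correlation (between its own indicators). A minimal Python sketch with a hypothetical two-construct, four-indicator example:

```python
import numpy as np

def htmt(corr, idx_a, idx_b):
    """Heterotrait-monotrait ratio for two constructs, given the indicator
    correlation matrix and each construct's indicator column indices."""
    corr = np.asarray(corr, dtype=float)
    # Mean correlation between indicators of different constructs
    hetero = corr[np.ix_(idx_a, idx_b)].mean()
    def mono(idx):
        # Mean correlation among a construct's own indicators (upper triangle)
        sub = corr[np.ix_(idx, idx)]
        iu = np.triu_indices(len(idx), k=1)
        return sub[iu].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Hypothetical indicator correlations: construct A = items 0-1, B = items 2-3
R = np.array([[1.00, 0.70, 0.30, 0.35],
              [0.70, 1.00, 0.32, 0.30],
              [0.30, 0.32, 1.00, 0.65],
              [0.35, 0.30, 0.65, 1.00]])
ratio = htmt(R, [0, 1], [2, 3])
print(f"HTMT = {ratio:.3f} (below the 0.85/0.90 cutoffs)")
```

High within-construct correlations relative to between-construct correlations drive the ratio down, which is exactly what "distinct constructs" means here.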

How are higher-order constructs handled, and why is a two-stage approach used?

Higher-order constructs (HOCs) represent broader concepts measured through subdimensions (lower-order constructs). The workshop explains hierarchical component modeling (HCM) and focuses on reflective–reflective and reflective–formative cases. For reflective–reflective HOCs, it uses a disjoint two-stage approach: Stage 1 validates the measurement model for lower-order dimensions and generates latent variable scores; Stage 2 replaces lower-order dimensions with those scores as indicators for the higher-order construct, then re-validates and tests the structural relationships. For reflective–formative HOCs, it adds formative validation steps such as collinearity diagnostics (VIF < 5) and checking outer weights and outer loadings to ensure formative indicators contribute meaningfully.
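The collinearity check for formative indicators is an ordinary VIF computation: regress each indicator on the others and take 1/(1 − R²). A Python sketch with simulated data (the 341 rows echo the workshop's sample size, but the values are random):

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column of X: VIF_j = 1 / (1 - R^2_j),
    where R^2_j comes from regressing column j on the remaining columns."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other columns
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
X = rng.standard_normal((341, 3))
X[:, 2] += 0.5 * X[:, 0]  # induce mild collinearity between two indicators
vifs = vif(X)
print(vifs)  # all values fall well below the VIF < 5 cutoff
```

Values approaching or exceeding 5 would signal that two formative indicators carry largely redundant information, which is what the workshop's diagnostic guards against.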

Review Questions

  1. What specific criteria (reliability, AVE, and discriminant validity) must be satisfied before interpreting structural path coefficients in SmartPLS?
  2. In mediation testing, which effect is checked first (direct vs. indirect), and what pattern of significance corresponds to full vs. partial mediation?
  3. When moderation is significant, how does the workshop recommend probing the interaction (low/mean/high moderator levels) to interpret the direction of the effect?

Key Points

  1. Model latent constructs by mapping each unobserved variable to its questionnaire indicators, then import the dataset into SmartPLS using CSV format.
  2. Validate the measurement model first: check reliability (Cronbach’s alpha and composite reliability), convergent validity (AVE > 0.50), and discriminant validity (Fornell–Larcker, HTMT, and cross-loadings).
  3. Use bootstrapping to test structural paths; interpret significance using bootstrapped t-statistics and p-values alongside path coefficients (beta weights).
  4. Evaluate mediation through specific indirect effects; classify full vs. partial mediation by whether the direct effect remains significant when the mediator is included.
  5. Test moderation by adding interaction terms via product-indicator methods and probe the interaction using simple slope comparisons at low, mean, and high moderator values.
  6. For higher-order constructs, apply hierarchical component modeling: reflective–reflective uses a two-stage (disjoint) approach with latent variable scores; reflective–formative requires formative validation steps including collinearity diagnostics and outer weight/loading checks.

Highlights

SmartPLS workflow is built around a strict order: measurement-model validation (reliability + convergent + discriminant validity) before structural-model hypothesis testing.
Mediation classification hinges on significance patterns: significant indirect effect plus insignificant direct effect indicates full mediation; both significant indicates partial mediation.
Moderation is implemented through interaction terms (product indicators) and interpreted by comparing effects at −1 SD, mean, and +1 SD moderator levels.
Higher-order constructs are handled through hierarchical component modeling, with a two-stage approach for reflective–reflective cases and extra formative validation steps for reflective–formative cases.
Discriminant validity can be confirmed using multiple lenses—Fornell–Larcker, HTMT, and cross-loadings—rather than relying on a single statistic.

Mentioned

  • IV
  • DV
  • SEM
  • PLS
  • SPSS
  • CSV
  • AVE
  • HTMT
  • HCM
  • LVS
  • BCA
  • VIF
  • HOC