#SmartPLS Workshop - Basic and Advanced use of SmartPLS3 (See Description for #SmartPLS4)
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Model latent constructs by mapping each unobserved variable to its questionnaire indicators, then import the dataset into SmartPLS using CSV format.
Briefing
SmartPLS analysis starts by treating each questionnaire item set as indicators of an unobserved (latent) construct, then validating those constructs before testing relationships among them. In the workshop example, organizational learning is measured indirectly through eight indicators (a latent variable), organizational culture is measured through multiple items (another latent variable), and the model also includes mediators and moderators. After collecting responses from 341 participants and exporting the dataset as a CSV file, the workflow moves into SmartPLS where the model is drawn on a canvas and run in two major phases: measurement-model quality checks and structural-model hypothesis testing.
The first major topic is structural equation modeling (SEM), presented as the conceptual foundation for SmartPLS. SEM is described as a family of techniques that assesses relationships among many variables simultaneously, including latent constructs, mediators, moderators, and error terms. It is positioned as more flexible than basic regression because it can handle multiple dependent variables and complex paths in one framework. Crucially, SEM is framed as theory-driven and correlation-based rather than a tool that automatically proves causation; experimental design is still required for causal claims. The workshop also distinguishes variables (directly measured scores like age or income) from constructs (hypothetical concepts like job satisfaction measured through multiple items).
Once the model is specified in SmartPLS, the measurement model is evaluated using reliability and validity criteria. Reliability is checked via Cronbach’s alpha and composite reliability, with acceptable thresholds typically above 0.70. Convergent validity is assessed using Average Variance Extracted (AVE), where values above 0.50 indicate that indicators sufficiently “converge” to represent their latent construct. Discriminant validity then verifies that constructs are distinct from one another. The workshop demonstrates multiple discriminant-validity checks: the Fornell–Larcker criterion (comparing the square root of AVE to inter-construct correlations), HTMT (reported as a ratio with common cutoffs like 0.85 or 0.90), and cross-loadings (each indicator should load highest on its own construct). Items are generally not deleted unless discriminant validity or convergent validity fails in a meaningful way.
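The reliability and convergent-validity numbers above can be sketched in plain NumPy. This is an illustrative calculation only, not SmartPLS's internal implementation, and the example loadings are invented:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one construct's item matrix (n_respondents x k_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def ave_and_cr(loadings):
    """AVE and composite reliability from standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    ave = np.mean(lam ** 2)                        # mean squared loading
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
    return ave, cr

# Workshop thresholds: alpha and CR above 0.70, AVE above 0.50
ave, cr = ave_and_cr([0.8, 0.8, 0.8])  # hypothetical loadings for one construct
```

With these invented loadings of 0.8, AVE is 0.64 and composite reliability clears the 0.70 cutoff, so the construct would pass both checks.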
After the measurement model passes, the structural model is tested using bootstrapping. Path coefficients (beta weights) indicate the strength of relationships, while bootstrapped t-statistics and p-values determine significance (with t-values above 1.96 and p-values below 0.05 treated as significant in the examples). For mediation, the workshop emphasizes testing the indirect effect first (the chain from IV to mediator to DV). If the direct effect remains significant alongside a significant indirect effect, the result is partial mediation; if the direct effect becomes insignificant while the indirect effect stays significant, it is full mediation.
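The bootstrapped indirect-effect logic can be sketched as follows, with simulated data standing in for the workshop dataset. All variable names and effect sizes here are invented, and SmartPLS performs this with its own bootstrapping routine:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 341  # matches the workshop's sample size

# Simulated IV -> mediator -> DV chain (effect sizes are illustrative)
iv = rng.normal(size=n)
med = 0.5 * iv + rng.normal(size=n)
dv = 0.4 * med + 0.2 * iv + rng.normal(size=n)

def indirect_effect(iv, med, dv):
    a = np.polyfit(iv, med, 1)[0]                      # a-path: IV -> mediator
    X = np.column_stack([np.ones_like(iv), iv, med])   # b-path: DV on mediator, controlling IV
    b = np.linalg.lstsq(X, dv, rcond=None)[0][2]
    return a * b

# Resample respondents with replacement and recompute the indirect effect
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(iv[idx], med[idx], dv[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile confidence interval
# The indirect effect is treated as significant when this interval excludes zero
```

Checking whether the direct path (IV to DV) also stays significant then distinguishes partial from full mediation, as the paragraph above describes.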
Moderation is handled by adding an interaction term using SmartPLS’s product-indicator approach (standardized product terms). The example shows role ambiguity weakening the relationship between collaborative culture and organizational performance, then probes the interaction using simple slope logic (low vs. high moderator levels). The session also extends to higher-order constructs using hierarchical component modeling (HCM), including reflective–reflective and reflective–formative cases. For reflective–reflective higher-order constructs, a disjoint two-stage approach is used: validate lower-order dimensions first, generate latent variable scores, then validate and test the higher-order construct. For reflective–formative higher-order constructs, the workshop highlights collinearity diagnostics (VIF < 5) and the need to validate formative indicators through outer weights and outer loadings. Finally, it outlines how to report results: measurement model tables first (loadings, reliability, AVE, discriminant validity), then structural results (path coefficients, significance), with mediation and moderation presented separately when needed.
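The product-term moderation and simple-slope probing can be sketched like this, using simulated data loosely echoing the role-ambiguity example. The names and coefficients are illustrative, not the workshop's actual results:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 341
culture = rng.normal(size=n)     # collaborative culture (IV)
ambiguity = rng.normal(size=n)   # role ambiguity (moderator)
# Simulated performance with a negative interaction: ambiguity weakens the culture effect
perf = 0.5 * culture - 0.3 * culture * ambiguity + rng.normal(size=n)

def z(v):
    return (v - v.mean()) / v.std(ddof=1)

# Product-indicator style: standardize first, then multiply to form the interaction term
zc, za = z(culture), z(ambiguity)
X = np.column_stack([np.ones(n), zc, za, zc * za])
b0, b1, b2, b3 = np.linalg.lstsq(X, perf, rcond=None)[0]

# Simple slopes of culture -> performance at -1 SD and +1 SD of the moderator
slope_low = b1 + b3 * -1.0
slope_high = b1 + b3 * +1.0
```

A negative interaction coefficient (b3) with a flatter slope at high ambiguity reproduces the "weakening" pattern the workshop interprets via simple slope logic.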
Cornell Notes
The workshop lays out a complete SmartPLS workflow for models with latent constructs, mediators, and moderators. It begins by validating the measurement model—checking reliability (Cronbach’s alpha and composite reliability), convergent validity (AVE > 0.50), and discriminant validity (Fornell–Larcker, HTMT, and cross-loadings). Only after those checks pass does it test the structural model using bootstrapping to obtain path coefficients, t-statistics, and p-values. Mediation is evaluated through specific indirect effects, distinguishing full vs. partial mediation based on whether the direct effect remains significant. Moderation is tested by adding interaction terms (product indicators) and probing the interaction with simple slope logic. The session also explains higher-order constructs via hierarchical component modeling using a two-stage approach for reflective–reflective and additional steps for reflective–formative constructs.
Why does SmartPLS require a measurement-model assessment before testing relationships among constructs?
How does the workshop determine whether a relationship is significant in the structural model?
What’s the difference between full and partial mediation in SmartPLS?
How does moderation work in the workshop’s SmartPLS approach?
What does discriminant validity mean, and how is it checked?
How are higher-order constructs handled, and why is a two-stage approach used?
Review Questions
- What specific criteria (reliability, AVE, and discriminant validity) must be satisfied before interpreting structural path coefficients in SmartPLS?
- In mediation testing, which effect is checked first (direct vs. indirect), and what pattern of significance corresponds to full vs. partial mediation?
- When moderation is significant, how does the workshop recommend probing the interaction (low/mean/high moderator levels) to interpret the direction of the effect?
Key Points
1. Model latent constructs by mapping each unobserved variable to its questionnaire indicators, then import the dataset into SmartPLS using CSV format.
2. Validate the measurement model first: check reliability (Cronbach’s alpha and composite reliability), convergent validity (AVE > 0.50), and discriminant validity (Fornell–Larcker, HTMT, and cross-loadings).
3. Use bootstrapping to test structural paths; interpret significance using bootstrapped t-statistics and p-values alongside path coefficients (beta weights).
4. Evaluate mediation through specific indirect effects; classify full vs. partial mediation by whether the direct effect remains significant when the mediator is included.
5. Test moderation by adding interaction terms via product-indicator methods and probe the interaction using simple slope comparisons at low, mean, and high moderator values.
6. For higher-order constructs, apply hierarchical component modeling: reflective–reflective uses a two-stage (disjoint) approach with latent variable scores; reflective–formative requires formative validation steps including collinearity diagnostics and outer weight/loading checks.
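To make the HTMT cutoff concrete, here is a minimal NumPy sketch of the heterotrait–monotrait ratio on simulated indicator data. The construct structure, loadings, and sample values are all invented for illustration; SmartPLS computes HTMT from its own model correlation matrices:

```python
import numpy as np

def htmt(items_a, items_b):
    """HTMT ratio for two constructs' indicator blocks, each (n_respondents x k)."""
    ka, kb = items_a.shape[1], items_b.shape[1]
    r = np.corrcoef(np.column_stack([items_a, items_b]), rowvar=False)
    hetero = r[:ka, ka:].mean()                            # between-construct item correlations
    mono_a = r[:ka, :ka][np.triu_indices(ka, 1)].mean()    # within-block correlations, block A
    mono_b = r[ka:, ka:][np.triu_indices(kb, 1)].mean()    # within-block correlations, block B
    return hetero / np.sqrt(mono_a * mono_b)

# Simulated indicator blocks for two modestly related constructs (invented loadings)
rng = np.random.default_rng(0)
n = 341
f1 = rng.normal(size=n)
f2 = 0.3 * f1 + rng.normal(size=n)
A = np.column_stack([0.8 * f1 + 0.6 * rng.normal(size=n) for _ in range(4)])
B = np.column_stack([0.8 * f2 + 0.6 * rng.normal(size=n) for _ in range(4)])
# An htmt(A, B) value below the 0.85 cutoff supports discriminant validity
```

A ratio well below 0.85 (or 0.90, the looser cutoff mentioned in the workshop) indicates the two constructs are empirically distinct.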