# SmartPLS 4 Webinar Day 4: Complex Modelling Example using SmartPLS
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
## Briefing
Structural equation modeling in SmartPLS starts with disciplined data screening and then moves through a two-stage quality check: measurement model assessment (reliability and validity) before any hypothesis testing in the structural model. The workflow begins by cleaning the dataset—checking minimum/maximum values, handling missing data, identifying outliers, and screening for respondent misconduct using standard-deviation criteria. Once the data pass these checks, the measurement model is evaluated using factor loadings, reliability (Cronbach’s alpha and composite reliability), and construct validity. Convergent validity is assessed via Average Variance Extracted (AVE), which must exceed 0.50. Discriminant validity is then tested to ensure constructs are distinct rather than overlapping, using HTMT (with conservative guidance often <0.85) and the Fornell–Larcker criterion (the square root of each construct’s AVE should exceed its correlations with other constructs). Cross-loadings can also be used as a practical diagnostic when discriminant validity looks weak.
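SmartPLS reports these statistics directly, but the underlying arithmetic is simple. A minimal NumPy sketch of the two convergent/discriminant checks named above—AVE from standardized loadings, and the Fornell–Larcker comparison—using hypothetical loading values:

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings ** 2))

def fornell_larcker_ok(ave_by_construct, construct_corr):
    """Fornell-Larcker criterion: sqrt(AVE) of every construct must exceed
    its correlation with each other construct."""
    sqrt_ave = np.sqrt(np.asarray(ave_by_construct, dtype=float))
    corr = np.asarray(construct_corr, dtype=float)
    n = len(sqrt_ave)
    for i in range(n):
        for j in range(n):
            if i != j and sqrt_ave[i] <= abs(corr[i, j]):
                return False
    return True

# Hypothetical outer loadings for one reflective construct
print(ave([0.82, 0.78, 0.85]))   # ~0.67, clears the 0.50 threshold
```

The `fornell_larcker_ok` helper takes the per-construct AVEs and the latent-variable correlation matrix, mirroring how the criterion is read off SmartPLS's discriminant-validity table.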
After the measurement model is sound, SmartPLS can handle more complex research designs—higher-order constructs, mediators, moderators, and comparisons across groups. For multi-group analysis (MGA), the key idea is to test whether path relationships differ across groups such as male vs. female. The process runs bootstrapping separately for each group and then compares path coefficients. In the example, collaborative culture shows a negative effect on role ambiguity in both groups, but the effect is insignificant for females (driven by a higher standard error and a smaller female subsample) and stronger for males. MGA also checks whether mediation differs across groups; here, collaborative culture does not mediate through the proposed pathway in either group. A second MGA approach—bootstrap MGA—tests whether differences in path coefficients are statistically significant; most differences are not significant except for one relationship involving how internal marketing (IM) influences collaborative culture (CC).
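The bootstrap-MGA logic can be illustrated outside SmartPLS. The sketch below is a deliberate simplification: it uses a standardized bivariate slope as a stand-in for a single PLS path (real PLS-SEM estimates all paths jointly), resamples each group independently, and checks whether the bootstrap distribution of the coefficient difference crosses zero.

```python
import numpy as np

def path_coef(x, y):
    """Standardized bivariate coefficient (simple stand-in for a PLS path)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def bootstrap_mga(x_a, y_a, x_b, y_b, n_boot=2000, seed=1):
    """Bootstrap MGA sketch: resample each group independently and test
    whether the path-coefficient difference is distinguishable from zero."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        ia = rng.integers(0, len(x_a), len(x_a))
        ib = rng.integers(0, len(x_b), len(x_b))
        diffs[b] = path_coef(x_a[ia], y_a[ia]) - path_coef(x_b[ib], y_b[ib])
    # two-sided p-value: share of bootstrap differences on the minority side of zero
    p = 2 * min(np.mean(diffs > 0), np.mean(diffs < 0))
    return float(np.mean(diffs)), float(p)
```

This also shows why a group-specific path can look "stronger" yet stay insignificant: a small subsample inflates the bootstrap standard error, widening the interval around the group's coefficient.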
The webinar then scales up to a complex model combining reflective and formative constructs, higher-order constructs, mediators, and a higher-order moderator. The model is built by first placing lower-order constructs (the subdimensions) on the canvas, then linking them to their higher-order constructs. For reflective–reflective higher-order constructs (e.g., internal service quality), validation follows the same logic as ordinary measurement models: outer loadings, reliability, AVE, and discriminant validity. For reflective–formative higher-order constructs (e.g., internal marketing and role stress), the process adds collinearity checks (VIF) and bootstrapping to validate formative weights. Even when a formative weight’s p-value is not significant, the guidance is not to automatically delete indicators; instead, confirm that outer loadings are significant and that the construct meets the broader measurement-quality thresholds.
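The collinearity check for formative indicators is a standard VIF computation: each indicator is regressed on its sibling indicators, and VIF = 1 / (1 − R²). SmartPLS reports this itself; a NumPy sketch of the calculation, for illustration:

```python
import numpy as np

def vif(indicators):
    """Variance inflation factor per column: regress each indicator on the
    remaining indicators and compute 1 / (1 - R^2)."""
    X = np.asarray(indicators, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])        # intercept + siblings
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - (y - A @ beta).var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out
```

VIF values near 1 indicate independent indicators; common PLS-SEM guidance flags values above 3–5 as problematic collinearity among formative indicators.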
Finally, the structural model is bootstrapped (e.g., 10,000 resamples) to test direct effects, mediation, and moderation. Direct paths are interpreted using p-values and effect sizes (beta coefficients). Mediation is classified as partial when both direct and indirect effects are significant, and as full when the direct effect becomes insignificant while the indirect effect remains significant. Moderation is handled via interaction effects; if moderation is insignificant, slope analysis is unnecessary. The workflow can also incorporate control variables (age, gender, job rank), with the recommendation to avoid dummy variables for binary or ordinal variables and to compare results with and without controls to confirm whether the model changes meaningfully. The overall takeaway is that complex SmartPLS modeling is less about shortcuts and more about a repeatable quality-control pipeline: clean data, validate measurement models at every construct level, then interpret structural hypotheses with bootstrapped inference.
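The partial-vs.-full mediation rule can be made concrete with a bootstrap sketch. This is not SmartPLS's internal estimator—it uses two plain OLS regressions (x → m, then y on x and m together) as stand-ins for the PLS paths—but the classification logic is the same: bootstrap the indirect effect a·b and the direct effect c′, then check whose confidence interval excludes zero.

```python
import numpy as np

def ols_slope(x, y):
    """Simple-regression slope (stand-in for a single path)."""
    x = x - x.mean()
    return float(np.dot(x, y - y.mean()) / np.dot(x, x))

def classify_mediation(x, m, y, n_boot=2000, alpha=0.05, seed=1):
    """Bootstrap indirect (a*b) and direct (c') effects, then classify:
    partial = both significant; full = only the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ab, cp = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)
        xb, mb, yb = x[i], m[i], y[i]
        a = ols_slope(xb, mb)                            # path x -> m
        A = np.column_stack([np.ones(n), xb, mb])        # y on x and m jointly
        beta, *_ = np.linalg.lstsq(A, yb, rcond=None)
        ab[b] = a * beta[2]                              # indirect effect a*b
        cp[b] = beta[1]                                  # direct effect c'
    def sig(s):
        lo, hi = np.quantile(s, [alpha / 2, 1 - alpha / 2])
        return lo > 0 or hi < 0                          # CI excludes zero?
    if sig(ab) and sig(cp):
        return "partial mediation"
    if sig(ab):
        return "full mediation"
    return "no mediation"
```

Moderation follows the same bootstrapped-inference pattern on an interaction term; as the workflow notes, slope analysis is only worth running once that interaction path is itself significant.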
## Cornell Notes
SmartPLS modeling in this workflow follows a strict order: clean the data, validate the measurement model, then test the structural model. Measurement quality is checked through factor loadings, reliability (Cronbach’s alpha and composite reliability), convergent validity (AVE > 0.50), and discriminant validity (HTMT and Fornell–Larcker; cross-loadings as a diagnostic). Multi-group analysis (MGA) compares path coefficients across groups like male vs. female using bootstrapping; differences are judged by significance, and mediation differences can be tested separately. For complex models, reflective–reflective and reflective–formative higher-order constructs are validated differently: reflective parts use loadings/AVE logic, while formative parts require VIF checks and bootstrapped weight significance. Structural results then classify direct effects, mediation (partial vs. full), and moderation (including when slope analysis is unnecessary).
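Of the discriminant-validity checks in this summary, HTMT is the least obvious to compute by hand: it is the mean heterotrait (cross-construct) item correlation divided by the geometric mean of the two monotrait (within-construct) mean correlations. A NumPy sketch, for intuition only:

```python
import numpy as np

def htmt(items_a, items_b):
    """HTMT ratio for two constructs given their item-score matrices
    (rows = respondents, columns = items)."""
    X = np.corrcoef(np.column_stack([items_a, items_b]), rowvar=False)
    ka, kb = items_a.shape[1], items_b.shape[1]
    hetero = np.abs(X[:ka, ka:]).mean()                          # cross-construct
    mono_a = np.abs(X[:ka, :ka][np.triu_indices(ka, 1)]).mean()  # within A
    mono_b = np.abs(X[ka:, ka:][np.triu_indices(kb, 1)]).mean()  # within B
    return float(hetero / np.sqrt(mono_a * mono_b))
```

Values approaching 1 mean the two constructs' items correlate across constructs almost as strongly as within them—exactly the overlap the <0.85 guidance is meant to catch.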
- Why does the workflow insist on validating the measurement model before running structural hypothesis tests?
- How does MGA in SmartPLS determine whether a relationship differs between male and female respondents?
- What distinguishes partial from full mediation in the structural model results?
- How are reflective–reflective vs. reflective–formative higher-order constructs validated differently?
- When is slope analysis for moderation unnecessary?
- How should control variables be handled in SmartPLS when they’re ordinal or binary?
## Review Questions
- What specific thresholds and tests are used to establish convergent validity and discriminant validity in this workflow?
- In MGA, how do you interpret a stronger path coefficient in one group when the group-specific path is still statistically insignificant?
- For a reflective–formative higher-order construct, what additional diagnostics are required beyond reflective measurement checks?
## Key Points
1. Start with data screening: check min/max values, missing data, outliers, and respondent misconduct using standard-deviation criteria before any modeling.
2. Validate the measurement model in order: factor loadings, reliability (Cronbach’s alpha and composite reliability), convergent validity (AVE > 0.50), then discriminant validity (HTMT and Fornell–Larcker; cross-loadings as a diagnostic).
3. Use multi-group analysis with bootstrapping to test whether path relationships differ across groups; judge both group-specific significance and the significance of the difference.
4. For mediation, classify partial vs. full mediation by whether direct effects remain significant after including the mediator, alongside the significance of indirect effects.
5. Validate higher-order constructs at the correct level: reflective–reflective uses loadings/AVE logic, while reflective–formative requires VIF checks and bootstrapped formative weights.
6. Don’t rely on moderation slope analysis when the moderation effect is insignificant; interpret moderation only when the interaction path is significant.
7. When adding control variables, compare models with and without controls to confirm whether key relationships meaningfully change; avoid dummy variables for ordinal/binary variables.
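One common standard-deviation screen from the first key point—flagging "straight-liners" who give near-identical answers to every item—can be sketched in a few lines. The 0.25 cutoff below is an illustrative assumption, not a value from the webinar:

```python
import numpy as np

def flag_straight_liners(responses, min_sd=0.25):
    """Flag respondents whose standard deviation across Likert items falls
    below min_sd: near-constant answering suggests careless responding.
    The min_sd=0.25 cutoff is a hypothetical choice for illustration."""
    R = np.asarray(responses, dtype=float)
    return np.nanstd(R, axis=1) < min_sd

# Hypothetical 5-point Likert data: respondent 2 answers '4' everywhere
data = [[4, 2, 5, 3, 4],
        [4, 4, 4, 4, 4],
        [1, 5, 2, 4, 3]]
print(flag_straight_liners(data))   # only the second respondent is flagged
```

Flagged rows would then be inspected (and typically removed) before any measurement-model assessment begins.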