How to Report #SmartPLS4 Results in a Research Paper
Based on the Research With Fawad video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
SmartPLS results reporting in a research paper hinges on a clear, repeatable structure: introduce the analysis approach, document any data screening/cleaning, then report measurement-model quality (reliability, convergent validity, discriminant validity) before moving to structural-model outputs (path coefficients, mediation/moderation, and predictive relevance). The payoff is straightforward—reviewers can quickly verify that constructs are measured well and that hypotheses are tested with the right statistics.
The first section of the “Data Analysis and Results” write-up should briefly describe the techniques used within structural equation modeling (without turning into a methods textbook). Structural equation modeling is split into a measurement model and a structural model. The measurement model focuses on construct quality—how well items represent constructs—while the structural model focuses on hypothesis testing—how constructs relate to each other. In practice, this often appears as a short “analysis procedure” paragraph that names the tool (SmartPLS), outlines the workflow, and then points to the specific outputs that will be reported.
A second step, data screening and cleaning, is optional and typically added when reviewers request it. This can be summarized in a few lines or a short paragraph. Typical checks include verifying that values fall within acceptable ranges (frequency/min–max), assessing skewness and kurtosis to avoid extreme normality violations, checking outliers via standardized values or box plots, and evaluating common method bias using variance inflation factors (VIF) or tests such as the Harman single-factor test. If issues arise, the write-up should state what was changed or how the dataset was corrected.
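SmartPLS performs these checks through its interface, but the underlying arithmetic is simple. As a minimal NumPy sketch on synthetic Likert-scale data (the dataset, sample size, and cutoffs here are illustrative assumptions, not outputs from the video), the screening steps look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical survey data: 200 respondents, 4 Likert-scale items scored 1-5
X = rng.integers(1, 6, size=(200, 4)).astype(float)

# Range check: every value should fall within the scale's bounds
assert X.min() >= 1 and X.max() <= 5

# Skewness and excess kurtosis per item (moment-based estimates)
z = (X - X.mean(axis=0)) / X.std(axis=0)
skew = (z ** 3).mean(axis=0)
kurt = (z ** 4).mean(axis=0) - 3

# Outliers via standardized values (|z| > 3 is a common flag)
outlier_flags = np.abs(z) > 3

# Collinearity: VIF equals the diagonal of the inverse correlation matrix
R = np.corrcoef(X, rowvar=False)
vif = np.diag(np.linalg.inv(R))
print(vif)  # values below 5 suggest no serious collinearity
```

Reporting would then condense this to one or two sentences, e.g., "all items fell within range, skewness and kurtosis were within acceptable bounds, and all VIF values were below 5."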
Once the measurement model is ready, reporting becomes more systematic. For reflective constructs, the core outputs include outer loadings (item loadings), collinearity diagnostics (often VIF with a common threshold of < 5), construct reliability (alpha, composite reliability), and convergent validity. These results are typically organized into one or a small number of tables, with the text explicitly referencing the tables and any thresholds used.
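The reliability and convergent-validity figures SmartPLS reports follow standard formulas. A small sketch, using hypothetical outer loadings for one reflective construct (the loading values are made up for illustration):

```python
import numpy as np

# Hypothetical standardized outer loadings for one reflective construct
loadings = np.array([0.82, 0.79, 0.85, 0.74])

# Composite reliability: (sum of loadings)^2 over itself plus total error variance
num = loadings.sum() ** 2
cr = num / (num + (1 - loadings ** 2).sum())

# Average variance extracted: mean of the squared loadings
ave = (loadings ** 2).mean()

# CR >= 0.7 and AVE >= 0.5 are the usual benchmarks for convergent validity;
# individual loadings of roughly 0.708 or higher are also commonly expected
print(round(cr, 3), round(ave, 3))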
Discriminant validity follows, commonly reported using Fornell–Larcker criteria and/or HTMT (heterotrait–monotrait ratio). The exact choice depends on what the study reports; the key is to present the criterion and apply the relevant threshold logic.
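The Fornell–Larcker logic is mechanical: the square root of each construct's AVE must exceed that construct's correlations with every other construct. A sketch with hypothetical AVE values and a hypothetical construct correlation matrix:

```python
import numpy as np

# Hypothetical AVE values and construct correlations for three constructs
ave = np.array([0.64, 0.58, 0.61])
corr = np.array([[1.00, 0.45, 0.38],
                 [0.45, 1.00, 0.52],
                 [0.38, 0.52, 1.00]])

# Fornell-Larcker: sqrt(AVE) must exceed every off-diagonal correlation in its row
sqrt_ave = np.sqrt(ave)
off_diag = corr - np.diag(np.diag(corr))
holds = all(sqrt_ave[i] > np.abs(off_diag[i]).max() for i in range(len(ave)))
print(holds)  # True means discriminant validity holds by this criterion
```

HTMT works differently (it is a ratio of between-construct to within-construct item correlations, commonly assessed against 0.85 or 0.90), but the reporting pattern is the same: state the criterion, show the table, note the threshold.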
Higher-order constructs add a reporting twist. For reflective–reflective higher-order constructs, the same measurement-model checks apply, but details can be compressed into fewer tables. For reflective–formative higher-order constructs, reliability and validity metrics used for reflective measures are not reported in the same way; instead, the emphasis shifts to VIF, outer weights, and outer loadings. Bootstrapping is used to obtain significance testing for these higher-order formative components, with the transcript noting bootstrapping recommendations such as 5,000–10,000 iterations (and also showing smaller runs for speed during demonstration).
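SmartPLS runs the bootstrap internally; the procedure it applies — resample cases with replacement, re-estimate the weight each time, and divide the original estimate by the bootstrap standard deviation to get a t-statistic — can be sketched as follows. The data, the simple OLS slope standing in for an outer weight, and the sample size are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: an indicator x contributing to a higher-order component y
x = rng.normal(size=200)
y = 0.3 * x + rng.normal(scale=0.5, size=200)

n, reps = len(x), 5000  # 5,000-10,000 resamples are commonly recommended
boot = np.empty(reps)
for b in range(reps):
    idx = rng.integers(0, n, n)                 # resample cases with replacement
    boot[b] = np.polyfit(x[idx], y[idx], 1)[0]  # re-estimate the weight

est = np.polyfit(x, y, 1)[0]
t_stat = est / boot.std(ddof=1)  # |t| > 1.96 implies significance at the 5% level
ci = np.percentile(boot, [2.5, 97.5])
```

The reported table then lists, per weight, the original estimate, bootstrap standard deviation, t-statistic, and p-value (or confidence interval).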
After measurement-model assessment, the structural model is reported through bootstrapped path coefficients: original sample estimates, standard deviations, t-statistics, and p-values for direct effects. If moderation is included, bootstrapping is run again and moderation results are reported similarly. Mediation results are reported as specific indirect effects. Finally, predictive relevance and explanatory power are documented using Q² (via PLSpredict) alongside R² and f² values, with Q² values greater than zero interpreted as evidence that predictors are relevant for predicting the outcome.
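The two closing statistics follow simple definitions: f² measures how much R² drops when a predictor is excluded, and a PLSpredict-style Q² compares prediction errors against a naive mean benchmark. A sketch with hypothetical R² values and a hypothetical set of observed versus predicted scores:

```python
import numpy as np

# f^2 effect size: change in R^2 when a given predictor is dropped (hypothetical values)
r2_included, r2_excluded = 0.45, 0.38
f2 = (r2_included - r2_excluded) / (1 - r2_included)
# 0.02 / 0.15 / 0.35 are the usual small / medium / large benchmarks

# Q^2: 1 minus prediction error relative to a mean-only benchmark (hypothetical scores)
y = np.array([3.2, 4.1, 2.8, 3.9, 3.5])
y_hat = np.array([3.0, 4.0, 3.1, 3.7, 3.4])
q2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
# q2 > 0 is interpreted as evidence of predictive relevance
```

In the paper, these values close the structural-model section: an R²/f² table per endogenous construct, plus a Q² line confirming predictive relevance.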
Cornell Notes
SmartPLS reporting in research papers should follow a consistent order: (1) briefly describe the analysis procedure within structural equation modeling, (2) optionally document data screening/cleaning if requested, (3) assess the measurement model, and only then (4) report the structural model results. For reflective constructs, the measurement model write-up typically includes outer loadings, VIF (often using < 5 as a guideline), reliability (alpha and composite reliability), convergent validity, and discriminant validity using Fornell–Larcker and/or HTMT. Higher-order constructs require tailored reporting: reflective–reflective can be summarized with the same core checks, while reflective–formative shifts focus toward VIF, outer weights, and outer loadings (with bootstrapping for significance). Structural-model reporting centers on bootstrapped path coefficients, plus mediation/moderation outputs and predictive relevance (Q²), alongside R² and f².
What should appear first in a SmartPLS “Data Analysis and Results” section, and why does that matter for reviewers?
When data screening and cleaning is required, what checks are typically summarized and how can they be condensed?
For reflective measurement models, which statistics are the core reporting set?
How is discriminant validity reported in SmartPLS results, and what are the common options mentioned?
How does reporting change for higher-order constructs, especially reflective–formative combinations?
What structural-model outputs should be reported after measurement-model assessment?
Review Questions
- Which measurement-model statistics would you include for a purely reflective construct, and which ones would you use to justify discriminant validity?
- If your model includes a reflective–formative higher-order construct, what changes in what you report compared with reflective–reflective?
- How do you interpret Q² in SmartPLS reporting, and where do R² and f² fit into the structural-model results section?
Key Points
- 1. Open the results section with a compact description of the SEM workflow, clearly distinguishing measurement model versus structural model outputs.
- 2. Summarize data screening/cleaning only when needed, using short checks for range, skewness/kurtosis, outliers, and common method bias (e.g., VIF or Harman single-factor).
- 3. For reflective constructs, report outer loadings, VIF (with a common guideline of < 5), and reliability (alpha and composite reliability), then present convergent validity.
- 4. Report discriminant validity using Fornell–Larcker and/or HTMT, and ensure the text references the tables and thresholds used.
- 5. Tailor higher-order construct reporting: reflective–reflective can be summarized with the usual reflective metrics, while reflective–formative emphasizes VIF, outer weights, and outer loadings with bootstrapping.
- 6. For structural models, report bootstrapped path coefficients (original sample, standard deviation, t-statistics, p-values), then add mediation/moderation outputs as applicable.
- 7. Close structural reporting with predictive relevance (Q² via PLSpredict) and explanatory power (R² and f²), using the transcript's interpretation that Q² > 0 indicates relevance.