# SmartPLS4 Series - 43 - Report SmartPLS4 Results
Based on the "Research With Fawad" video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
## Briefing
Reporting SmartPLS results in a research paper boils down to a two-stage workflow: document the measurement model first (reliability, convergent validity, discriminant validity), then report the structural model outcomes (path coefficients and hypothesis tests). The practical payoff is that readers can judge whether the constructs were measured well before trusting any claims about relationships among constructs.
For a model with only lower-order constructs, the first step is measurement model assessment: run the PLS-SEM algorithm with the path weighting scheme and the default settings. The key outputs to extract are factor loadings, construct reliability, and convergent validity. Factor loadings can be exported as an image from SmartPLS, but for paper-ready tables they are typically copied into Excel (as a matrix or a list) and reorganized into a clean table. Construct reliability is reported using Cronbach's alpha (α) and composite reliability (CR). Convergent validity is assessed via the Average Variance Extracted (AVE), which indicates whether the indicators share enough variance with their underlying construct.
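The formulas behind these table entries are simple enough to sketch outside SmartPLS. The following Python snippet (illustrative loadings and data, not from the video; SmartPLS computes all of these for you) shows the standard definitions of CR, AVE, and Cronbach's alpha:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    sq_sum = lam.sum() ** 2
    error_var = (1 - lam ** 2).sum()      # error variance of each standardized indicator
    return sq_sum / (sq_sum + error_var)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)   # shape: (n_respondents, k_items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

loadings = [0.82, 0.78, 0.85, 0.74]          # illustrative outer loadings
print(f"CR  = {composite_reliability(loadings):.3f}")       # rule of thumb: >= 0.70
print(f"AVE = {average_variance_extracted(loadings):.3f}")  # rule of thumb: >= 0.50
```

The common rules of thumb are CR ≥ 0.70 and AVE ≥ 0.50; the sketch is only a way to see where the table values come from, since the paper reports the numbers SmartPLS produces.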
Next comes discriminant validity. The transcript highlights two common approaches: the Fornell–Larcker criterion and the HTMT (heterotrait–monotrait) ratio. Both can be reported, though HTMT is described as the more widely used criterion in current practice. In practice, these discriminant validity matrices/values are also exported to Excel and then copied into the manuscript. The transcript notes that cross-loadings are not usually reported, but they may appear in some papers when the reporting style calls for it.
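Both criteria are computed from correlations. As a minimal sketch of the HTMT idea, assuming a hypothetical item correlation matrix and made-up item groupings (SmartPLS reports the HTMT matrix directly):

```python
import numpy as np

def htmt(corr, idx_a, idx_b):
    """HTMT: mean heterotrait (between-construct) item correlation divided by
    the geometric mean of the mean monotrait (within-construct) correlations."""
    corr = np.asarray(corr)
    hetero = corr[np.ix_(idx_a, idx_b)].mean()
    def mean_monotrait(idx):
        sub = corr[np.ix_(idx, idx)]
        off_diag = ~np.eye(len(idx), dtype=bool)   # drop the diagonal of ones
        return sub[off_diag].mean()
    return hetero / np.sqrt(mean_monotrait(idx_a) * mean_monotrait(idx_b))

# Illustrative correlation matrix: items 0-1 measure construct A,
# items 2-3 measure construct B (values are made up for the sketch).
corr = [[1.00, 0.70, 0.30, 0.25],
        [0.70, 1.00, 0.35, 0.30],
        [0.30, 0.35, 1.00, 0.65],
        [0.25, 0.30, 0.65, 1.00]]
ratio = htmt(corr, [0, 1], [2, 3])
print(f"HTMT = {ratio:.3f}")   # values below ~0.85 support discriminant validity
```

The commonly cited thresholds are 0.85 (strict) or 0.90 (liberal); values below the threshold indicate that the two constructs are empirically distinct.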
After the measurement model is documented, the structural model assessment follows. Hypothesis testing is performed via bootstrapping. The transcript recommends 10,000 resamples in general but uses smaller numbers for speed in the example; it also mentions bias-corrected and accelerated (BCa) bootstrapping and one-tailed testing when the direction of a relationship is hypothesized in advance. The structural results to report are the path coefficients (β), standard deviations, t-statistics, and p-values. Each hypothesized relationship should be labeled as supported or not supported based on the p-value threshold (the transcript treats p > 0.05 as non-significant). Mediation results can be reported as specific indirect effects, but only those that fit the study's conceptual scope rather than every possible mediation path.
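The bootstrapping logic can be illustrated outside SmartPLS. In this sketch the data are simulated and a plain OLS slope stands in for the PLS path estimate; the resampling, standard deviation, and t-statistic mirror the columns SmartPLS reports:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(scale=0.9, size=n)   # simulated "true" path of 0.4

def path_coef(x, y):
    return np.polyfit(x, y, 1)[0]             # OLS slope stands in for the path

beta = path_coef(x, y)
boot = []
for _ in range(10_000):                       # 10,000 resamples, per the transcript
    idx = rng.integers(0, n, n)               # resample respondents with replacement
    boot.append(path_coef(x[idx], y[idx]))
sd = np.std(boot, ddof=1)                     # bootstrap standard deviation
t_stat = abs(beta) / sd                       # |beta| / SD, as in the SmartPLS table
print(f"beta = {beta:.3f}, SD = {sd:.3f}, t = {t_stat:.2f}")
```

A t-statistic above roughly 1.96 corresponds to p < 0.05 in a two-tailed test (1.645 for a one-tailed test), which is how each hypothesis ends up marked as supported or not.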
For models that include a higher-order construct—illustrated with University Social Responsibility (USR) as a higher-order construct composed of three subdimensions (ethical responsibility, research and development responsibility, and philanthropic responsibility)—the workflow expands. Lower-order constructs must be validated first (report their loadings, α, CR, and AVE). Then the higher-order construct itself must be validated, with the required metrics depending on whether it is reflective-formative or reflective-reflective. In the example, USR is treated as a reflective-formative higher-order construct, so the transcript calls for reporting the VIF values of the formative indicators, the outer loadings with their p-values, and the t- and p-values of the outer weights. Only after both levels are validated does hypothesis testing proceed, again via bootstrapping, with results reported for the higher-order relationships (e.g., overall and by country samples such as Pakistan and China).
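On the formative side, the VIF check is a collinearity diagnostic among the lower-order component scores. A hedged sketch of the standard VIF computation, using hypothetical simulated scores (SmartPLS reports these values directly):

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    indicator j on the remaining indicators (with an intercept)."""
    X = np.asarray(X, dtype=float)
    values = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(X)), others])   # intercept + other indicators
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        r2 = 1.0 - resid.var() / X[:, j].var()
        values.append(1.0 / (1.0 - r2))
    return values

# Simulated scores for three lower-order constructs feeding a formative
# higher-order construct (hypothetical data, not from the video).
rng = np.random.default_rng(7)
scores = rng.normal(size=(300, 3)) + 0.5 * rng.normal(size=(300, 1))
print([f"{v:.2f}" for v in vif(scores)])   # values below 5 (or a stricter 3) are fine
```

VIF values below 5 (some authors prefer 3) indicate that the formative indicators are not redundantly collinear, which is a precondition for interpreting their outer weights.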
## Cornell Notes
SmartPLS reporting follows a measurement-then-structure sequence. First, extract factor loadings, construct reliability (Cronbach’s alpha and composite reliability), and convergent validity (AVE) from the measurement model. Then establish discriminant validity, commonly using HTMT (and optionally Fornell–Larcker), and report the relevant values in tables. After the measurement model is credible, run bootstrapping to test hypotheses in the structural model, reporting β, standard deviation, t-statistics, and p-values, and noting whether each hypothesis is supported. If a higher-order construct exists (e.g., USR built from subdimensions), validate all lower-order constructs first, validate the higher-order construct using the correct reflective/formative procedure, and only then test structural hypotheses.
What are the minimum measurement-model results that should appear in a paper for a model with only lower-order constructs?
How should discriminant validity be handled in SmartPLS reporting?
What structural-model statistics are required for hypothesis reporting in SmartPLS?
Why does higher-order construct reporting require extra steps beyond lower-order validation?
What additional metrics are mentioned for validating a reflective-formative higher-order construct like USR?
## Review Questions
- In what order should measurement model and structural model results be reported, and what specific metrics belong to each stage?
- When HTMT and Fornell–Larcker both appear in a SmartPLS output, what does each serve to demonstrate about discriminant validity?
- For a higher-order construct built from subdimensions, which constructs must be validated first, and what extra validation is required for the higher-order level?
## Key Points
1. Start with measurement model reporting: factor loadings, Cronbach's alpha (α), composite reliability (CR), and AVE for convergent validity.
2. Establish discriminant validity using HTMT as the primary criterion, with Fornell–Larcker as an optional additional report.
3. Use bootstrapping to test structural hypotheses and report β, standard deviation, t-statistics, and p-values for each path.
4. State hypothesis support explicitly based on p-value thresholds (e.g., p > 0.05 treated as non-significant in the transcript).
5. For mediation, report only the indirect effects that fit the study's conceptual scope rather than every possible mediation path.
6. If a higher-order construct exists, validate all lower-order constructs first, then validate the higher-order construct using the correct reflective/formative procedure before testing hypotheses.
7. When reporting, export SmartPLS outputs and reorganize them into Word-ready tables via Excel for clean presentation of loadings and validity metrics.