6. How to Structure, Format, and Report SmartPLS Results in a Thesis/Dissertation
Based on a video by Research With Fawad on YouTube. If you find this content useful, support the original creator by watching, liking, and subscribing.
Briefing
SmartPLS results in a thesis/dissertation are typically reported in a structured sequence: start with an overview of what the chapter will cover, then document measurement-model quality (reliability and validity), move to model fit and predictive relevance, and finish with hypothesis testing results. The practical payoff is that readers can quickly verify that constructs were measured properly before trusting the structural relationships and effect estimates.
The chapter usually begins with a brief introduction explaining that the analysis first assesses the measurement model—covering reliability and validity—then shifts to the structural model, where hypotheses are tested. Many theses also include respondent demographics and any data issues that matter for interpretation, such as missing data, normality concerns, or common method considerations. From there, the measurement model section is commonly organized to follow a logical checklist: reliability, convergent validity, discriminant validity, and (when applicable) higher-order construct validation.
Reliability reporting often starts with indicator reliability using factor loadings (a minimum loading of about 0.70 is the commonly cited threshold). Internal consistency is then assessed using measures such as Cronbach's alpha and composite reliability. If indicator reliability is weak, items may be deleted, but only when deletion improves key statistics such as composite reliability and average variance extracted (AVE). The transcript emphasizes documenting any item deletions and the rationale, such as high variance inflation factor (VIF) values (noted as above 5 or 10) or low factor loadings (noted as below 0.5 or 0.4), especially when removal meaningfully improves composite reliability and AVE.
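As a rough illustration of the reliability statistics named above, the sketch below computes Cronbach's alpha from raw item scores and composite reliability and AVE from standardized outer loadings. The formulas are the standard PLS-SEM ones; the example loadings are hypothetical, not taken from the transcript.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha from an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability (rho_c) from standardized outer loadings l:
    (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    s = loadings.sum() ** 2
    return s / (s + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float((loadings ** 2).mean())

# Hypothetical standardized loadings for a 4-item construct
l = np.array([0.82, 0.78, 0.75, 0.71])
print(round(composite_reliability(l), 3))       # 0.85
print(round(average_variance_extracted(l), 3))  # 0.587
```

Both results clear the usual benchmarks (composite reliability above 0.7, AVE above 0.5), which is exactly what the thesis tables are meant to document.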
After reliability, convergent validity is reported through AVE, including whether it meets the expected threshold and whether any issues arose. Discriminant validity is then established using multiple approaches available in SmartPLS: the Fornell–Larcker criterion (comparing the square root of AVE with inter-construct correlations), cross-loadings (showing indicators load highest on their intended construct), and HTMT (heterotrait–monotrait ratio), including the threshold used and whether results pass. Tables are expected to be referenced in the text so the narrative and outputs stay aligned.
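To make the discriminant-validity criteria concrete, here is a minimal sketch of the Fornell–Larcker comparison and the HTMT ratio computed from item scores. The HTMT formula follows the standard definition (mean heterotrait correlation over the geometric mean of the average monotrait correlations); all data and values are hypothetical.

```python
import numpy as np

def fornell_larcker_pass(sqrt_ave: float, correlations: list) -> bool:
    """Fornell-Larcker: the square root of a construct's AVE must exceed
    its correlations with every other construct."""
    return all(sqrt_ave > abs(r) for r in correlations)

def htmt(scores_a: np.ndarray, scores_b: np.ndarray) -> float:
    """HTMT for two constructs from item-score matrices
    (n_respondents x k_items each): mean absolute heterotrait correlation
    divided by the geometric mean of the average monotrait
    (within-construct) correlations."""
    ka, kb = scores_a.shape[1], scores_b.shape[1]
    full = np.corrcoef(np.hstack([scores_a, scores_b]), rowvar=False)
    hetero = np.abs(full[:ka, ka:]).mean()
    mono_a = full[:ka, :ka][np.triu_indices(ka, k=1)].mean()
    mono_b = full[ka:, ka:][np.triu_indices(kb, k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Hypothetical check: sqrt(AVE) = 0.766 vs correlations with other constructs
print(fornell_larcker_pass(0.766, [0.45, 0.52]))  # True
```

In the write-up, the HTMT value would be compared against the stated threshold (commonly 0.85 or 0.90) and the table referenced in the text, as the section recommends.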
Structural-model reporting follows once the measurement model is credible. Because SmartPLS emphasizes prediction rather than traditional "goodness-of-fit," the transcript highlights reporting R² (variance explained in endogenous variables), f² (effect size via the change in R² when an exogenous predictor is removed), and Q² (predictive relevance, with the rule of thumb that Q² > 0 indicates relevance). Model fit is commonly summarized using SRMR, with the conventional threshold SRMR < 0.08. Hypothesis results are then presented using path coefficients (beta), standard deviations, t statistics, and p values. Mediation and moderation are handled separately from direct paths: mediation reporting should include total, direct, and indirect effects, along with whether mediation is partial, full, or absent.
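The f² calculation and the mediation decision rule described above can be sketched as follows; the R² values are hypothetical, and the significance flags would come from bootstrapped p values in practice.

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """f^2 = (R2_incl - R2_excl) / (1 - R2_incl): the change in R2 when one
    exogenous predictor is dropped from the model. Common benchmarks:
    0.02 small, 0.15 medium, 0.35 large."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def classify_mediation(indirect_significant: bool, direct_significant: bool) -> str:
    """Decision rule from the section: a significant indirect effect with a
    significant direct effect is partial mediation; without a significant
    direct effect, full mediation; no significant indirect effect means
    no mediation."""
    if not indirect_significant:
        return "no mediation"
    return "partial mediation" if direct_significant else "full mediation"

# Hypothetical endogenous R2 with and without one predictor
print(round(f_squared(0.55, 0.50), 3))  # 0.111
print(classify_mediation(indirect_significant=True, direct_significant=False))
```

Here dropping the predictor lowers R² from 0.55 to 0.50, giving a small-to-medium f² of about 0.11; the second call illustrates the full-mediation case.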
Higher-order constructs add another layer. For reflective–reflective higher-order models, the same reliability and validity criteria apply, while reflective–formative higher-order models require reporting outer weights, their t statistics and p values, outer loadings, and VIF values. The overall pattern—measurement quality first, then predictive and structural results—keeps the thesis defensible and easy to audit.
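Since reflective–formative higher-order models call for VIF values on the formative indicators, a minimal collinearity check can be sketched like this (plain least squares; the indicator data are hypothetical):

```python
import numpy as np

def vif(indicators: np.ndarray, j: int) -> float:
    """VIF of indicator j: regress column j on the remaining columns
    (plus an intercept) and return 1 / (1 - R^2). Values above 5
    (or, more leniently, 10) flag collinearity problems."""
    y = indicators[:, j]
    X = np.delete(indicators, j, axis=1)
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

# Two orthogonal (uncorrelated) hypothetical indicators -> VIF of 1
ind = np.column_stack([np.array([1.0, 2.0, 3.0, 4.0]),
                       np.array([1.0, -1.0, -1.0, 1.0])])
print(round(vif(ind, 1), 3))  # 1.0
```

SmartPLS reports these VIF values directly; the sketch only shows what the number means, so the thesis narrative can explain why a high VIF justified dropping or retaining a formative indicator.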
Cornell Notes
SmartPLS thesis reporting usually follows a consistent order: introduce the chapter, assess the measurement model (reliability and validity), then report structural results (R², f², Q², SRMR), and finally present hypothesis tests. Reliability is documented using factor loadings for indicator reliability plus Cronbach’s alpha and composite reliability; weak indicators can be deleted only when doing so improves composite reliability and AVE. Convergent validity is reported via AVE, while discriminant validity is checked using Fornell–Larcker, cross-loadings, and HTMT with stated thresholds. Structural-model reporting emphasizes prediction: R² for explained variance, f² for effect size, Q² for predictive relevance, and SRMR for model fit. Mediation results should include total, direct, and indirect effects and classify mediation as partial/full/none.
- What is the typical structure of a thesis results chapter when using SmartPLS?
- How should reliability and indicator quality be reported in SmartPLS?
- What are the main ways to demonstrate discriminant validity in SmartPLS?
- What predictive and fit statistics matter most in SmartPLS structural-model reporting?
- How should mediation results be presented compared with direct effects?
- How do reporting requirements change for higher-order constructs (reflective–reflective vs reflective–formative)?
Review Questions
- When writing your measurement model section, what order of reporting (reliability → convergent validity → discriminant validity) best supports a reader’s trust in the constructs?
- Which discriminant validity tests would you report together, and what threshold logic should you state for each?
- For a SmartPLS structural model, how do R², f², and Q² complement each other, and where does SRMR fit in?
Key Points
1. Start the results chapter with a clear overview stating that measurement-model assessment comes first (reliability/validity), followed by structural-model hypothesis testing.
2. Report indicator reliability using factor loadings, then document reliability using Cronbach’s alpha and composite reliability.
3. Justify any item deletions with concrete criteria (e.g., low factor loadings) and show that deletion improves composite reliability and AVE.
4. Demonstrate convergent and discriminant validity using AVE plus multiple discriminant checks (Fornell–Larcker, cross-loadings, and HTMT) with stated thresholds.
5. For structural results, emphasize prediction: report R², f², and Q², and include SRMR as a common model-fit summary.
6. Present hypothesis testing with beta coefficients, standard deviations, t statistics, and p values, and separate mediation/moderation reporting from direct paths.
7. Handle higher-order constructs according to their type: reflective–reflective uses the usual reliability/validity criteria, while reflective–formative requires outer weights, their significance tests, outer loadings, and VIF.