
6. How to Structure, Format, and Report SmartPLS Results in a Thesis/Dissertation

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Start the results chapter with a clear overview stating that measurement-model assessment comes first (reliability/validity), followed by structural-model hypothesis testing.

Briefing

SmartPLS results in a thesis/dissertation are typically reported in a structured sequence: start with an overview of what the chapter will cover, then document measurement-model quality (reliability and validity), move to model fit and predictive relevance, and finish with hypothesis testing results. The practical payoff is that readers can quickly verify that constructs were measured properly before trusting the structural relationships and effect estimates.

The chapter usually begins with a brief introduction explaining that the analysis first assesses the measurement model—covering reliability and validity—then shifts to the structural model, where hypotheses are tested. Many theses also include respondent demographics and any data issues that matter for interpretation, such as missing data, normality concerns, or common method considerations. From there, the measurement model section is commonly organized to follow a logical checklist: reliability, convergent validity, discriminant validity, and (when applicable) higher-order construct validation.

Reliability reporting often starts with indicator reliability via factor loadings (with thresholds discussed as a minimum acceptable loading). Internal consistency is then assessed using Cronbach’s alpha and composite reliability. If indicator reliability is weak, items may be deleted, but only when deletion improves key statistics such as composite reliability and average variance extracted (AVE). The transcript emphasizes documenting any item deletions and the rationale, such as high variance inflation factor (VIF) values (noted as above 5 or 10) or low factor loadings (noted as below 0.5 or 0.4), especially when removal meaningfully improves composite reliability and AVE.
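As a rough illustration of the statistics named above, composite reliability and AVE can be computed directly from standardized outer loadings. The loadings below are hypothetical illustration values, not SmartPLS output:

```python
# Sketch: reliability statistics for one reflective construct,
# computed from its standardized outer loadings (made-up values).

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    errors = sum(1 - l**2 for l in loadings)  # error variance of a standardized indicator
    return s**2 / (s**2 + errors)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l**2 for l in loadings) / len(loadings)

loadings = [0.82, 0.76, 0.71, 0.68]  # hypothetical indicator loadings
print(round(composite_reliability(loadings), 3))       # 0.832
print(round(average_variance_extracted(loadings), 3))  # 0.554
```

Under the usual rules of thumb (CR above 0.7, AVE above 0.5), this hypothetical construct would pass both checks.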

After reliability, convergent validity is reported through AVE, including whether it meets the expected threshold and whether any issues arose. Discriminant validity is then established using multiple approaches available in SmartPLS: the Fornell–Larcker criterion (comparing the square root of AVE with inter-construct correlations), cross-loadings (showing indicators load highest on their intended construct), and HTMT (heterotrait–monotrait ratio), including the threshold used and whether results pass. Tables are expected to be referenced in the text so the narrative and outputs stay aligned.
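The Fornell–Larcker comparison described above reduces to a simple check: the square root of each construct's AVE must exceed that construct's correlations with every other construct. The AVE values and correlation matrix below are hypothetical:

```python
import numpy as np

# Sketch of the Fornell-Larcker criterion with two hypothetical constructs.
ave = {"A": 0.61, "B": 0.55}
corr = np.array([[1.0, 0.42],
                 [0.42, 1.0]])
names = ["A", "B"]

def fornell_larcker_passes(ave, corr, names):
    """True if each construct's sqrt(AVE) exceeds its correlations with all others."""
    for i, n in enumerate(names):
        others = [abs(corr[i, j]) for j in range(len(names)) if j != i]
        if np.sqrt(ave[n]) <= max(others):
            return False
    return True

print(fornell_larcker_passes(ave, corr, names))  # True: 0.78 and 0.74 both exceed 0.42
```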

Structural-model reporting follows once the measurement model is credible. Because SmartPLS emphasizes prediction rather than traditional “goodness-of-fit,” the transcript highlights reporting R² (variance explained in endogenous variables), f² (effect size via the change in R² when an exogenous predictor is removed), and Q² (predictive relevance, with the rule of thumb that Q² > 0 indicates relevance). Model fit is commonly summarized using SRMR, with the conventional threshold SRMR < 0.08. Hypothesis results are then presented using path coefficients (beta), standard deviations, t statistics, and p values. Mediation and moderation are handled separately from direct paths: mediation reporting should include total, direct, and indirect effects, along with whether mediation is partial, full, or absent.
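The f² effect size mentioned above has a simple closed form: the drop in R² when a predictor is removed, scaled by the unexplained variance of the full model. The R² values here are hypothetical:

```python
# Sketch of the f² effect-size formula: f² = (R²_included − R²_excluded) / (1 − R²_included).

def f_squared(r2_included, r2_excluded):
    return (r2_included - r2_excluded) / (1 - r2_included)

f2 = f_squared(0.55, 0.48)  # R² with vs. without one hypothetical predictor
print(round(f2, 3))         # 0.156 — "medium" by the common 0.02/0.15/0.35 rule of thumb
```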

Higher-order constructs add another layer. For reflective–reflective higher-order models, the same reliability and validity criteria apply, while reflective–formative higher-order models require reporting outer weights, their t statistics and p values, outer loadings, and VIF values. The overall pattern—measurement quality first, then predictive and structural results—keeps the thesis defensible and easy to audit.

Cornell Notes

SmartPLS thesis reporting usually follows a consistent order: introduce the chapter, assess the measurement model (reliability and validity), then report structural results (R², f², Q², SRMR), and finally present hypothesis tests. Reliability is documented using factor loadings for indicator reliability plus Cronbach’s alpha and composite reliability; weak indicators can be deleted only when doing so improves composite reliability and AVE. Convergent validity is reported via AVE, while discriminant validity is checked using Fornell–Larcker, cross-loadings, and HTMT with stated thresholds. Structural-model reporting emphasizes prediction: R² for explained variance, f² for effect size, Q² for predictive relevance, and SRMR for model fit. Mediation results should include total, direct, and indirect effects and classify mediation as partial/full/none.

What is the typical structure of a thesis results chapter when using SmartPLS?

A common pattern starts with a short introduction describing what the chapter contains. It then moves to the measurement model—reliability and validity—often after demographics and any data issues (missing data, normality, common method concerns). Next comes structural-model reporting, including R², f², Q², and SRMR. The chapter ends with hypothesis testing results (path coefficients with t statistics and p values), with mediation/moderation reported in separate sections.

How should reliability and indicator quality be reported in SmartPLS?

Reliability reporting typically begins with indicator reliability using factor loadings, then uses Cronbach’s alpha and composite reliability. If an item’s loading is too low, it may be deleted, but deletion should be justified by improved composite reliability and AVE. The transcript notes example decision thresholds such as factor loadings below 0.5 or 0.4 and VIF values above 5 or 10 as reasons to address problematic indicators.

What are the main ways to demonstrate discriminant validity in SmartPLS?

Discriminant validity is commonly assessed using three methods: (1) Fornell–Larcker criterion by checking whether the square root of AVE for each construct exceeds its correlations with other constructs; (2) cross-loadings, where indicators should load highest on their intended construct compared with other constructs; and (3) HTMT (heterotrait–monotrait ratio), reported with the threshold used and whether results meet it.
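The HTMT ratio named in point (3) can be sketched from an item-level correlation matrix: the mean correlation between items of different constructs, divided by the geometric mean of the average within-construct item correlations. The matrix below is synthetic illustration data, not from the video:

```python
import numpy as np

# Sketch of the HTMT ratio for two constructs. Items 0-2 belong to
# construct A, items 3-5 to construct B (synthetic correlations).

def htmt(item_corr, idx_a, idx_b):
    # mean correlation between items of different constructs
    hetero = np.mean([abs(item_corr[i, j]) for i in idx_a for j in idx_b])
    # average within-construct item correlations (off-diagonal only)
    mono_a = np.mean([abs(item_corr[i, j]) for i in idx_a for j in idx_a if i < j])
    mono_b = np.mean([abs(item_corr[i, j]) for i in idx_b for j in idx_b if i < j])
    return hetero / np.sqrt(mono_a * mono_b)

c = np.full((6, 6), 0.3)   # cross-construct item correlations
c[:3, :3] = 0.6            # within-construct A
c[3:, 3:] = 0.5            # within-construct B
np.fill_diagonal(c, 1.0)

value = htmt(c, [0, 1, 2], [3, 4, 5])
print(round(value, 3))     # 0.548, below the commonly cited 0.85 cutoff
```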

What predictive and fit statistics matter most in SmartPLS structural-model reporting?

SmartPLS reporting emphasizes predictive capability. R² quantifies how much variance in endogenous variables is explained. f² measures effect size by examining how much R² changes when an exogenous predictor is removed. Q² indicates predictive relevance, with the rule of thumb that Q² > 0 suggests relevance. For model fit, SRMR is frequently reported, with the conventional threshold SRMR < 0.08.

How should mediation results be presented compared with direct effects?

Mediation is typically separated from direct relationships. The transcript recommends reporting total effect, direct effect (IV on DV while the mediator is present), and indirect effect (IV on DV through the mediator). It also calls for stating whether mediation is partial, full, or none based on the results.
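The partial/full/none decision described here can be expressed as a small helper. This is a hypothetical function, not part of SmartPLS, and it assumes bootstrap p-values for the direct and indirect effects are already in hand:

```python
# Sketch of the mediation classification logic, given significance
# of the direct and indirect effects (hypothetical p-values below).

def classify_mediation(direct_p, indirect_p, alpha=0.05):
    direct_sig = direct_p < alpha
    indirect_sig = indirect_p < alpha
    if indirect_sig and direct_sig:
        return "partial mediation"   # both the path through the mediator and the direct path hold
    if indirect_sig and not direct_sig:
        return "full mediation"      # only the path through the mediator is significant
    return "no mediation"            # the indirect effect is not significant

print(classify_mediation(direct_p=0.03, indirect_p=0.01))  # partial mediation
print(classify_mediation(direct_p=0.40, indirect_p=0.01))  # full mediation
```

In the write-up, this classification accompanies (never replaces) the reported total, direct, and indirect effect estimates.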

How do reporting requirements change for higher-order constructs (reflective–reflective vs reflective–formative)?

For reflective–reflective higher-order constructs, the same reliability and validity criteria apply (alpha/composite reliability, convergent validity via AVE, and discriminant validity such as Fornell–Larcker/HTMT). For reflective–formative higher-order constructs, reporting shifts to formative-specific outputs: higher-order construct details plus lower-order constructs, outer weights with t statistics and p values, outer loadings, and VIF values.
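For the formative VIF check mentioned above, each indicator's VIF is 1 / (1 − R²) from regressing that indicator on the other indicators of the same construct. SmartPLS computes this for you; the sketch below, using synthetic data, only shows the underlying logic:

```python
import numpy as np

# Sketch: VIF per formative indicator, via 1 / (1 - R²) from an
# auxiliary regression of each indicator on the remaining ones.

def vif(X):
    """X: n_samples x n_indicators matrix. Returns one VIF per column."""
    vifs = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)  # make indicator 3 nearly redundant
print([round(v, 1) for v in vif(X)])            # indicators 1 and 3 show VIF far above 5
```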

Review Questions

  1. When writing your measurement model section, what order of reporting (reliability → convergent validity → discriminant validity) best supports a reader’s trust in the constructs?
  2. Which discriminant validity tests would you report together, and what threshold logic should you state for each?
  3. For a SmartPLS structural model, how do R², f², and Q² complement each other, and where does SRMR fit in?

Key Points

  1. Start the results chapter with a clear overview stating that measurement-model assessment comes first (reliability/validity), followed by structural-model hypothesis testing.
  2. Report indicator reliability using factor loadings, then document reliability using Cronbach’s alpha and composite reliability.
  3. Justify any item deletions with concrete criteria (e.g., low factor loadings) and show that deletion improves composite reliability and AVE.
  4. Demonstrate convergent and discriminant validity using AVE plus multiple discriminant checks (Fornell–Larcker, cross-loadings, and HTMT) with stated thresholds.
  5. For structural results, emphasize prediction: report R², f², and Q², and include SRMR as a common model-fit summary.
  6. Present hypothesis testing with beta coefficients, standard deviations, t statistics, and p values, and separate mediation/moderation reporting from direct paths.
  7. Handle higher-order constructs according to their type: reflective–reflective uses the usual reliability/validity criteria, while reflective–formative requires outer weights, their significance tests, outer loadings, and VIF.

Highlights

A defensible SmartPLS thesis results chapter treats measurement quality as a prerequisite: reliability and validity come before structural paths.
Discriminant validity is strengthened by using multiple lenses—Fornell–Larcker, cross-loadings, and HTMT—rather than relying on a single check.
SmartPLS structural reporting prioritizes prediction metrics (R², f², Q²) and uses SRMR as a supporting fit indicator.
Mediation reporting should include total, direct, and indirect effects, plus a classification (partial/full/none) rather than only listing path coefficients.
Higher-order constructs require different reporting depending on whether the higher-order block is reflective–reflective or reflective–formative.

Topics

Mentioned

  • PLS
  • CR
  • AVE
  • HTMT
  • VIF
  • SRMR
  • NFI