How to Report #SmartPLS4 Results in a Research Paper

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Open the results section with a compact description of the SEM workflow, clearly distinguishing measurement model versus structural model outputs.

Briefing

SmartPLS results reporting in a research paper hinges on a clear, repeatable structure: introduce the analysis approach, document any data screening/cleaning, then report measurement-model quality (reliability, convergent validity, discriminant validity) before moving to structural-model outputs (path coefficients, mediation/moderation, and predictive relevance). The payoff is straightforward—reviewers can quickly verify that constructs are measured well and that hypotheses are tested with the right statistics.

The first section of the “Data Analysis and Results” write-up should briefly describe the techniques used within structural equation modeling (without turning into a methods textbook). Structural equation modeling is split into a measurement model and a structural model. The measurement model focuses on construct quality—how well items represent constructs—while the structural model focuses on hypothesis testing—how constructs relate to each other. In practice, this often appears as a short “analysis procedure” paragraph that names the tool (SmartPLS), outlines the workflow, and then points to the specific outputs that will be reported.

A second step appears when reviewers ask for it: data screening and cleaning. This can be summarized in a few lines or a short paragraph. Typical checks include verifying that values fall within acceptable ranges (frequency/min–max), assessing skewness and kurtosis to avoid extreme normality violations, checking outliers via standardized values or box plots, and evaluating common method bias using variance inflation factors (VIF) or tests such as the Harman single-factor test. If issues arise, the write-up should state what was changed or how the dataset was corrected.
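For teams that run these screens outside SmartPLS, here is a minimal Python sketch of the four checks (pandas, scipy, and statsmodels; the file name and item columns are hypothetical placeholders):

```python
# Minimal data-screening sketch; file and column names are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey_data.csv")
items = df[["item1", "item2", "item3"]]  # hypothetical Likert-scale items

# 1) Range check: min/max should stay within the scale (e.g., 1-5)
print(items.agg(["min", "max"]))

# 2) Skewness and kurtosis to flag extreme normality violations
print(items.apply(stats.skew))
print(items.apply(stats.kurtosis))

# 3) Outliers: standardized values beyond |z| > 3
z = items.apply(stats.zscore)
print((z.abs() > 3).sum())

# 4) Collinearity screen (also used in common method bias checks):
#    VIF per item, computed with an added intercept column
X = items.assign(const=1.0)
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print(vif)
```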

Once the measurement model is ready, reporting becomes more systematic. For reflective constructs, the core outputs include outer loadings (item loadings), collinearity diagnostics (often VIF with a common threshold of < 5), construct reliability (alpha, composite reliability), and convergent validity. These results are typically organized into one or a small number of tables, with the text explicitly referencing the tables and any thresholds used.
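SmartPLS computes these quality metrics directly, but the formulas behind composite reliability and AVE are simple enough to verify by hand. A minimal sketch with illustrative (made-up) standardized outer loadings:

```python
# Composite reliability (CR) and average variance extracted (AVE)
# from standardized outer loadings; the values below are illustrative.
import numpy as np

loadings = np.array([0.78, 0.82, 0.75, 0.80])  # hypothetical outer loadings

ave = np.mean(loadings**2)  # convergent validity: AVE >= 0.50 is the usual guide
cr = loadings.sum()**2 / (loadings.sum()**2 + np.sum(1 - loadings**2))  # CR >= 0.70

print(f"AVE = {ave:.3f}, CR = {cr:.3f}")
```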

Discriminant validity follows, commonly reported using the Fornell–Larcker criterion and/or HTMT (the heterotrait–monotrait ratio of correlations). The exact choice depends on what the study reports; the key is to present the criterion and apply the relevant threshold logic (for HTMT, values below 0.85 or 0.90 are the common guidelines).
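To make the Fornell–Larcker logic concrete: the square root of each construct's AVE should exceed that construct's correlations with every other construct. A minimal sketch with hypothetical construct names and values:

```python
# Fornell-Larcker check: sqrt(AVE) per construct vs. inter-construct
# correlations. All names and numbers are hypothetical.
import numpy as np
import pandas as pd

ave = pd.Series({"TRUST": 0.61, "SAT": 0.58, "LOY": 0.64})
corr = pd.DataFrame(
    [[1.00, 0.52, 0.47],
     [0.52, 1.00, 0.55],
     [0.47, 0.55, 1.00]],
    index=ave.index, columns=ave.index,
)

sqrt_ave = np.sqrt(ave)
for c in ave.index:
    others = corr[c].drop(c)
    print(f"{c}: sqrt(AVE)={sqrt_ave[c]:.3f}, "
          f"max corr={others.max():.2f}, pass={bool(sqrt_ave[c] > others.max())}")
```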

Higher-order constructs add a reporting twist. For reflective–reflective higher-order constructs, the same measurement-model checks apply, but details can be compressed into fewer tables. For reflective–formative higher-order constructs, the reliability and validity metrics used for reflective measures are not reported in the same way; instead, the emphasis shifts to VIF, outer weights, and outer loadings. Bootstrapping is used to test the significance of these higher-order formative components, with the transcript recommending 5,000–10,000 bootstrap iterations (while also showing smaller runs for speed during demonstration).
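To illustrate what outer weights capture (a simplified stand-in, not SmartPLS's estimation procedure): in a formative block the indicators jointly predict the construct score, so standardized regression weights play the role that loadings play for reflective items. A sketch on simulated data:

```python
# Outer weights for a formative block, approximated as standardized
# multiple-regression weights of the indicators on the construct score.
# All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))  # three formative indicators
score = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)

Xs = (X - X.mean(0)) / X.std(0)       # standardize indicators
ys = (score - score.mean()) / score.std()
weights, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(np.round(weights, 3))           # illustrative outer weights
```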

After measurement-model assessment, the structural model is reported through bootstrapped path coefficients: original sample estimates, standard deviations, t-statistics, and p-values for direct effects. If moderation is included, bootstrapping is run again and moderation results are reported similarly. Mediation results are reported as specific indirect effects. Finally, predictive relevance and explanatory power are documented using Q² (via PLS predict) and R² and f² values, with Q² values greater than zero interpreted as evidence that predictors are relevant for predicting the outcome.
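A minimal bootstrap sketch for a single direct effect, using an OLS slope on simulated scores as a stand-in for the full PLS algorithm that SmartPLS re-estimates on each subsample; the output mirrors the columns of a typical path-coefficient table:

```python
# Bootstrapped path coefficient: original sample estimate, standard
# deviation, t-statistic, and p-value. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 300
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(scale=0.9, size=n)  # true path of about 0.4

def path(x, y):
    return np.polyfit(x, y, 1)[0]  # slope of y regressed on x

boot = np.empty(5000)  # 5,000 subsamples, per the common recommendation
for b in range(boot.size):
    idx = rng.integers(0, n, n)    # resample cases with replacement
    boot[b] = path(x[idx], y[idx])

est = path(x, y)                   # "original sample" column
sd = boot.std(ddof=1)              # "standard deviation" column
t = est / sd
p = 2 * stats.t.sf(abs(t), df=n - 1)
print(f"original sample={est:.3f}, SD={sd:.3f}, t={t:.2f}, p={p:.4f}")
```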

Cornell Notes

SmartPLS reporting in research papers should follow a consistent order: (1) briefly describe the analysis procedure within structural equation modeling, (2) optionally document data screening/cleaning if requested, (3) assess the measurement model, and only then (4) report the structural model results. For reflective constructs, the measurement model write-up typically includes outer loadings, VIF (often using < 5 as a guideline), reliability (alpha and composite reliability), convergent validity, and discriminant validity using Fornell–Larcker and/or HTMT. Higher-order constructs require tailored reporting: reflective–reflective can be summarized with the same core checks, while reflective–formative shifts focus toward VIF, outer weights, and outer loadings (with bootstrapping for significance). Structural-model reporting centers on bootstrapped path coefficients, plus mediation/moderation outputs and predictive relevance (Q²), alongside R² and f².

What should appear first in a SmartPLS “Data Analysis and Results” section, and why does that matter for reviewers?

Start with a short introduction to the techniques used in structural equation modeling, distinguishing the measurement model from the structural model at a high level. The goal is to orient readers without re-teaching SEM. Then describe the analysis procedure in a compact paragraph: what was run in SmartPLS (e.g., the PLS-SEM algorithm and bootstrapping) and which outputs will be reported next. This helps reviewers quickly map the reported tables to the workflow.

When data screening and cleaning is required, what checks are typically summarized and how can they be condensed?

A reviewer-driven screening section can be limited to four to five lines or one short paragraph. Common checks include: (1) frequency and min–max range checks, (2) skewness and kurtosis to detect extreme normality violations, (3) outlier assessment via standardized values or box plots, and (4) common method bias evaluation using VIF or the Harman single-factor test. If problems are found, the write-up should state what cleaning steps were applied.

For reflective measurement models, which statistics are the core reporting set?

Reflective measurement-model assessment typically reports: outer loadings (item loadings), VIF for collinearity diagnostics (often using a threshold like < 5), and construct reliability (alpha and composite reliability). Convergent validity is reported alongside these reliability metrics. Results are usually organized into one table (or a small set of tables) and referenced explicitly in the text.

How is discriminant validity reported in SmartPLS results, and what are the common options mentioned?

Discriminant validity is commonly reported using the Fornell–Larcker criterion and/or HTMT (heterotrait–monotrait ratio). The transcript notes that some studies report Fornell–Larcker without HTMT, while others include both. The key is to present the criterion values, apply the relevant threshold logic, and reference the table in the narrative.
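For readers who want to see what HTMT actually measures, a minimal sketch: the mean of the between-construct item correlations divided by the geometric mean of the average within-construct item correlations. Construct names and data below are simulated for illustration:

```python
# HTMT for two constructs measured by multiple items each; compare the
# result against the common 0.85 / 0.90 guidelines. Data are simulated.
import numpy as np
import pandas as pd

def htmt(items_a: pd.DataFrame, items_b: pd.DataFrame) -> float:
    corr = pd.concat([items_a, items_b], axis=1).corr().values
    p, q = items_a.shape[1], items_b.shape[1]
    hetero = corr[:p, p:].mean()                            # between constructs
    mono_a = corr[:p, :p][np.triu_indices(p, k=1)].mean()   # within A
    mono_b = corr[p:, p:][np.triu_indices(q, k=1)].mean()   # within B
    return hetero / np.sqrt(mono_a * mono_b)

rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 500))
A = pd.DataFrame({f"a{i}": 0.8 * f1 + rng.normal(scale=0.6, size=500) for i in range(3)})
B = pd.DataFrame({f"b{i}": 0.8 * f2 + rng.normal(scale=0.6, size=500) for i in range(3)})
print(f"HTMT = {htmt(A, B):.3f}")
```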

How does reporting change for higher-order constructs, especially reflective–formative combinations?

For reflective–reflective higher-order constructs, the process largely mirrors reflective reporting (loadings, reliability, validity), but details can be summarized to avoid word overload. For reflective–formative higher-order constructs, the reliability and validity metrics used for reflective measures are not reported the same way; instead, the focus is on VIF, outer weights, and outer loadings for the higher-order formative component. Bootstrapping is used to generate significance tests for these formative parts (with the transcript mentioning 5,000–10,000 iterations as a typical recommendation).

What structural-model outputs should be reported after measurement-model assessment?

Structural-model reporting centers on bootstrapped path coefficients for direct effects: original sample, standard deviation, t-statistics, and p-values. If moderation exists, moderation results are reported using the same bootstrapped logic. Mediation is reported via specific indirect effects. Predictive relevance and explanatory power are then documented using Q² (via PLS predict) and R² and f² values, with Q² greater than zero interpreted as predictors being relevant for predicting the outcome.
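The f² effect size that accompanies R² has a simple closed form, shown below with made-up R² values; Cohen's rough guidelines of 0.02, 0.15, and 0.35 mark small, medium, and large effects:

```python
# f-squared: the change in R2 when one predictor is dropped, scaled by
# the unexplained variance. The R2 values here are illustrative.
r2_included = 0.45  # R2 with the predictor in the model
r2_excluded = 0.38  # R2 after removing that predictor

f2 = (r2_included - r2_excluded) / (1 - r2_included)
print(f"f2 = {f2:.3f}")  # ~0.13 here: a small-to-medium effect
```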

Review Questions

  1. Which measurement-model statistics would you include for a purely reflective construct, and which ones would you use to justify discriminant validity?
  2. If your model includes a reflective–formative higher-order construct, what changes in what you report compared with reflective–reflective?
  3. How do you interpret Q² in SmartPLS reporting, and where do R² and f² fit into the structural-model results section?

Key Points

  1. Open the results section with a compact description of the SEM workflow, clearly distinguishing measurement-model from structural-model outputs.
  2. Summarize data screening/cleaning only when needed, using short checks for range, skewness/kurtosis, outliers, and common method bias (e.g., VIF or the Harman single-factor test).
  3. For reflective constructs, report outer loadings, VIF (with a common guideline of < 5), and reliability (alpha and composite reliability), then present convergent validity.
  4. Report discriminant validity using Fornell–Larcker and/or HTMT, and ensure the text references the tables and thresholds used.
  5. Tailor higher-order construct reporting: reflective–reflective can be summarized with the usual reflective metrics, while reflective–formative emphasizes VIF, outer weights, and outer loadings with bootstrapping.
  6. For structural models, report bootstrapped path coefficients (original sample, standard deviation, t-statistics, p-values), then add mediation/moderation outputs as applicable.
  7. Close structural reporting with predictive relevance (Q² via PLS predict) and explanatory power (R² and f²), interpreting Q² > 0 as evidence of predictive relevance.

Highlights

A clean reporting order matters: measurement-model quality checks come before structural-model hypothesis testing outputs.
Discriminant validity can be handled with Fornell–Larcker and/or HTMT; the write-up should match what was actually computed.
Higher-order constructs change the reporting emphasis—reflective–formative models shift attention toward VIF, outer weights, and outer loadings rather than reflective reliability/validity metrics.
Predictive relevance is assessed with Q² (via PLS predict), and Q² values greater than zero are treated as evidence of relevance.

Topics

Mentioned

  • PLS
  • SEM
  • VIF
  • HTMT
  • PLS predict