
#SmartPLS4 Series - 43 - Report SmartPLS4 Results

Research With Fawad·
5 min read

Based on Research With Fawad's video on YouTube.

TL;DR

Start with measurement model reporting: factor loadings, Cronbach’s alpha (α), composite reliability (CR), and AVE for convergent validity.

Briefing

Reporting SmartPLS results in a research paper boils down to a two-stage workflow: document the measurement model first (reliability, convergent validity, discriminant validity), then report the structural model outcomes (path coefficients and hypothesis tests). The practical payoff is that readers can judge whether the constructs were measured well before trusting any claims about relationships among constructs.

For a model with only lower-order constructs, the first step is measurement model assessment: run the PLS-SEM algorithm (path weighting scheme, default settings). The key outputs to extract are factor loadings, construct reliability, and convergent validity. Factor loadings can be exported from SmartPLS as an image, but for paper-ready tables they are typically copied into Excel (as a matrix or a list) and reorganized into a clean table. Construct reliability is reported using Cronbach’s alpha (α) and composite reliability (CR). Convergent validity is assessed via the average variance extracted (AVE), which the transcript’s auto-captions render simply as “a”; AVE is the construct-validity metric that indicates whether indicators share enough variance with their underlying construct.
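To make the reliability and convergent-validity metrics concrete, here is a minimal Python sketch that computes CR and AVE from standardized outer loadings. The loading values and function names are hypothetical for illustration, not SmartPLS output:

```python
# Sketch: composite reliability (CR) and AVE from standardized
# outer loadings. Values below are made-up illustrative numbers.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances).
    Assumes standardized indicators, so error variance = 1 - loading^2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.82, 0.78, 0.85, 0.74]  # hypothetical indicator loadings
print(f"CR  = {composite_reliability(loadings):.3f}")
print(f"AVE = {ave(loadings):.3f}")
```

Common rules of thumb in PLS-SEM reporting are CR ≥ 0.70 and AVE ≥ 0.50; both hold for these example loadings.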

Next comes discriminant validity. The transcript highlights two common approaches: the Fornell–Larcker criterion and the HTMT (heterotrait–monotrait) ratio. Both can be reported, though HTMT is described as the more widely used criterion in current practice. In practice, these discriminant validity matrices/values are also exported to Excel and then copied into the manuscript. The transcript notes that cross-loadings are not usually reported, but they may appear in some papers when the reporting style calls for it.
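For intuition about what HTMT measures, it can be sketched from an item correlation matrix: the average between-construct (heterotrait) correlation divided by the geometric mean of the average within-construct (monotrait) correlations. A minimal sketch with made-up correlations; in practice SmartPLS reports HTMT directly, with values below roughly 0.85 taken to indicate discriminant validity:

```python
# Sketch: HTMT ratio for two constructs from an item correlation
# matrix R. Correlations are hypothetical, not from the video.
import numpy as np

def htmt(R, idx1, idx2):
    """Heterotrait-monotrait ratio for constructs whose indicators
    sit at index lists idx1 and idx2 of correlation matrix R."""
    hetero = R[np.ix_(idx1, idx2)].mean()  # between-construct mean
    # within-construct means over the off-diagonal upper triangle
    mono1 = R[np.ix_(idx1, idx1)][np.triu_indices(len(idx1), k=1)].mean()
    mono2 = R[np.ix_(idx2, idx2)][np.triu_indices(len(idx2), k=1)].mean()
    return hetero / np.sqrt(mono1 * mono2)

# Items 0-1 measure construct A, items 2-3 measure construct B.
R = np.array([
    [1.0, 0.7, 0.3, 0.3],
    [0.7, 1.0, 0.3, 0.3],
    [0.3, 0.3, 1.0, 0.7],
    [0.3, 0.3, 0.7, 1.0],
])
print(f"HTMT(A, B) = {htmt(R, [0, 1], [2, 3]):.3f}")  # 0.3 / 0.7 ≈ 0.429
```

Here the between-construct correlations (0.3) are clearly weaker than the within-construct ones (0.7), so HTMT falls well below the 0.85 threshold.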

After the measurement model is documented, the structural model assessment follows. Hypothesis testing is performed via bootstrapping. The transcript recommends 10,000 resamples in general but uses smaller numbers for speed in the example; it also mentions bias-corrected and accelerated (BCa) bootstrapping and one-tailed testing when the direction of relationships is known. The structural results to report are the path coefficients (β), standard deviations, t-statistics, and p-values. Each hypothesized relationship should be labeled as supported or not based on the p-value threshold (the transcript treats p > 0.05 as non-significant). Mediation results can be reported as specific indirect effects, but only those that fit the study’s conceptual scope rather than every possible mediation path.
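The bootstrapping logic behind the reported β, SD, t, and p can be sketched on a toy X → Y regression. This is plain percentile resampling on synthetic data (BCa, which SmartPLS offers, refines this idea); SmartPLS itself repeats the whole PLS estimation on each resample:

```python
# Sketch: bootstrapping a single path coefficient on synthetic data,
# showing where the reported beta, SD, and t-statistic come from.
import random
import statistics

random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.4 * xi + random.gauss(0, 1) for xi in x]  # true path ~0.4

def beta(xs, ys):
    """Slope of the simple regression of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

b_hat = beta(x, y)
boot = []
for _ in range(2000):  # the video recommends 10,000 in real reports
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(beta([x[i] for i in idx], [y[i] for i in idx]))

sd = statistics.stdev(boot)       # bootstrap standard deviation
t = b_hat / sd                    # t-statistic as shown in SmartPLS output
print(f"beta = {b_hat:.3f}, SD = {sd:.3f}, t = {t:.2f}")
```

With a two-tailed test, |t| above roughly 1.96 corresponds to p < 0.05, which is the supported/not-supported call the transcript asks for.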

For models that include a higher-order construct (the example uses University Social Responsibility, USR, composed of three subdimensions: ethical responsibility, research and development responsibility, and philanthropic responsibility), the workflow expands. Lower-order constructs must be validated first (report their loadings, α, CR, and AVE). Then the higher-order construct itself must be validated, with the required metrics depending on whether it is reflective-formative or reflective-reflective. In the example, USR is treated as a reflective-formative higher-order construct, so the transcript calls for reporting VIF, outer loadings with their p-values, and t- and p-values for the outer weights (the auto-captions garble these as “aovs” and “out of wids”). Only after both levels are validated does hypothesis testing proceed, again via bootstrapping, with results reported for the higher-order relationships (e.g., overall and by country samples such as Pakistan and China).

Cornell Notes

SmartPLS reporting follows a measurement-then-structure sequence. First, extract factor loadings, construct reliability (Cronbach’s alpha and composite reliability), and convergent validity (AVE) from the measurement model. Then establish discriminant validity, commonly using HTMT (and optionally Fornell–Larcker), and report the relevant values in tables. After the measurement model is credible, run bootstrapping to test hypotheses in the structural model, reporting β, standard deviation, t-statistics, and p-values, and noting whether each hypothesis is supported. If a higher-order construct exists (e.g., USR built from subdimensions), validate all lower-order constructs first, validate the higher-order construct using the correct reflective/formative procedure, and only then test structural hypotheses.

What are the minimum measurement-model results that should appear in a paper for a model with only lower-order constructs?

Report factor loadings, construct reliability, and convergent validity. Factor loadings are exported from SmartPLS and reorganized into a table (often via Excel rather than pasted directly into Word). Construct reliability is reported using Cronbach’s alpha (α) and composite reliability (CR). Convergent validity is reported using AVE (which the transcript’s captions render as “a”), assessed alongside reliability as part of construct validity.

How should discriminant validity be handled in SmartPLS reporting?

Use discriminant validity criteria and report the resulting values. The transcript highlights HTMT (heterotrait–monotrait ratio) as the commonly used approach today, and also mentions Fornell–Larcker as a previously used criterion. Both can be copied into Excel and then into the manuscript. Cross-loadings are generally not reported, but the transcript notes they may be included in certain papers for presentation purposes.

What structural-model statistics are required for hypothesis reporting in SmartPLS?

After bootstrapping, report each hypothesized path’s β (path coefficient), standard deviation, t-statistic, and p-value. The transcript also emphasizes stating whether each hypothesis is supported based on the p-value threshold (it treats p > 0.05 as non-significant). Mediation should be reported as specific indirect effects that match the study’s conceptual scope rather than every possible mediation path.

Why does higher-order construct reporting require extra steps beyond lower-order validation?

Because the higher-order construct must be validated separately from its subdimensions. The transcript’s example treats USR as a higher-order construct made of three subdimensions. The workflow is: validate all lower-order constructs first (loadings, α, CR, AVE), then validate the higher-order construct using the correct reflective/formative method, and only then run bootstrapping to test hypotheses at the higher-order level.

What additional metrics are mentioned for validating a reflective-formative higher-order construct like USR?

The transcript lists VIF and outer loadings with their p-values, plus t-statistics and p-values for the outer weights (the auto-captions garble these as “aovs” and “out of wids”). These are reported after validating the lower-order constructs and before hypothesis testing.
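The VIF check itself is straightforward to sketch: regress each formative indicator on the others and compute 1/(1 − R²). The data below are synthetic and illustrative only; SmartPLS reports these values directly:

```python
# Sketch: variance inflation factors (VIF) among formative indicators,
# as required for a reflective-formative higher-order construct.
# A VIF below 5 (ideally below 3) is the usual collinearity guideline.
import numpy as np

rng = np.random.default_rng(0)
n = 300
z = rng.normal(size=n)
X = np.column_stack([
    z + rng.normal(size=n),   # indicator 1
    z + rng.normal(size=n),   # indicator 2 (shares variance with 1)
    rng.normal(size=n),       # indicator 3 (independent)
])

def vif(X, j):
    """VIF_j = 1 / (1 - R^2) from regressing indicator j on the others."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])  # add intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - (y - A @ coef).var() / y.var()
    return 1 / (1 - r2)

for j in range(X.shape[1]):
    print(f"indicator {j + 1}: VIF = {vif(X, j):.2f}")
```

Indicators 1 and 2 deliberately share variance, so their VIFs sit above 1, while the independent indicator 3 stays near 1; all are well under the threshold.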

Review Questions

  1. In what order should measurement model and structural model results be reported, and what specific metrics belong to each stage?
  2. When HTMT and Fornell–Larcker both appear in a SmartPLS output, what does each serve to demonstrate about discriminant validity?
  3. For a higher-order construct built from subdimensions, which constructs must be validated first, and what extra validation is required for the higher-order level?

Key Points

  1. Start with measurement model reporting: factor loadings, Cronbach’s alpha (α), composite reliability (CR), and AVE for convergent validity.
  2. Establish discriminant validity using HTMT as the primary criterion, with Fornell–Larcker as an optional additional report.
  3. Use bootstrapping to test structural hypotheses and report β, standard deviation, t-statistics, and p-values for each path.
  4. State hypothesis support explicitly based on p-value thresholds (e.g., p > 0.05 treated as non-significant in the transcript).
  5. For mediation, report only the indirect effects that fit the study’s conceptual scope rather than every possible mediation path.
  6. If a higher-order construct exists, validate all lower-order constructs first, then validate the higher-order construct using the correct reflective/formative procedure before testing hypotheses.
  7. When reporting, export SmartPLS outputs and reorganize them into Word-ready tables via Excel for clean presentation of loadings and validity metrics.

Highlights

A paper-ready SmartPLS report follows a strict sequence: measurement model (reliability/validity) first, then structural model (bootstrapped hypothesis tests).
HTMT is presented as the most commonly used discriminant validity criterion today, with Fornell–Larcker available as a complementary option.
Higher-order constructs require two validation layers: validate subdimensions first, then validate the higher-order construct itself before running hypothesis tests.
Structural hypothesis results should include β, standard deviation, t-statistics, and p-values, plus a clear supported/unsupported call for each hypothesis.

Mentioned

  • PLS-SEM
  • SCM
  • AVE
  • CR
  • HTMT
  • BCa
  • VIF