CBSEM using #SmartPLS4 | 12 | Report Measurement Model Results
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Reporting a SmartPLS4 measurement model isn’t just about listing numbers—it’s about presenting fit, reliability, validity, and discriminant checks in a structured way that matches accepted thresholds. After running the model (via Calculate Basic Algorithm), the results should be reported using standardized outer loadings, a set of model-fit indices, reliability metrics (Cronbach’s alpha and composite reliability), convergent validity via AVE, and discriminant validity using both the Fornell–Larcker criterion and HTMT.
The first reporting block is model fit. The transcript recommends reporting a confirmatory factor analysis-style fit summary using SEM-style indices: SRMR, GFI, CFI, TLI, RMSEA, and the ratio of chi-square to degrees of freedom (often written as χ²/df). The write-up should state that the measurement model was assessed in SmartPLS4 and that fit measures were evaluated against commonly accepted acceptance levels. If any items were removed due to low factor loadings, the report should explicitly note which item codes were removed and why; otherwise, it should proceed with the retained items.
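The threshold comparison described above can be sketched as a small check. This is a minimal illustration, not SmartPLS4 output: the observed values are made up, and the cutoffs (χ²/df < 3, GFI/CFI/TLI > 0.90, SRMR < 0.08, RMSEA < 0.08) are one common convention among several in the literature.

```python
# Illustrative fit values paired with hedged, commonly cited decision rules.
fit = {
    "chi2/df": (2.41,  lambda v: v < 3.0),
    "GFI":     (0.93,  lambda v: v > 0.90),
    "CFI":     (0.95,  lambda v: v > 0.90),
    "TLI":     (0.94,  lambda v: v > 0.90),
    "SRMR":    (0.052, lambda v: v < 0.08),
    "RMSEA":   (0.061, lambda v: v < 0.08),
}

def fit_summary(fit):
    """Return {index: 'acceptable' or 'poor'} for the reporting table."""
    return {name: ("acceptable" if rule(value) else "poor")
            for name, (value, rule) in fit.items()}

print(fit_summary(fit))
```

In a write-up, each row of the resulting fit table would list the index, the observed value, the threshold used, and the verdict.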
Next comes the measurement quality of the constructs through factor loadings and reliability. Outer loadings should be reported using standardized metrics. The transcript uses a practical decision rule: items aren’t automatically deleted just because loadings fall below a “typical” 0.70 benchmark; deletion should happen only if removing an item improves reliability and validity statistics. In the example, loadings are already above the required threshold (shown as green in SmartPLS4), so no items are removed.
Reliability is then established using Cronbach’s alpha and composite reliability. The transcript emphasizes that both should exceed the commonly used 0.70 threshold. SmartPLS4 reports composite reliability in two forms (rho_a and rho_c); either or both can be tabulated, but the key claim is that construct-level reliability meets the acceptance limit.
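The two reliability metrics can be computed from first principles as a sanity check on reported values. This is a sketch with made-up item scores and loadings, not data from the transcript: Cronbach’s alpha uses raw item scores, while composite reliability (the rho_c form) uses standardized outer loadings.

```python
from statistics import variance

def cronbach_alpha(items):
    """items: list of per-item score lists (same respondents, same order)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]       # sum score per respondent
    item_var = sum(variance(scores) for scores in items)   # sum of item variances
    return (k / (k - 1)) * (1 - item_var / variance(totals))

def composite_reliability(loadings):
    """rho_c from standardized outer loadings."""
    s = sum(loadings)
    error = sum(1 - l**2 for l in loadings)                # residual variance per item
    return s**2 / (s**2 + error)

items = [[4, 5, 3, 4, 2],   # hypothetical item 1, five respondents
         [4, 4, 3, 5, 2],   # item 2
         [5, 5, 2, 4, 3]]   # item 3
print(cronbach_alpha(items))                      # ~0.886, above 0.70
print(composite_reliability([0.78, 0.82, 0.75]))  # ~0.827, above 0.70
```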
Convergent validity follows under the umbrella of construct validity. Average Variance Extracted (AVE) is the metric used, with the standard requirement that AVE should be above 0.50. Once AVE meets that threshold for each construct, convergent validity is considered established.
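AVE has a simple definition that makes the 0.50 rule easy to verify by hand: it is the mean of the squared standardized loadings. The loadings below are illustrative, not from the transcript.

```python
# AVE = mean of squared standardized outer loadings for a construct.
def ave(loadings):
    return sum(l**2 for l in loadings) / len(loadings)

loadings = [0.78, 0.82, 0.75]    # hypothetical standardized loadings
print(ave(loadings))             # ~0.614
print(ave(loadings) > 0.50)      # convergent validity supported
```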
Finally, discriminant validity is checked using two approaches. Under the Fornell–Larcker criterion, discriminant validity is supported when the square root of each construct’s AVE exceeds that construct’s correlations with all other constructs. Under the heterotrait–monotrait ratio of correlations (HTMT), discriminant validity is supported when all HTMT values fall below the 0.85 limit. The transcript recommends presenting the Fornell–Larcker and HTMT results in separate tables (e.g., Table 3 for Fornell–Larcker and Table 4 for HTMT), along with a short note clarifying that the bold/italic diagonal values are the square roots of the AVEs.
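Both decision rules reduce to simple comparisons once the matrices are in hand. The sketch below uses a hypothetical three-construct example (AVEs, correlations, and HTMT values are all made up); it checks the tables rather than computing HTMT from raw data.

```python
import math

# Hypothetical three-construct example; all values are illustrative.
ave = {"A": 0.61, "B": 0.58, "C": 0.66}
corr = {("A", "B"): 0.54, ("A", "C"): 0.48, ("B", "C"): 0.50}
htmt = {("A", "B"): 0.63, ("A", "C"): 0.57, ("B", "C"): 0.60}

def fornell_larcker_ok(ave, corr):
    # sqrt(AVE) of each construct must exceed every correlation involving it
    return all(math.sqrt(ave[a]) > r and math.sqrt(ave[b]) > r
               for (a, b), r in corr.items())

def htmt_ok(htmt, limit=0.85):
    # every HTMT ratio must fall below the limit
    return all(r < limit for r in htmt.values())

print(fornell_larcker_ok(ave, corr))  # True in this example
print(htmt_ok(htmt))                  # True in this example
```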
Overall, the reporting template is clear: one table for fit indices, one for outer loadings plus reliability and AVE, and two tables for discriminant validity (Fornell–Larcker and HTMT). The transcript also advises copying values from SmartPLS4 into Excel for clean table formatting, since direct copy into Word may produce unstructured text.
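One way to get clean tables into Excel, consistent with the advice above, is to write the values to a CSV file that Excel opens directly. This is a sketch with illustrative values (the construct, item codes, and numbers are made up), building the loadings/reliability/AVE table as CSV text.

```python
import csv
import io

# Illustrative outer-loadings table with reliability and AVE columns;
# construct-level statistics appear once per construct.
rows = [
    ["Construct", "Item", "Loading", "Alpha", "CR", "AVE"],
    ["Trust", "TR1", 0.78, 0.89, 0.83, 0.61],
    ["Trust", "TR2", 0.82, "", "", ""],
    ["Trust", "TR3", 0.75, "", "", ""],
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
table_csv = buf.getvalue()
print(table_csv)
```

Saving `table_csv` to a `.csv` file and opening it in Excel preserves the column structure, avoiding the unstructured text that a direct copy into Word can produce.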
Cornell Notes
SmartPLS4 measurement-model reporting should be organized around four validation layers: model fit, factor loadings, reliability/validity, and discriminant validity. Fit is summarized with SEM-style indices such as χ²/df, GFI, CFI, TLI, SRMR, and RMSEA, reported alongside the acceptance thresholds used. Measurement quality is documented with standardized outer loadings, then construct reliability using Cronbach’s alpha and composite reliability (both typically required to exceed 0.70). Convergent validity is confirmed through AVE, which should be above 0.50. Discriminant validity is established using both Fornell–Larcker (square root of AVE greater than cross-construct correlations) and HTMT ratio (all ratios below 0.85), usually placed in separate tables.
- What set of statistics should appear first when reporting a SmartPLS4 measurement model?
- How should factor loadings be handled if some are below 0.70?
- Which metrics establish construct reliability, and what threshold is used?
- How is convergent validity demonstrated in this reporting template?
- What two methods are used for discriminant validity, and what are their decision rules?
Review Questions
- Which fit indices (names) are recommended for the model-fit reporting section, and what does χ²/df represent in that list?
- Under what conditions should an item be deleted when its factor loading is below 0.70?
- What exact thresholds are used for AVE (convergent validity) and HTMT ratio (discriminant validity) in this template?
Key Points
1. Report model fit using χ²/df, GFI, CFI, TLI, SRMR, and RMSEA, and state that values were evaluated against accepted thresholds.
2. Use standardized outer loadings and list any removed items by item code when low loadings trigger deletion.
3. Do not delete items just because loadings are below 0.70; remove items only if doing so improves reliability and validity outcomes.
4. Establish construct reliability with both Cronbach’s alpha and composite reliability, typically requiring values above 0.70.
5. Confirm convergent validity using AVE, requiring AVE values above 0.50.
6. Establish discriminant validity with both Fornell–Larcker (square root of AVE exceeds cross-construct correlations) and HTMT (all ratios below 0.85).
7. Present discriminant validity results in separate tables for Fornell–Larcker and HTMT, with a note clarifying what the bold/italic values represent.