CBSEM using #SmartPLS4 | 9 | Understand and Interpret Construct Reliability and Convergent Validity
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Reliability and convergent validity are the next checkpoint after factor loadings and model fit in a SmartPLS measurement model. Construct reliability asks whether the indicators for a latent construct behave consistently—producing stable results across occasions for the same construct—while convergent validity checks whether multiple items that are supposed to measure the same concept actually move together.
Construct reliability is typically assessed using two statistics: composite reliability and Cronbach’s alpha. Both are interpreted against commonly used benchmarks (the transcript cites “Hair and Bernstein,” though the 0.70 threshold for “modest reliability” is usually attributed to Nunnally and Bernstein). In practice, once the measurement model’s fit and loadings look acceptable, the analysis shifts to these reliability coefficients to confirm that the set of items for each construct is internally consistent. If composite reliability and Cronbach’s alpha both clear the 0.70 benchmark, the construct can be treated as reliable.
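As a minimal sketch of how these two coefficients are computed (the loading values below are illustrative, not taken from the video), Cronbach’s alpha works on raw item scores, while composite reliability works on standardized loadings:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances],
    assuming standardized loadings so each error variance is 1 - loading^2."""
    s = loadings.sum()
    error = (1.0 - loadings ** 2).sum()
    return s ** 2 / (s ** 2 + error)

# Hypothetical standardized loadings for one construct:
loadings = np.array([0.78, 0.81, 0.74, 0.69])
print(round(composite_reliability(loadings), 3))  # 0.842, above the 0.70 benchmark
```

Both coefficients estimate internal consistency, but composite reliability weights items by their loadings, which is why SmartPLS reports the two side by side.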
Convergent validity then evaluates whether the items for a construct truly converge on the underlying latent variable. The transcript frames this as a degree of agreement among theoretically related indicators—for example, multiple items intended to measure job satisfaction should be statistically related to one another because they reflect the same latent concept. In SmartPLS, convergent validity is assessed using the Average Variance Extracted (AVE). AVE represents how much of the variance in the observed indicators can be explained by the latent variable; an AVE greater than 0.50 is used as empirical evidence that the construct captures more than half of the indicators’ variance, supporting convergent validity.
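The AVE calculation itself is simple: under the usual assumption of standardized loadings, it is the mean of the squared loadings. A small sketch (the loading values are illustrative, not from the video):

```python
import numpy as np

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE: mean squared standardized loading, i.e. the average share of
    indicator variance explained by the latent construct."""
    return float((loadings ** 2).mean())

# Hypothetical standardized loadings for a job-satisfaction construct:
loadings = np.array([0.78, 0.81, 0.74, 0.69])
print(round(average_variance_extracted(loadings), 3))  # 0.572, above the 0.50 cut-off
```

An AVE above 0.50 means the construct explains more variance in its indicators than measurement error does, which is the logic behind the threshold.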
In the SmartPLS workflow described, the process begins with running a basic CB-SEM algorithm (the transcript says “calculate basic CVM algorithm,” likely a mistranscription of CB-SEM), followed by a measurement-model check. Fit statistics are reviewed first: the chi-square p-value is reported as significant, which is common at larger sample sizes; the fit index is described as approaching 90, i.e., roughly 0.90, a satisfactory level for indices such as CFI; and the chi-square divided by degrees of freedom is expected to fall in a reasonable range, between 2 and 5.
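The χ²/df screening step can be expressed as a small helper. This is a sketch of the transcript’s 2–5 benchmark only; the function name and example values are illustrative:

```python
def chi_square_df_ratio_ok(chi_square: float, df: int,
                           lo: float = 2.0, hi: float = 5.0) -> bool:
    """Return True when chi-square / degrees of freedom falls in the
    2-5 range the transcript treats as reasonable."""
    return lo <= chi_square / df <= hi

# e.g. a chi-square of 240 on 80 degrees of freedom gives a ratio of 3.0
print(chi_square_df_ratio_ok(240.0, 80))  # True
```

Note that some texts use stricter cut-offs (e.g., χ²/df < 3), so the bounds are parameters rather than constants.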
After confirming that the model fit and loadings are acceptable (with a note that some loadings may be problematic), the reliability and convergent validity results are checked for two constructs labeled CC and OP. For both constructs, Cronbach’s alpha is reported as above 0.70, and composite reliability is also above 0.70. AVE is reported as above 0.50 as well, leading to the conclusion that both constructs are reliable and show convergent validity.
Discriminant validity is flagged as the remaining requirement for full construct validity, but it is deferred to the next session. The practical takeaway is clear: in SmartPLS measurement-model assessment, reliability (Cronbach’s alpha and composite reliability) and convergent validity (AVE) provide the evidence that the indicators consistently and collectively represent their intended latent constructs before moving on to discriminant validity.
Cornell Notes
Construct reliability and convergent validity come after checking factor loadings and measurement-model fit in SmartPLS. Reliability is assessed with Cronbach’s alpha and composite reliability, using 0.70 as a benchmark for modest reliability (the transcript’s “Hair and Bernstein” guideline, usually attributed to Nunnally and Bernstein). Convergent validity is assessed with Average Variance Extracted (AVE), where AVE > 0.50 indicates the latent construct explains more than half of the variance in its indicators. In the example with constructs CC and OP, both constructs meet reliability thresholds (alpha and composite reliability above 0.70) and meet convergent validity (AVE above 0.50). Discriminant validity is treated as the next step in a later session.
What does construct reliability mean in measurement-model terms, and how is it quantified in SmartPLS?
How does convergent validity differ from reliability, and what statistic is used to test it?
What does an AVE value greater than 0.50 imply about the latent construct’s relationship to its indicators?
What is the practical SmartPLS workflow order for measurement-model assessment described here?
In the example with constructs CC and OP, what results were used to claim reliability and convergent validity?
Review Questions
- Why are Cronbach’s alpha and composite reliability both used to assess construct reliability, and what threshold is applied?
- What does AVE measure, and why is AVE > 0.50 considered evidence of convergent validity?
- After checking factor loadings and model fit, what are the next two construct-level validity steps before discriminant validity?
Key Points
1. Construct reliability in SmartPLS is assessed using Cronbach’s alpha and composite reliability, with 0.70 used as a benchmark for modest reliability.
2. Convergent validity is assessed using Average Variance Extracted (AVE), with AVE > 0.50 indicating the latent construct explains more than half of indicator variance.
3. Reliability and convergent validity are checked after factor loadings and measurement-model fit are reviewed.
4. Model fit is evaluated using criteria such as a significant chi-square p-value, a fit index approaching 0.90, and chi-square/df in the 2–5 range.
5. In the example, both CC and OP meet reliability thresholds (alpha and composite reliability above 0.70) and convergent validity (AVE above 0.50).
6. Discriminant validity is required for full construct validity but is handled in a later session.