
CBSEM using #SmartPLS4 | 9 | Understand and Interpret Construct Reliability and Convergent Validity

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Construct reliability in SmartPLS is assessed using Cronbach’s alpha and composite reliability, with 0.70 used as a benchmark for modest reliability.

Briefing

Reliability and convergent validity are the next checkpoint after factor loadings and model fit in a SmartPLS measurement model. Construct reliability asks whether the indicators for a latent construct behave consistently—producing stable results across occasions for the same construct—while convergent validity checks whether multiple items that are supposed to measure the same concept actually move together.

Construct reliability is typically assessed using two statistics: composite reliability and Cronbach’s alpha. Both are interpreted against commonly used benchmarks from Hair and Bernstein, with 0.70 treated as a threshold for “modest reliability.” In practice, once the measurement model’s fit and loadings look acceptable, the analysis shifts to these reliability coefficients to confirm that the set of items for each construct is internally consistent. If composite reliability and Cronbach’s alpha both clear the 0.70 benchmark, the construct can be treated as reliable.
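To make the two coefficients concrete, here is a minimal Python sketch: Cronbach's alpha computed from raw item scores, and composite reliability computed from standardized loadings via the standard formula (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The loadings and data are hypothetical illustrations, not values from the video.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of raw item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """loadings: standardized factor loadings for one construct."""
    sum_l = loadings.sum()
    error = (1 - loadings**2).sum()               # per-indicator error variance
    return sum_l**2 / (sum_l**2 + error)

# Hypothetical standardized loadings for a four-item construct
loadings = np.array([0.78, 0.81, 0.74, 0.69])
print(composite_reliability(loadings))            # clears the 0.70 benchmark
```

Reporting both coefficients is common practice because alpha assumes equal loadings across items, while composite reliability weights items by their actual loadings.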

Convergent validity then evaluates whether the items for a construct truly converge on the underlying latent variable. The transcript frames this as a degree of agreement among theoretically related indicators—for example, multiple items intended to measure job satisfaction should be statistically related to one another because they reflect the same latent concept. In SmartPLS, convergent validity is assessed using the Average Variance Extracted (AVE). AVE represents how much of the variance in the observed indicators can be explained by the latent variable; an AVE greater than 0.50 is used as empirical evidence that the construct captures more than half of the indicators’ variance, supporting convergent validity.
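For a standardized measurement model, AVE reduces to the mean of the squared loadings, which is why AVE > 0.50 means the construct explains more than half of its indicators' variance. A minimal sketch, again with hypothetical loadings:

```python
import numpy as np

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted: mean squared standardized loading."""
    return (loadings**2).mean()

# Hypothetical standardized loadings, e.g. four job-satisfaction items
print(ave(np.array([0.78, 0.81, 0.74, 0.69])))  # ≈ 0.572, above the 0.50 threshold
```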

In the SmartPLS workflow described, the process begins with running a basic CB-SEM/PLS algorithm (the transcript mentions "calculate basic CVM algorithm," followed by a measurement-model check). Fit statistics are reviewed first: the p-value is expected to be significant for the sample size, the model fit index is described as approaching 0.90 (satisfactory), and the chi-square divided by degrees of freedom is expected to fall in a reasonable range of 2 to 5.

After confirming that the model fit and loadings are acceptable (with a note that some loadings may be problematic), the reliability and convergent validity results are checked for two constructs labeled CC and OP. For both constructs, Cronbach’s alpha is reported as above 0.70, and composite reliability is also above 0.70. AVE is reported as above 0.50 as well, leading to the conclusion that both constructs are reliable and show convergent validity.

Discriminant validity is flagged as the remaining requirement for full construct validity, but it is deferred to the next session. The practical takeaway is clear: in SmartPLS measurement-model assessment, reliability (Cronbach’s alpha and composite reliability) and convergent validity (AVE) provide the evidence that the indicators consistently and collectively represent their intended latent constructs before moving on to discriminant validity.

Cornell Notes

Construct reliability and convergent validity come after checking factor loadings and measurement-model fit in SmartPLS. Reliability is assessed with Cronbach’s alpha and composite reliability, using 0.70 as a benchmark for modest reliability (Hair and Bernstein guidelines). Convergent validity is assessed with Average Variance Extracted (AVE), where AVE > 0.50 indicates the latent construct explains more than half of the variance in its indicators. In the example with constructs CC and OP, both constructs meet reliability thresholds (alpha and composite reliability above 0.70) and meet convergent validity (AVE above 0.50). Discriminant validity is treated as the next step in a later session.

What does construct reliability mean in measurement-model terms, and how is it quantified in SmartPLS?

Construct reliability reflects whether a set of indicators measures a construct consistently—capturing the “consistency” idea behind reliability. In SmartPLS, it’s assessed using two statistics: Cronbach’s alpha and composite reliability. The transcript uses Hair and Bernstein-style guidance, treating 0.70 as a benchmark for modest reliability. If both coefficients exceed 0.70, the construct is considered reliable.

How does convergent validity differ from reliability, and what statistic is used to test it?

Reliability focuses on internal consistency of indicators, while convergent validity checks whether multiple indicators that should measure the same concept actually relate to one another through the latent variable. The transcript defines convergent validity as the degree to which theoretically related items “converge” in practice. In SmartPLS, this is tested with Average Variance Extracted (AVE), which measures how much variance in the indicators is explained by the latent construct. AVE > 0.50 is used as evidence of convergent validity.

What does an AVE value greater than 0.50 imply about the latent construct’s relationship to its indicators?

An AVE above 0.50 implies that the latent variable explains more than half of the variance in its observed indicators. In the transcript’s example framing (e.g., job satisfaction measured by multiple items), this means the indicators share enough common variance attributable to the underlying construct rather than being mostly noise or unrelated measurement.

What is the practical SmartPLS workflow order for measurement-model assessment described here?

The transcript follows a sequence: first check measurement-model fit and factor loadings, then move to construct reliability and convergent validity. Fit is assessed using criteria such as a significant p-value, a fit measure approaching 0.90, and chi-square/df in a reasonable range (2 to 5). After that, Cronbach's alpha and composite reliability are checked for reliability, and AVE is checked for convergent validity. Discriminant validity is explicitly deferred to a later lecture.
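The decision sequence above can be sketched as a simple threshold check. This assumes the fit measure "approaching 90" is a CFI-type index scored out of 1.0 (target ≈ 0.90); the function name and example values are hypothetical, and the thresholds follow the transcript.

```python
def measurement_model_ok(chi2_df: float, fit_index: float,
                         alpha: float, cr: float, ave: float) -> bool:
    """Apply the transcript's thresholds in order: fit, reliability, convergent validity."""
    fit_ok = 2 <= chi2_df <= 5 and fit_index >= 0.90   # chi-square/df and CFI-type fit
    reliable = alpha > 0.70 and cr > 0.70              # Cronbach's alpha, composite reliability
    convergent = ave > 0.50                            # AVE benchmark
    return fit_ok and reliable and convergent

# Hypothetical results resembling the CC/OP example
print(measurement_model_ok(chi2_df=2.8, fit_index=0.91,
                           alpha=0.84, cr=0.88, ave=0.57))  # True
```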

In the example with constructs CC and OP, what results were used to claim reliability and convergent validity?

For both CC and OP, Cronbach’s alpha is reported as above 0.70, and composite reliability is also above 0.70. Convergent validity is supported because AVE is reported as above 0.50. With these thresholds met, both constructs are treated as reliable and convergently valid in the measurement model.

Review Questions

  1. Why are Cronbach’s alpha and composite reliability both used to assess construct reliability, and what threshold is applied?
  2. What does AVE measure, and why is AVE > 0.50 considered evidence of convergent validity?
  3. After checking factor loadings and model fit, what are the next two construct-level validity steps before discriminant validity?

Key Points

  1. Construct reliability in SmartPLS is assessed using Cronbach’s alpha and composite reliability, with 0.70 used as a benchmark for modest reliability.

  2. Convergent validity is assessed using Average Variance Extracted (AVE), with AVE > 0.50 indicating the latent construct explains more than half of indicator variance.

  3. Reliability and convergent validity are checked after factor loadings and measurement-model fit are reviewed.

  4. Model fit is evaluated using criteria such as a significant p-value, a fit measure approaching 0.90, and chi-square/df in the 2–5 range.

  5. In the example, both CC and OP meet reliability thresholds (alpha and composite reliability above 0.70) and convergent validity (AVE above 0.50).

  6. Discriminant validity is required for full construct validity but is handled in a later session.

Highlights

Reliability and convergent validity are the measurement-model “next step” after loadings and fit: alpha/composite reliability first, then AVE.
AVE is treated as the core convergent-validity statistic because it quantifies how much variance in indicators is explained by the latent construct.
Hair and Bernstein-style thresholds drive decisions: 0.70 for reliability and 0.50 for AVE.
The CC and OP example clears both reliability and convergent validity thresholds, setting up the remaining discriminant-validity check for later.

Mentioned

  • Hair
  • Bernstein
  • CB-SEM
  • PLS
  • AVE