# SmartPLS4 Series 7.2 - Concept and Assessment of Construct Reliability
Based on Research With Fawad's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing to their channel.
## Briefing
Construct reliability in reflective measurement models hinges on two checks: whether indicators reliably reflect their latent construct, and whether the construct’s indicators hang together consistently. The first step is indicator reliability, which is assessed by how much of each indicator’s variance is explained by its construct. In practice, this means squaring the indicator loadings (the loading is the correlation between indicator and construct) to obtain explained variance, or communality. Loadings above 0.708 are the benchmark because they imply the construct explains more than 50% of the indicator’s variance—an acceptable level of indicator reliability.
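As a quick numeric sketch of that communality logic (the loading values below are hypothetical, not from the video's model), note that 0.708² ≈ 0.50, which is exactly where the threshold comes from:

```python
# Indicator reliability sketch: square each loading to get the variance
# the construct explains (communality). Item names and loadings here
# are made up for illustration.
loadings = {"ITEM1": 0.82, "ITEM2": 0.71, "ITEM3": 0.65}

for item, loading in loadings.items():
    communality = loading ** 2  # explained variance
    status = "OK (>50% explained)" if loading > 0.708 else "below threshold"
    print(f"{item}: loading={loading:.3f}, communality={communality:.3f} -> {status}")
```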
When an indicator’s loading falls below the threshold, removing it is not automatic. Researchers are advised to examine how deleting the indicator affects both reliability and validity. In social science work—especially with newly developed scales—loadings between 0.4 and 0.708 may be candidates for removal only if the deletion improves internal consistency reliability or convergent validity. Content validity adds another constraint: if an item’s wording or meaning is important to the construct’s coverage, low-loading indicators may need to stay even if they underperform statistically. A hard rule is also given for extremely weak indicators: items with loadings below 0.4 should be eliminated from the measurement model.
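The retention logic can be condensed into a decision sketch. The function and its flag inputs are hypothetical illustrations of the rules above; in practice the researcher sets these flags after re-estimating the model without the item, and content-validity judgments cannot be fully automated:

```python
def indicator_decision(loading: float,
                       deletion_improves_reliability_or_validity: bool,
                       needed_for_content_validity: bool) -> str:
    """Hedged sketch of the indicator-retention heuristic described above."""
    if loading >= 0.708:
        return "keep (acceptable indicator reliability)"
    if loading < 0.4:
        return "remove (extremely weak indicator)"
    # 0.4 <= loading < 0.708: removal is conditional, not automatic
    if needed_for_content_validity:
        return "keep (content validity outweighs the low loading)"
    if deletion_improves_reliability_or_validity:
        return "remove (deletion improves internal consistency or convergent validity)"
    return "keep (deletion would not improve reliability or validity)"

print(indicator_decision(0.55, True, False))  # -> remove (...)
```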
After indicator reliability comes internal consistency reliability, which evaluates whether the set of indicators consistently measures the construct. In SmartPLS, the primary metric is composite reliability (often written as CR or ρc). Higher values indicate stronger internal consistency, with values above 0.70 typically considered reliable. The guidance is more nuanced: 0.60–0.70 is acceptable for exploratory research; 0.70–0.90 is satisfactory to good; and values above 0.90—especially above 0.95—can be problematic because they suggest redundancy among indicators. That redundancy can inflate correlations among indicator error terms, a phenomenon linked to “straightlining” response patterns.
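For reference, composite reliability for standardized indicators is computed from the loadings λᵢ and their error variances. This is the standard ρc formula from the PLS-SEM literature, added here for context rather than taken from the video:

```latex
\rho_c = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}
{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}
```

Here k is the number of indicators and 1 − λᵢ² is the error variance of standardized indicator i.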
Cronbach’s alpha (α) is another internal consistency measure, using similar thresholds to composite reliability. However, alpha assumes tau-equivalence: that all indicator loadings are equal in the population. When that assumption fails, alpha tends to produce lower reliability estimates than composite reliability. Composite reliability is described as more liberal, Cronbach’s alpha more conservative, with the construct’s “true” reliability falling between them. SmartPLS also reports rho_A (ρA), a reliability coefficient positioned as an intermediate compromise between alpha and composite reliability.
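A small sketch with made-up loadings shows why violating tau-equivalence pushes standardized alpha below ρc. Under a one-factor model with standardized items, the model-implied correlation between items i and j is λᵢλⱼ, which lets us compute both coefficients directly from the loadings:

```python
from itertools import combinations

# Hypothetical, deliberately unequal loadings (tau-equivalence violated)
loadings = [0.9, 0.8, 0.5]
k = len(loadings)

# Composite reliability (rho_c) for standardized indicators
sum_l = sum(loadings)
rho_c = sum_l**2 / (sum_l**2 + sum(1 - l**2 for l in loadings))

# Standardized Cronbach's alpha from the model-implied inter-item correlations
corrs = [li * lj for li, lj in combinations(loadings, 2)]
r_bar = sum(corrs) / len(corrs)
alpha = k * r_bar / (1 + (k - 1) * r_bar)

print(f"rho_c = {rho_c:.3f}")  # ~0.788
print(f"alpha = {alpha:.3f}")  # ~0.767, lower because loadings are unequal
```

With equal loadings the two values coincide; the gap widens as loadings become more unequal, which is why the construct's "true" reliability is said to fall between them.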
In the workflow described, SmartPLS4 output is accessed via the report under “Quality criteria → Construct reliability and validity → Overview.” The example results show rho_A sitting between Cronbach’s alpha and composite reliability, supporting the conclusion that the construct is reliable. The overall takeaway is procedural: verify indicator loadings first (with communality logic and content-validity safeguards), then confirm internal consistency using CR, alpha, and rho_A while watching for signs of redundancy.
## Cornell Notes
Reflective construct reliability in SmartPLS4 is assessed in two layers. First, indicator reliability checks whether each indicator’s variance is explained by its latent construct; squaring the loading gives explained variance, and loadings above 0.708 imply the construct explains over 50% of indicator variance. Second, internal consistency reliability evaluates whether indicators work together consistently, mainly using composite reliability (CR), with common thresholds: >0.70 reliable, 0.60–0.70 acceptable in exploratory work, and >0.95 potentially problematic due to redundancy/straightlining. Cronbach’s alpha (α) is conservative because it assumes equal loadings (tau-equivalence). SmartPLS’s rho_A (ρA) provides an intermediate reliability estimate between alpha and CR, and reliability results can be viewed under Quality criteria → Construct reliability and validity → Overview.
- How does indicator reliability get computed for reflective measurement models, and why does 0.708 matter?
- If an indicator’s loading is below 0.708, what decision rules guide whether to remove it?
- What is composite reliability (CR) and how are its thresholds interpreted?
- Why can Cronbach’s alpha (α) disagree with composite reliability (CR)?
- What role does rho_A (ρA) play in reliability assessment?
## Review Questions
- What does squaring an indicator loading represent in the context of indicator reliability, and how does the 0.708 threshold translate into explained variance?
- Under what conditions should an indicator with loading between 0.4 and 0.708 be removed, and how does content validity affect that choice?
- Why might Cronbach’s alpha produce lower reliability than composite reliability in reflective models?
## Key Points
1. Indicator reliability for reflective models is assessed by squaring indicator loadings to get explained variance (communality).
2. Loadings above 0.708 indicate acceptable indicator reliability because they imply the construct explains more than 50% of indicator variance.
3. Indicators with loadings between 0.4 and 0.708 should be removed only if deletion improves internal consistency reliability and/or convergent validity.
4. Content validity can justify retaining low-loading indicators; items below 0.4 should be eliminated.
5. Internal consistency reliability is primarily evaluated using composite reliability (CR), with >0.70 generally reliable and >0.95 potentially indicating redundancy/straightlining.
6. Cronbach’s alpha is conservative because it assumes tau-equivalence (equal indicator loadings), so it can understate reliability relative to CR.
7. SmartPLS’s rho_A (ρA) provides an intermediate reliability estimate between Cronbach’s alpha and composite reliability and can be checked in the report under Quality criteria.