
#SmartPLS4 Series 7.2 - Concept and Assessment of Construct Reliability

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Indicator reliability for reflective models is assessed by squaring indicator loadings to get explained variance (communality).

Briefing

Construct reliability in reflective measurement models hinges on two checks: whether indicators reliably reflect their latent construct, and whether the construct’s indicators hang together consistently. The first step is indicator reliability, which is assessed by how much of each indicator’s variance is explained by its construct. In practice, this means squaring the indicator loadings (the loading is the correlation between indicator and construct) to obtain explained variance, or communality. Loadings above 0.708 are the benchmark because they imply the construct explains more than 50% of the indicator’s variance—an acceptable level of indicator reliability.
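The squaring logic can be sketched in a few lines of Python; the loading values below are hypothetical, not taken from the video:

```python
# Hypothetical standardized indicator loadings (each loading is the
# correlation between an indicator and its latent construct).
loadings = {"ind1": 0.82, "ind2": 0.75, "ind3": 0.708, "ind4": 0.65}

for name, loading in loadings.items():
    communality = loading ** 2          # variance explained by the construct
    acceptable = communality > 0.50     # equivalent to loading > 0.708
    print(f"{name}: loading={loading:.3f}, "
          f"communality={communality:.3f}, acceptable={acceptable}")
```

Note that 0.708 is simply the square root of 0.50, which is why it marks the point where the construct explains more of an indicator's variance than error does.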

When an indicator’s loading falls below the threshold, removing it is not automatic. Researchers are advised to examine how deleting the indicator affects both reliability and validity. In social science work—especially with newly developed scales—loadings between 0.4 and 0.708 may be candidates for removal only if the deletion improves internal consistency reliability or convergent validity. Content validity adds another constraint: if an item’s wording or meaning is important to the construct’s coverage, low-loading indicators may need to stay even if they underperform statistically. A hard rule is also given for extremely weak indicators: items with loadings below 0.4 should be eliminated from the measurement model.
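These decision rules can be written out as a small helper function; the function name and boolean flags below are illustrative, not part of the SmartPLS API:

```python
def indicator_decision(loading, deletion_improves_reliability_or_validity,
                       content_critical):
    """Apply the retention heuristics described above (thresholds from the text)."""
    if loading < 0.4:
        return "eliminate"   # hard rule: extremely weak indicators are removed
    if loading >= 0.708:
        return "retain"      # acceptable indicator reliability
    # Borderline zone (0.4-0.708): remove only if deletion improves internal
    # consistency reliability or convergent validity AND the item is not
    # needed for the construct's content coverage.
    if deletion_improves_reliability_or_validity and not content_critical:
        return "remove"
    return "retain"

print(indicator_decision(0.35, False, False))  # eliminate
print(indicator_decision(0.60, True, True))    # retain (content validity wins)
print(indicator_decision(0.60, True, False))   # remove
```

The content-validity check comes last deliberately: a statistically weak item can still be kept when its wording covers part of the construct's domain.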

After indicator reliability comes internal consistency reliability, which evaluates whether the set of indicators consistently measures the construct. In SmartPLS, the primary metric is composite reliability (often written as CR or ρc). Higher values indicate stronger internal consistency, with values above 0.70 typically considered reliable. The guidance is more nuanced: 0.60–0.70 is acceptable for exploratory research; 0.70–0.90 is satisfactory to good; and values above 0.90—especially above 0.95—can be problematic because they suggest redundancy among indicators. That redundancy can inflate correlations among indicator error terms, a phenomenon linked to “straightlining” response patterns.
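For reference, ρc for standardized indicators is commonly computed as (Σλ)² / ((Σλ)² + Σ(1 − λ²)); a minimal sketch with hypothetical loadings:

```python
def composite_reliability(loadings):
    """Joereskog's rho_c for standardized indicators:
    squared sum of loadings, divided by itself plus the summed error
    variances, where each error variance is 1 - loading**2."""
    total = sum(loadings)
    error_variance = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error_variance)

cr = composite_reliability([0.82, 0.75, 0.71, 0.65])
print(f"CR = {cr:.2f}")  # lands in the 0.70-0.90 "satisfactory to good" band
```

In practice SmartPLS reports this value directly; computing it by hand is mainly useful for seeing why near-identical loadings push CR toward 1 and trigger the redundancy warning.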

Cronbach’s alpha (α) is another internal consistency measure, using similar thresholds to composite reliability. However, alpha assumes tau-equivalence: that all indicator loadings are equal in the population. When that assumption fails, alpha tends to produce lower reliability estimates than composite reliability. Composite reliability is described as more liberal, Cronbach’s alpha more conservative, with the construct’s “true” reliability falling between them. SmartPLS also reports rho_A (ρA), a reliability coefficient positioned as an intermediate compromise between alpha and composite reliability.
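Cronbach's alpha itself is straightforward to compute from raw item scores; a sketch using NumPy (the respondent data here are invented for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Invented 5-respondent, 3-item data set on a 1-5 scale
scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because alpha weights every item equally, it understates reliability when true loadings differ, which is exactly the tau-equivalence issue the transcript describes.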

In the workflow described, SmartPLS4 output is accessed via the report under “Quality criteria → Construct reliability and validity → Overview.” The example results show rho_A sitting between Cronbach’s alpha and composite reliability, supporting the conclusion that the construct is reliable. The overall takeaway is procedural: verify indicator loadings first (with communality logic and content-validity safeguards), then confirm internal consistency using CR, alpha, and rho_A while watching for signs of redundancy.

Cornell Notes

Reflective construct reliability in SmartPLS4 is assessed in two layers. First, indicator reliability checks whether each indicator’s variance is explained by its latent construct; squaring the loading gives explained variance, and loadings above 0.708 imply the construct explains over 50% of indicator variance. Second, internal consistency reliability evaluates whether indicators work together consistently, mainly using composite reliability (CR), with common thresholds: >0.70 reliable, 0.60–0.70 acceptable in exploratory work, and >0.95 potentially problematic due to redundancy/straightlining. Cronbach’s alpha (α) is conservative because it assumes equal loadings (tau-equivalence). SmartPLS’s rho_A (ρA) provides an intermediate reliability estimate between alpha and CR, and reliability results can be viewed under Quality criteria → Construct reliability and validity → Overview.

How does indicator reliability get computed for reflective measurement models, and why does 0.708 matter?

Indicator reliability is tied to communality: the proportion of an indicator’s variance explained by its latent construct. In practice, the approach uses the indicator loading (the correlation between indicator and construct) and squares it to obtain explained variance. A loading above 0.708 is recommended because 0.708² ≈ 0.50, meaning the construct explains more than 50% of the indicator’s variance—an acceptable indicator reliability level.

If an indicator’s loading is below 0.708, what decision rules guide whether to remove it?

Removal is not automatic. The guidance is to check whether deleting the indicator improves internal consistency reliability and/or convergent validity. Indicators with loadings between 0.4 and 0.708 may be considered for removal only when those improvements occur. Content validity also constrains decisions: if removing an item harms the construct’s coverage, it should be retained even with a low loading. Indicators with loadings below 0.4 should always be eliminated.

What is composite reliability (CR) and how are its thresholds interpreted?

Composite reliability (CR) measures internal consistency reliability for a construct in PLS-SEM. Higher CR indicates stronger internal consistency. Typical thresholds: values over 0.70 are reliable; 0.60–0.70 is acceptable for exploratory research; 0.70–0.90 is satisfactory to good. Values above 0.90—and especially above 0.95—are flagged as problematic because they may indicate redundant indicators and inflated correlations among indicator error terms (linked to straightlining).

Why can Cronbach’s alpha (α) disagree with composite reliability (CR)?

Cronbach’s alpha assumes tau-equivalence, meaning all indicator loadings are equal in the population. When indicator loadings differ, alpha becomes conservative and often yields lower reliability than CR. The transcript notes that CR is more liberal under these conditions, so the “true” reliability is expected to lie between alpha and CR.

What role does rho_A (ρA) play in reliability assessment?

Rho_A (ρA) is positioned as an intermediate reliability coefficient between Cronbach’s alpha and composite reliability. SmartPLS reports rho_A so users can gauge reliability when alpha and CR diverge due to tau-equivalence assumptions. In the example, rho_A falls between alpha and CR, supporting the reliability conclusion.

Review Questions

  1. What does squaring an indicator loading represent in the context of indicator reliability, and how does the 0.708 threshold translate into explained variance?
  2. Under what conditions should an indicator with loading between 0.4 and 0.708 be removed, and how does content validity affect that choice?
  3. Why might Cronbach’s alpha produce lower reliability than composite reliability in reflective models?

Key Points

  1. Indicator reliability for reflective models is assessed by squaring indicator loadings to get explained variance (communality).

  2. Loadings above 0.708 indicate acceptable indicator reliability because they imply the construct explains more than 50% of indicator variance.

  3. Indicators with loadings between 0.4 and 0.708 should be removed only if deletion improves internal consistency reliability and/or convergent validity.

  4. Content validity can justify retaining low-loading indicators; items below 0.4 should be eliminated.

  5. Internal consistency reliability is primarily evaluated using composite reliability (CR), with >0.70 generally reliable and >0.95 potentially indicating redundancy/straightlining.

  6. Cronbach’s alpha is conservative because it assumes tau-equivalence (equal indicator loadings), so it can understate reliability relative to CR.

  7. SmartPLS’s rho_A (ρA) provides an intermediate reliability estimate between Cronbach’s alpha and composite reliability and can be checked in the report under Quality criteria.

Highlights

Indicator reliability uses communality logic: squaring loadings shows how much of each indicator’s variance the construct explains.
A loading threshold of 0.708 corresponds to explaining about half of an indicator’s variance (0.708² ≈ 0.50).
Composite reliability above 0.95 is treated as a warning sign for redundancy and inflated correlations among error terms.
Cronbach’s alpha can underperform relative to CR because it assumes equal loadings (tau-equivalence).
SmartPLS reports rho_A as a compromise reliability coefficient between alpha and CR.

Topics

Mentioned

  • PLS-SEM
  • SmartPLS4
  • CR (ρc)
  • α
  • ρA