
11. SEM | SPSS AMOS - How to Establish Composite Reliability and Convergent Validity

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Check reliability and validity only after factor loadings and overall model fit have been assessed.

Briefing

Composite reliability and convergent validity come after factor loadings and model fit are checked, and they determine whether each latent construct is measured consistently and whether its indicators truly “belong together.” Construct reliability is about consistency: a measure is reliable when it produces the same results under the same conditions across occasions. In structural equation modeling, construct reliability is commonly assessed using composite reliability (often preferred in SEM) and Cronbach’s alpha. For this workflow, composite reliability is calculated from standardized factor loadings and error variances, using a formula that effectively rewards strong loadings and penalizes measurement error. The benchmark referenced is 0.7 (from Nunnally and Bernstein), where values above 0.7 indicate acceptable or modest reliability.

Composite reliability is computed per construct, not across all constructs at once. The process in AMOS starts by running the model and extracting the standardized regression weights (the factor loadings) from the Estimates table in the output. Each indicator's error variance is estimated as (1 − loading²), and composite reliability is then calculated for each latent variable separately. In the example, composite reliability values for constructs such as Authentic leadership behavior, Ethical behavior, and Life satisfaction all exceed 0.7, which is treated as evidence that the measures are consistent enough to yield stable results.
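The per-construct calculation described above can be sketched in a few lines of Python. The loadings below are hypothetical placeholders, not the values from the video; the formula is the standard composite-reliability formula, with each standardized indicator's error variance taken as 1 − λ².

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each standardized indicator's error variance is 1 - loading^2."""
    sum_l = sum(loadings)
    sum_err = sum(1 - l**2 for l in loadings)
    return sum_l**2 / (sum_l**2 + sum_err)

# Hypothetical standardized loadings for three indicators of one construct
cr = composite_reliability([0.78, 0.82, 0.74])
print(round(cr, 3))  # → 0.824, above the 0.7 benchmark
```

Note that CR is computed once per latent variable, from only that construct's loadings, matching the per-construct workflow described above.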

Once reliability is established, the next step is construct validity, which asks whether the chosen items actually measure the intended latent construct. Construct validity is assessed through convergent validity and discriminant validity; this session focuses on convergent validity. Convergent validity checks whether multiple indicators that are supposed to measure the same construct are indeed related—meaning the indicators should converge to represent the underlying latent variable. Practically, this is linked to unidimensionality: indicators of the same construct should correlate significantly and not behave like unrelated measures.

Convergent validity is assessed using Average Variance Extracted (AVE). AVE reflects how much of the indicators’ variance is explained by the latent construct; an AVE greater than 0.50 is used as empirical evidence of convergent validity, because the latent variable accounts for more than half of the variance in its indicators. AVE is calculated by squaring each factor loading, summing those squared loadings, and dividing by the number of items for that latent variable.
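The AVE arithmetic is simple enough to sketch directly; again, the loadings here are hypothetical, not the values from the video.

```python
def average_variance_extracted(loadings):
    """AVE = sum of squared standardized loadings / number of indicators."""
    return sum(l**2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for three indicators of one construct
ave = average_variance_extracted([0.78, 0.82, 0.74])
print(round(ave, 3))  # → 0.609, above the 0.50 threshold
```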

In AMOS, the same factor loadings used for composite reliability feed into the AVE calculation. The example produces AVE values for Authentic leadership behavior, Ethical behavior, and Life satisfaction that are interpreted as acceptable (values around 0.47–0.67 are discussed), and the workflow notes that when a construct's AVE is borderline, deleting a weak indicator can be considered, though deletion may be unnecessary if composite reliability for that construct is above 0.7. With both composite reliability and convergent validity in place, the analysis is positioned to move on to discriminant validity, which tests whether constructs are distinct from one another.

Cornell Notes

Composite reliability and convergent validity are the next checks after factor loadings and model fit. Composite reliability measures consistency of a latent construct’s indicators and is computed in AMOS from standardized factor loadings and error variances; values above 0.7 (Nunnally & Bernstein) indicate acceptable reliability per construct. Construct validity then asks whether items truly measure the intended latent variable, with convergent validity focusing on whether indicators of the same construct “converge.” Convergent validity is assessed using AVE (Average Variance Extracted), calculated from squared factor loadings divided by the number of indicators; AVE above 0.50 supports convergent validity. The workflow uses the same AMOS standardized regression weights for both calculations and treats results above thresholds as evidence that constructs are reliable and valid.

How does construct reliability differ from general reliability, and how is it calculated in SEM/AMOS?

Reliability is about consistency: a measure should produce the same results under the same conditions across occasions. Construct reliability applies that idea to latent constructs measured by multiple indicators. In SEM, composite reliability is calculated from standardized factor loadings and the error variances derived from them: each indicator's standardized loading (λ) is squared and subtracted from 1 to estimate its error variance (1 − λ²). Composite reliability is then computed per latent construct from the squared sum of loadings and the summed error variances, with interpretation benchmarked at 0.7 for acceptable/modest reliability.

Why must composite reliability be computed separately for each construct rather than across all constructs?

Composite reliability is tied to a specific latent variable and its set of indicators. Mixing indicators from different constructs would produce a single reliability value that does not reflect the consistency of any one construct’s measurement model. The workflow explicitly instructs calculating composite reliability for each construct (e.g., Authentic leadership behavior, Ethical behavior, Life satisfaction) using only the factor loadings belonging to that construct.

Where do the factor loadings needed for composite reliability and AVE come from in AMOS?

After running the SEM model in AMOS, the standardized regression weights (standardized factor loadings) are taken from the output under estimates. Those standardized regression weights are copied into a calculator or spreadsheet. The same loading values then feed both composite reliability and AVE calculations, ensuring the reliability and validity computations align with the model’s estimated measurement relationships.

What does convergent validity test, and what threshold is used to judge it?

Convergent validity tests whether indicators that are supposed to measure the same latent construct are actually related and converge to represent the underlying variable. The session links this to unidimensionality: indicators of the same construct should correlate significantly and not act as unrelated measures. Convergent validity is assessed using AVE, and an AVE greater than 0.50 is treated as evidence that the latent construct explains more than half of the variance in its indicators.

How is AVE (Average Variance Extracted) computed from factor loadings?

AVE is computed by squaring each factor loading, summing those squared values, and dividing by the number of indicators for the latent construct. In formula form: AVE = Σλ² / n, where λ is an indicator's standardized factor loading and n is the number of items/indicators for that latent variable.

When might an indicator be deleted in this workflow, and what reference point is mentioned?

The workflow notes that if an AVE-related value is close to the 0.50 threshold, deletion can be considered. A specific reference is given: if the composite reliability for that particular latent variable is greater than 0.7, deletion may not be necessary even when AVE is borderline. The example mentions AVE values around the threshold and treats the overall construct as acceptable when reliability is strong.
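This decision rule can be sketched as a small helper. The 0.50 and 0.7 thresholds come from the workflow; the helper function itself is hypothetical, meant only to make the logic explicit.

```python
def deletion_worth_considering(ave, cr):
    """Sketch of the rule of thumb: a borderline AVE (just under 0.50)
    need not force indicator deletion when the construct's composite
    reliability exceeds 0.7."""
    if ave >= 0.50:
        return False       # convergent validity already supported
    return cr <= 0.70      # borderline AVE is tolerable only if CR > 0.7

print(deletion_worth_considering(0.47, 0.82))  # → False: strong CR offsets borderline AVE
print(deletion_worth_considering(0.47, 0.65))  # → True: both fall below their thresholds
```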

Review Questions

  1. What inputs from AMOS are required to compute composite reliability and AVE, and how are they obtained from the output?
  2. Explain why AVE is calculated using squared factor loadings and how the 0.50 threshold is interpreted.
  3. What is the conceptual difference between construct reliability and convergent validity, and how does each one use factor loadings?

Key Points

  1. Check reliability and validity only after factor loadings and overall model fit have been assessed.

  2. Compute composite reliability per latent construct using standardized factor loadings and error variance estimated as (1 − loading²).

  3. Use the 0.7 benchmark (Nunnally & Bernstein) to interpret composite reliability as acceptable/modest reliability.

  4. Establish construct validity by testing convergent validity (and later discriminant validity) to confirm items measure the intended construct.

  5. Assess convergent validity with AVE, calculated as the sum of squared factor loadings divided by the number of indicators; interpret AVE > 0.50 as evidence of convergent validity.

  6. If AVE is borderline, consider indicator deletion, but the workflow notes that strong composite reliability (> 0.7) can reduce the need to delete indicators.

Highlights

Composite reliability in AMOS is built directly from standardized factor loadings: each indicator's error variance is estimated as (1 − loading²), and reliability is computed per construct.
Convergent validity is operationalized through AVE, where squared loadings quantify how much of indicator variance the latent construct explains.
A practical rule of thumb is used throughout: composite reliability above 0.7 and AVE above 0.50 are treated as acceptable evidence of measurement quality.
The same standardized regression weights from AMOS feed both reliability (CR) and convergent validity (AVE), keeping the calculations consistent with the estimated measurement model.
