LESSON 25 - ALPHA COEFFICIENT RELIABILITY: DETERMINING ALPHA COEFFICIENT RELIABILITY USING SPSS
Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Internal consistency reliability assesses whether items within a single instrument are homogeneous and consistent with each other.
Briefing
Internal consistency reliability hinges on whether items in a single instrument move together. For quantitative measures administered once, researchers typically rely on alpha coefficient methods, most notably Cronbach's alpha for Likert-type (agree/disagree) or other "no right/wrong" item formats, and use cutoffs to judge whether the scale is dependable. A reliability coefficient of 0.7 or higher is generally considered acceptable. A common interpretation scale for Cronbach's alpha runs from weak to strong: values above 0.9 indicate high internal consistency; 0.7 to 0.9 is acceptable to good; 0.6 to 0.7 is weaker but sometimes tolerated; 0.5 to 0.6 is weak; and below 0.5 suggests no meaningful internal consistency.
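Although the lesson computes alpha in SPSS, the coefficient itself comes from a simple formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below is a minimal Python illustration of that formula, not part of the lesson; the response matrix is invented for demonstration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses: 5 respondents x 4 items (1-5 scale)
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(scores)  # lands above the 0.9 "high" cutoff for this toy data
```

With the cutoffs above, a value like this would be read as high internal consistency; the same interpretation rules apply to the alpha SPSS prints.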
The lesson also lays out practical ways to raise alpha reliability. Increasing the sample size, adding more items to the instrument, and piloting the instrument can improve the coefficient—but piloting is framed as an improvement step rather than a reliability test in itself. Reliability is treated as distinct from validity: piloting may help reliability, yet it does not establish validity.
To demonstrate the process, the class walks through running Cronbach's alpha in SPSS. After entering the scale items in the Data Editor, the workflow is Analyze → Scale → Reliability Analysis, selecting "Alpha" as the model and, under the Statistics options, requesting the inter-item correlations. The output produces three key tables: one reports the number of items used, another reports the Cronbach's alpha value, and a third provides the inter-item correlation matrix, showing how each item correlates with every other item. In the worked example, 18 items were used and Cronbach's alpha came out to 0.827, which falls in the "high/strong" reliability range and supports confidence in using the questionnaire with respondents.
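The inter-item correlation matrix SPSS prints can also be reproduced for inspection outside SPSS. The Python sketch below uses invented data and additionally shows the standardized alpha, which is derived directly from the mean inter-item correlation; this is an illustration of the relationship, not the lesson's SPSS output.

```python
import numpy as np

# Hypothetical responses: 5 respondents x 3 Likert items
scores = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
])

# Inter-item correlation matrix (items as columns), analogous to the SPSS table
corr = np.corrcoef(scores, rowvar=False)

# Standardized alpha from the mean inter-item correlation r_bar:
#   alpha_std = k * r_bar / (1 + (k - 1) * r_bar)
k = scores.shape[1]
r_bar = corr[~np.eye(k, dtype=bool)].mean()   # mean of off-diagonal correlations
alpha_std = k * r_bar / (1 + (k - 1) * r_bar)
```

Items that correlate consistently and positively with one another push the mean correlation, and hence alpha, upward; a near-zero or negative row in the matrix flags an item that is dragging reliability down.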
Beyond Cronbach’s alpha, the lesson introduces a second member of the alpha coefficient family: the Kuder–Richardson formulas 20 and 21 (KR-20/21). These are positioned as more appropriate for knowledge or achievement questions where responses can be scored as correct versus incorrect. KR-20 is highlighted as the most frequently applied homogeneity index and is described as relying on the ratio of correct and incorrect answers across items. A key constraint is emphasized: KR-20/21 is valid only when items divide into two categories (e.g., correct/incorrect) and carry equal weight. If items have different weights, the method should not be used.
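The standard KR-20 formula, KR-20 = k/(k-1) * (1 - sum(p*q) / total-score variance), where p is the proportion answering each item correctly and q = 1 - p, can be sketched as follows. This is an illustrative Python implementation with invented 0/1 data, not the lesson's own computation.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 for a respondents-by-items matrix of dichotomous (0/1) scores.

    Valid only when items fall into two categories (correct/incorrect)
    and all items carry equal weight.
    """
    k = responses.shape[1]
    p = responses.mean(axis=0)      # proportion correct per item
    q = 1 - p
    # Population variance (ddof=0) keeps p*q and the total-score
    # variance on the same footing
    total_var = responses.sum(axis=1).var(ddof=0)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical knowledge-test data: 4 respondents x 3 items, 1 = correct
answers = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
])
reliability = kr20(answers)
```

For dichotomous, equally weighted items, KR-20 reduces to the same quantity as Cronbach's alpha, which is why the lesson groups them in one family.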
Overall, the session connects theory to practice: internal consistency reliability is assessed through alpha-based methods in SPSS for single-administration instruments, with Cronbach’s alpha suited to Likert-style scales and KR-20/21 suited to dichotomously scored knowledge tests. The next step in the broader course is shifting from reliability to validity for qualitative instruments.
Cornell Notes
Internal consistency reliability measures how homogeneous items are within a single instrument administered once. Cronbach’s alpha is the go-to coefficient for Likert-type scales (agree/disagree or other “no right/wrong” items), with common interpretation: >0.9 high, 0.7–0.9 acceptable, 0.6–0.7 acceptable but weaker, 0.5–0.6 weak, and <0.5 no internal consistency. In SPSS, reliability analysis is run via Analyze → Scale → Reliability Analysis, selecting Alpha and using the output tables for the number of items, the alpha value, and the inter-item correlation matrix. The lesson also introduces KR-20/21 for knowledge tests scored as correct/incorrect, noting it requires two-category items and equal item weighting.
Why is Cronbach’s alpha the preferred reliability test for Likert-type instruments?
How should a researcher interpret Cronbach’s alpha values?
What steps in SPSS produce Cronbach’s alpha, and what do the output tables mean?
How can Cronbach’s alpha be improved, and what is the role of piloting?
When should KR-20/21 be used instead of Cronbach’s alpha?
Review Questions
- What Cronbach’s alpha range would you use to justify that a Likert scale has acceptable internal consistency, and why?
- In SPSS reliability output, which table helps you judge whether items correlate consistently with one another, and what does it display?
- What conditions must be met for KR-20/21 to be valid, and how do those conditions differ from Cronbach’s alpha requirements?
Key Points
1. Internal consistency reliability assesses whether items within a single instrument are homogeneous and consistent with each other.
2. Cronbach’s alpha is best suited for Likert-type scales and other items without right/wrong scoring.
3. A Cronbach’s alpha of 0.7 or higher is generally treated as acceptable, with >0.9 indicating high internal consistency.
4. Cronbach’s alpha can be improved by increasing sample size, adding more items, and piloting the instrument (piloting improves reliability but does not determine it).
5. In SPSS, Cronbach’s alpha is obtained through Analyze → Scale → Reliability Analysis, and interpretation relies on the alpha value plus the inter-item correlation matrix.
6. KR-20/21 is intended for knowledge tests scored as correct/incorrect and requires two-category items with equal item weighting.