Statistics for Research - Lesson 31 - One Way ANOVA - Theory and Practice in SPSS (v29)
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Use one-way ANOVA when comparing the means of three or more independent groups on a single continuous dependent variable.
Briefing
One-way ANOVA is the go-to method for testing whether three or more independent groups have meaningfully different mean scores on a single outcome, provided the outcome is measured on a continuous scale. Where an independent-samples t test compares the means of two groups, analysis of variance (ANOVA) extends the same "compare means" logic to three or more. The core goal is an overall check: do the group means differ enough to suggest they come from different populations, rather than reflecting random sampling noise?
The assumptions drive which ANOVA variant and which follow-up comparisons are valid. The dependent variable must be continuous (interval or ratio). Observations must be independent: no participant can appear in more than one group. The outcome should be normally or near-normally distributed; if normality fails, bootstrapping is flagged as a later remedy. The data should be free of influential outliers, and the spread of scores should be roughly equal across groups. In SPSS (or R), homogeneity of variance is assessed with Levene's test. If Levene's test is not significant, equal-variance ANOVA procedures can be used; if it is significant, the equal-variance assumption breaks down and Welch's ANOVA becomes the safer choice.
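As a concrete sketch of the homogeneity check (in Python with SciPy rather than SPSS, and with simulated scores; the `junior`, `middle`, and `senior` arrays are made-up data, not the transcript's dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Three independent groups of hypothetical scores; the third group is
# given a deliberately larger spread to illustrate the unequal-variance case.
junior = rng.normal(4.8, 1.0, size=40)
middle = rng.normal(5.0, 1.0, size=35)
senior = rng.normal(5.3, 2.5, size=30)

# Levene's test: H0 = the group variances are equal.
# SPSS centers the test on the group means; SciPy defaults to the median
# (the Brown-Forsythe variant), so request center="mean" to match SPSS.
stat, p = stats.levene(junior, middle, senior, center="mean")

if p < 0.05:
    print(f"Levene p = {p:.3f}: variances unequal -> use Welch's ANOVA")
else:
    print(f"Levene p = {p:.3f}: variances roughly equal -> standard ANOVA")
```

The decision branch mirrors the workflow above: a significant Levene result routes the analysis to Welch's ANOVA instead of the equal-variance F test.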
Group size balance matters because unequal sizes can amplify problems with variance equality and normality. The method follows a two-stage workflow. Stage one runs an overall hypothesis test across all groups; a significant result triggers stage two, where post hoc comparisons identify which specific group pairs differ. Post hoc tests function like multiple t tests but control the error rate across many comparisons. The transcript distinguishes the statistics: a t test focuses on the mean difference between two groups, while the F test in ANOVA evaluates whether the variability among group means is large relative to the within-group variability, capturing differences across multiple groups at once.
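The "between vs. within variability" idea behind the F statistic can be verified numerically. The sketch below (Python/SciPy, with invented three-group data) computes the between- and within-group mean squares by hand and checks the resulting F against SciPy's one-way ANOVA:

```python
import numpy as np
from scipy import stats

# Illustrative scores for three groups (hypothetical, not the
# transcript's vision/communication data).
groups = [
    np.array([4.2, 5.1, 4.8, 5.0, 4.6]),
    np.array([5.3, 5.0, 5.6, 4.9, 5.4]),
    np.array([5.9, 6.1, 5.4, 6.0, 5.7]),
]

k = len(groups)                        # number of groups
n_total = sum(len(g) for g in groups)  # total sample size
grand_mean = np.concatenate(groups).mean()

# Between-group variability: how far the group means sit from the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group variability: spread of scores around their own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)          # df1 = k - 1
ms_within = ss_within / (n_total - k)      # df2 = N - k
F = ms_between / ms_within                 # large F: means differ more than
                                           # within-group noise would explain

F_scipy, p = stats.f_oneway(*groups)
print(f"manual F = {F:.4f}, scipy F = {F_scipy:.4f}, p = {p:.4f}")
```

The two F values agree, which is the point: ANOVA's overall test is nothing more than this ratio of between-group to within-group mean squares.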
A practical SPSS walkthrough illustrates the decision rules. For "vision" communication scores across junior, middle, and senior job ranks, the means rise across groups (junior 4.84, middle 5.05, senior 5.31), but Levene's test is non-significant and the ANOVA main effect is also non-significant (F ≈ 1.84, p ≈ 0.160). With no significant overall effect, post hoc pairwise tests are unnecessary. The transcript then shows a contrasting scenario using "development initiatives," where Levene's test is significant, so Welch's ANOVA is used. Welch's test is significant (W ≈ 4.76), and post hoc comparisons proceed with Games–Howell (appropriate for unequal variances). In that example, junior differs significantly from senior, while the junior vs. middle and middle vs. senior comparisons are not significant.
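SciPy has no built-in Welch's ANOVA, so the sketch below implements the textbook formula directly; the `welch_anova` helper is our own, and the three groups are simulated with unequal spreads to mimic the "development initiatives" scenario (the numbers are not the transcript's data):

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedasticity-robust one-way ANOVA (textbook formula).

    Returns (F, df1, df2, p); SPSS reports this statistic in its
    "Robust Tests of Equality of Means" table.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])

    w = n / v                              # precision weights: n_i / s_i^2
    mw = np.sum(w * m) / np.sum(w)         # variance-weighted grand mean

    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp

    F = a / b
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)         # adjusted denominator df
    p = stats.f.sf(F, df1, df2)
    return F, df1, df2, p

# Hypothetical groups with unequal variances (seniors more spread out).
rng = np.random.default_rng(7)
junior = rng.normal(4.6, 0.8, 45)
middle = rng.normal(5.0, 1.2, 30)
senior = rng.normal(5.4, 2.0, 25)

F, df1, df2, p = welch_anova(junior, middle, senior)
print(f"Welch W({df1}, {df2:.1f}) = {F:.2f}, p = {p:.4f}")
```

A useful sanity check: with only two groups, Welch's ANOVA reduces exactly to the squared Welch t test (`scipy.stats.ttest_ind` with `equal_var=False`), with matching p values.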
Finally, reporting guidance ties results to the correct statistics: when equal variances hold, report the ANOVA F statistic and p value; when they don’t, report Welch’s statistic and p value, then report only the significant pairwise differences from the chosen post hoc test (Tukey for equal variances, Games–Howell or similar for unequal variances). The takeaway is procedural: check assumptions first, choose the correct ANOVA engine, and only run the post hoc comparisons that the assumption structure and overall significance justify.
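The reporting rule above can be condensed into a small decision helper. This is only a sketch of the workflow's logic; `choose_tests` is a made-up name, not an SPSS or SciPy function:

```python
def choose_tests(levene_p, alpha=0.05):
    """Map the Levene's-test result to the statistic and post hoc
    procedure this workflow says to report."""
    if levene_p < alpha:
        # Equal variances rejected: report Welch's W, follow with Games-Howell.
        return {"omnibus": "Welch's ANOVA (report W and p)",
                "post_hoc": "Games-Howell"}
    # Equal variances tenable: report the standard F, follow with Tukey.
    return {"omnibus": "one-way ANOVA (report F and p)",
            "post_hoc": "Tukey HSD"}

print(choose_tests(0.003))  # significant Levene -> Welch + Games-Howell
print(choose_tests(0.41))   # non-significant Levene -> F test + Tukey
```

Either way, post hoc results are reported only when the omnibus test is significant, and only the significant pairwise differences (with group means and SDs) make it into the results paragraph.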
Cornell Notes
One-way ANOVA tests whether the mean of a continuous dependent variable differs across three or more independent groups. It uses a two-stage process: an overall test first, then post hoc pairwise comparisons only if the overall result is significant. Key assumptions include independence of observations, near-normality of the dependent variable, limited problematic outliers, and homogeneity of variance assessed with Levene’s test. If Levene’s test is not significant, equal-variance ANOVA and Tukey post hoc comparisons are appropriate; if Levene’s test is significant, Welch’s ANOVA and Games–Howell (or similar) post hoc tests are used. Correct reporting depends on which test statistic (F vs. Welch W) and which post hoc procedure were justified by the assumption checks.
Why does one-way ANOVA replace an independent-samples t test when there are three or more groups?
What assumption checks determine whether to use equal-variance ANOVA or Welch’s ANOVA?
What is the two-stage logic for interpreting one-way ANOVA results?
How do t tests and ANOVA’s F test differ in what they measure?
Which post hoc tests are recommended under equal vs. unequal variance conditions in the SPSS workflow described?
How should results be reported when equal variances are violated?
Review Questions
- If Levene’s test is significant, what ANOVA statistic and which post hoc method should be used according to the workflow?
- In a one-way ANOVA with three groups, when is it appropriate to run post hoc pairwise comparisons?
- What information from SPSS output is needed to write a results paragraph for one-way ANOVA (including means/SDs and the correct test statistic)?
Key Points
1. Use one-way ANOVA when comparing the means of three or more independent groups on a single continuous dependent variable.
2. Check independence of observations (no participant belongs to more than one group) before interpreting any ANOVA results.
3. Assess homogeneity of variance with Levene's test; it determines whether equal-variance ANOVA or Welch's ANOVA is appropriate.
4. Interpret one-way ANOVA in two stages: overall significance first, then post hoc pairwise tests only if the overall test is significant.
5. Choose post hoc tests based on variance assumptions: Tukey when equal variances are assumed; Games–Howell when equal variances are not assumed.
6. When equal-variance assumptions fail, report Welch's statistic (not the F from equal-variance ANOVA) and report only the significant pairwise differences with their means and standard deviations.