
Bootstrap One Way Analysis of Variance (ANOVA) using SPSS

Research With Fawad · 4 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Run one-way ANOVA in SPSS with a dependent variable (e.g., collaborative culture) and a categorical factor (e.g., job rank).

Briefing

Bootstrap one-way ANOVA in SPSS is presented as a practical workaround when group data violate normality, letting researchers still test whether perceptions differ across categories like job ranks. The core workflow starts with a standard one-way ANOVA setup (Analyze → Compare Means → One-Way ANOVA), using a dependent variable such as “collaborative culture” and a grouping factor such as job rank (junior, middle, senior). Before interpreting the main ANOVA test, the analysis checks homogeneity of variance (Levene's test in SPSS). If that test is non-significant, equal variances are treated as reasonable, and the analysis proceeds with the usual ANOVA main-effects table (and, if needed, post-hoc comparisons).
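The transcript works entirely in SPSS's menus, but the same check-then-test logic can be sketched outside SPSS. Below is a minimal Python illustration using SciPy's Levene and one-way ANOVA functions; the group names and scores are invented stand-ins for the video's "collaborative culture by job rank" example, not its actual data.

```python
# Sketch of the check-then-test workflow outside SPSS, using SciPy.
# Scores are made-up illustration data, not from the video.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical "collaborative culture" scores for three job ranks
junior = rng.normal(3.4, 0.6, 40)
middle = rng.normal(3.5, 0.6, 35)
senior = rng.normal(3.8, 0.6, 30)

# Step 1: homogeneity of variance (Levene's test, as SPSS reports)
lev_stat, lev_p = stats.levene(junior, middle, senior)

# Step 2: a non-significant Levene result supports the standard ANOVA
if lev_p > 0.05:
    f_stat, p_val = stats.f_oneway(junior, middle, senior)
    print(f"Levene p = {lev_p:.3f} (equal variances OK); ANOVA p = {p_val:.4f}")
else:
    print(f"Levene p = {lev_p:.3f}: use a variance-robust test (Welch) instead")
```

The branch mirrors the tutorial's decision rule: only interpret the equal-variance ANOVA table when the homogeneity check passes.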

In the example, the homogeneity-of-variance test is non-significant, so equal variances are assumed. The main-effects ANOVA result is reported as significant, though only marginally, with the p-value close to the 0.05 threshold; the conclusion is that collaborative culture perceptions may differ across at least some job ranks. The effect size is described as very small, signaling that even when differences exist, they are not large.

The session then switches to bootstrapping to make the inference more robust when the data are not normal. In SPSS's One-Way ANOVA dialog, the user opens the Bootstrap option, ticks Perform bootstrapping, and specifies a resampling count, typically 5,000 to 10,000; this walkthrough uses 10,500 samples. The bias-corrected and accelerated (BCa) confidence-interval method is chosen to stabilize interval estimates. For multiple comparisons, the method is aligned with the variance assumption: when equal variance is assumed, Tukey is used; when equal variance is not assumed, Games-Howell is used.
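SPSS handles the resampling internally once Perform bootstrapping is ticked. To make the mechanics concrete, here is a hand-rolled percentile bootstrap in Python for one pairwise mean difference; SPSS's BCa interval adds bias and acceleration corrections on top of this basic idea, and the data here are invented for illustration.

```python
# Hand-rolled percentile bootstrap for a senior-vs-junior mean difference.
# Illustrative data; SPSS's BCa method refines this basic percentile CI.
import numpy as np

rng = np.random.default_rng(0)
junior = rng.normal(3.4, 0.6, 40)   # hypothetical scores, not from the video
senior = rng.normal(3.8, 0.6, 30)

n_boot = 10_500                      # resample count used in the walkthrough
diffs = np.empty(n_boot)
for b in range(n_boot):
    j = rng.choice(junior, size=junior.size, replace=True)
    s = rng.choice(senior, size=senior.size, replace=True)
    diffs[b] = s.mean() - j.mean()   # one bootstrapped mean difference

lo, hi = np.percentile(diffs, [2.5, 97.5])   # 95% percentile CI
print(f"95% bootstrap CI for (senior - junior): [{lo:.3f}, {hi:.3f}]")
```

Each resample draws with replacement within each group, so the interval reflects the empirical sampling variability rather than a normal-theory formula.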

After bootstrapping, the homogeneity-of-variance check is repeated and again comes out non-significant, so the main-effects ANOVA is used within the bootstrap framework. The multiple-comparisons results are where bootstrapping changes the decision logic: the confidence intervals for pairwise mean differences are examined to see whether they cross zero. In the example, the bootstrapped multiple comparisons indicate that junior and senior differ (pair 1 vs 3), while the other pairwise contrasts do not show clear separation, judged by whether zero lies within the confidence interval. The mean differences remain similar, but the bootstrap-adjusted confidence-interval bounds determine which contrasts are treated as statistically significant.
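The zero-crossing rule itself is simple enough to express directly. The sketch below applies it to a set of hypothetical pairwise confidence intervals; the bounds are illustrative, not the video's output.

```python
# Decision rule: a pairwise mean difference is flagged significant when its
# bootstrapped confidence interval excludes zero. Bounds are illustrative.
pairwise_cis = {
    ("junior", "middle"): (-0.35, 0.12),
    ("junior", "senior"): (-0.62, -0.18),  # both bounds negative: excludes zero
    ("middle", "senior"): (-0.48, 0.04),
}

flags = {}
for pair, (lo, hi) in pairwise_cis.items():
    flags[pair] = not (lo <= 0.0 <= hi)    # True when zero is outside the CI
    verdict = "significant" if flags[pair] else "not significant"
    print(f"{pair[0]} vs {pair[1]}: CI [{lo:+.2f}, {hi:+.2f}] -> {verdict}")
```

Only the junior-vs-senior interval excludes zero here, matching the pattern the transcript describes for pair 1 vs 3.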

Finally, the transcript addresses the alternative scenario: if homogeneity of variance is significant (variance inequality), equal-variance ANOVA is no longer appropriate. In that case, the analysis uses Welch’s test for the main effect and switches post-hoc comparisons to Games-Howell under the “equal variance not assumed” condition. Bootstrapped multiple comparisons then identify which group pairs differ under the more conservative variance-robust approach. Overall, the method provides a clear decision tree in SPSS: bootstrap for non-normality, then choose ANOVA vs Welch and Tukey vs Games-Howell based on homogeneity of variance results, using bootstrapped confidence intervals to judge pairwise differences.
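SPSS produces Welch's test from a checkbox in the same dialog (under Options). As a cross-check on what that statistic actually is, here is a small Python implementation of Welch's (1951) F from the standard textbook formulas, run on invented unequal-variance groups; it is a sketch, not SPSS's code.

```python
# Welch's one-way ANOVA (variance-robust main effect), written out from the
# standard Welch (1951) formulas. Group data are invented for illustration.
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Return Welch's F statistic, degrees of freedom, and p-value."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = n / variances                        # precision weights
    W = w.sum()
    grand_mean = (w * means).sum() / W
    num = (w * (means - grand_mean) ** 2).sum() / (k - 1)
    lam = (((1 - w / W) ** 2) / (n - 1)).sum()
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * lam
    f = num / den
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * lam)
    p = stats.f.sf(f, df1, df2)
    return f, df1, df2, p

rng = np.random.default_rng(1)
junior = rng.normal(3.4, 0.4, 40)
middle = rng.normal(3.5, 0.9, 35)   # deliberately unequal spread
senior = rng.normal(3.8, 1.3, 30)
f, df1, df2, p = welch_anova(junior, middle, senior)
print(f"Welch F({df1}, {df2:.1f}) = {f:.3f}, p = {p:.4f}")
```

For two groups this reduces exactly to the squared Welch t-test, which is a handy sanity check on the implementation.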

Cornell Notes

The walkthrough shows how to run one-way ANOVA with bootstrapping in SPSS when group data are not normal. The process begins with Analyze → Compare Means → One-Way ANOVA, selecting the dependent variable (e.g., “collaborative culture”) and the grouping factor (e.g., job rank: junior, middle, senior). After checking homogeneity of variance, equal variances lead to standard ANOVA and Tukey-style multiple comparisons; unequal variances lead to Welch’s test and Games-Howell. Bootstrapping is configured with 10,500 resamples and bias-corrected and accelerated (BCa) 95% confidence intervals, and pairwise differences are judged by whether bootstrapped confidence intervals include zero. This makes inference more robust to non-normality while keeping variance assumptions aligned with the chosen test.

Why check homogeneity of variance before interpreting ANOVA results, and how does it change the test choice in SPSS?

Homogeneity of variance determines whether equal-variance assumptions are reasonable. When the homogeneity-of-variance test is non-significant, equal variances are assumed, so the analysis uses the standard ANOVA main-effects table. If the homogeneity test is significant (indicating unequal variances), equal-variance ANOVA is no longer appropriate; the workflow switches to Welch’s test for the main effect and uses post-hoc comparisons that do not assume equal variances.

How does bootstrapping help when the data are not normal in one-way ANOVA?

Bootstrapping replaces reliance on normal-theory assumptions by repeatedly resampling the data and building empirical confidence intervals for effects. In the walkthrough, bootstrapping is turned on in the One-Way ANOVA dialog with 10,500 samples and bias-corrected and accelerated (BCa) confidence intervals at 95%. This yields more robust inference for main effects and pairwise comparisons under non-normality.

What role do bootstrapped confidence intervals play in deciding which group pairs differ?

Pairwise multiple comparisons are interpreted by checking whether the bootstrapped confidence interval for the mean difference includes zero. If zero is not between the lower and upper bounds (e.g., both bounds are negative), the difference is treated as statistically significant. The transcript notes that mean differences can look similar across outputs, but the bootstrap confidence interval bounds determine significance.

When equal variance is assumed, which multiple-comparison method is used, and what changes when it is not assumed?

With equal variance assumed (homogeneity test non-significant), the workflow uses Tukey for multiple comparisons. When equal variance is not assumed (homogeneity test significant), it switches to Games-Howell, including in the bootstrapped multiple-comparison setup.

What is the practical decision rule for switching from ANOVA to Welch’s test?

If the homogeneity of variance test is significant, the equal-variance ANOVA table should not be used for the main effect. Instead, Welch’s test is selected so the main-effects inference remains valid under unequal variances.

Review Questions

  1. In SPSS one-way ANOVA with bootstrapping, what specific homogeneity-of-variance outcome triggers using Welch’s test instead of the standard ANOVA table?
  2. How do you determine whether a pairwise group difference is significant using bootstrapped multiple comparisons?
  3. Which multiple-comparison method corresponds to equal variances assumed versus equal variances not assumed (and how does bootstrapping fit into that choice)?

Key Points

  1. Run one-way ANOVA in SPSS with a dependent variable (e.g., collaborative culture) and a categorical factor (e.g., job rank).

  2. Check homogeneity of variance first; a non-significant result supports equal-variance assumptions, while a significant result requires variance-robust alternatives.

  3. Enable bootstrapping when data are not normal, using a large resample count (the walkthrough uses 10,500) and BCa 95% confidence intervals.

  4. Interpret bootstrapped pairwise comparisons by whether the confidence interval for the mean difference crosses zero.

  5. Use Tukey for multiple comparisons when equal variance is assumed; use Games-Howell when it is not.

  6. If homogeneity of variance is violated, use Welch’s test for the main effect rather than the standard ANOVA table.

  7. Pairwise conclusions can change under bootstrapping because confidence-interval bounds, not just raw mean differences, determine significance.

Highlights

Bootstrapping in SPSS one-way ANOVA is configured with 10,500 resamples and bias-corrected and accelerated (BCa) 95% confidence intervals to handle non-normal data.
Bootstrapped multiple comparisons hinge on whether the confidence interval for each pairwise mean difference includes zero.
A variance check drives the test selection: equal variances → standard ANOVA + Tukey; unequal variances → Welch’s test + Games-Howell.
Even when mean differences look similar, bootstrapped confidence interval bounds determine which group pairs are statistically different.
