09. SPSS Classroom - Conceptualizing, Analyzing, Reporting Independent Samples T Test
Based on the Research With Fawad video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
Independent-samples t tests compare the means of two independent groups when population means are unknown.
Briefing
Independent-samples t tests are built for one job: comparing the means of two separate groups when the population means are unknown and can't be measured directly. Instead of trying to collect data from an entire population, researchers draw random samples from two independent groups (such as male vs. female employees, permanent vs. daily-wage workers, or graduates from two business schools) and test whether the groups' average outcomes differ. The method is used across practical settings, from economists comparing per-capita income across regions to market researchers evaluating which territory yields higher sales or expenses.
The transcript emphasizes that the two groups must be independent and that the outcome being compared (the "dependent variable") must be measured on a numerical scale, specifically interval or ratio. Independence is treated as the most critical assumption: no observation in one group can appear in the other, because even mild non-independence can push p-values downward and create false signals of significance. A second key assumption is normality of the dependent variable within each group. While the test is fairly robust to mild departures from normality, especially with large sample sizes, the transcript still outlines how to check normality using skewness and kurtosis ranges.
A worked example in SPSS walks through a study comparing self-efficacy scores between male and female public-sector workers. Self-efficacy is treated as the dependent variable, measured via Likert-style responses (1 to 5) across several items, later combined into a composite score. The null hypothesis states no difference in average self-efficacy between genders, while the alternative hypothesis allows for a significant difference.
Before running the t test, the transcript explains how to verify assumptions. Independence is ensured by design: male and female groups are mutually exclusive. For normality, skewness and kurtosis values are checked against acceptable ranges; in the example, skewness is around -0.93 and kurtosis around 2.7, both within the stated bounds.
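For intuition on what those skewness and kurtosis numbers measure, here is a minimal pure-Python sketch using the simple moment-based estimates (the scores below are invented, not the study's data, and SPSS reports slightly different bias-adjusted versions of these statistics):

```python
def moments_check(scores):
    """Sample skewness and excess kurtosis from raw moments.

    These are the simple moment-based estimates; SPSS reports
    bias-adjusted versions, so values differ slightly for small n.
    """
    n = len(scores)
    mean = sum(scores) / n
    m2 = sum((x - mean) ** 2 for x in scores) / n
    m3 = sum((x - mean) ** 3 for x in scores) / n
    m4 = sum((x - mean) ** 4 for x in scores) / n
    skewness = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3
    return skewness, excess_kurtosis

# Hypothetical Likert-style composite scores, for illustration only.
skew, kurt = moments_check([1, 2, 3, 4, 5])
# A perfectly symmetric sample has skewness 0; in practice the computed
# values are compared against the acceptable ranges the transcript cites.
```

Because the example data are symmetric, the skewness here comes out exactly zero; real composite scores, like the -0.93 in the transcript, rarely do.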
In SPSS, the analysis is performed under Analyze → Compare Means → Independent-Samples T Test, with self-efficacy as the test variable and gender as the grouping variable (coded 1 for male and 2 for female). Output begins with group statistics: sample sizes (260 males, 92 females), means (about 3.52 for males and 3.7 for females), and standard deviations. The decision about statistical significance comes from the independent-samples t test table, which first uses Levene’s test to determine whether equal variances can be assumed. With Levene’s significance reported as below 0.05, equal variances are not assumed, so the “equal variances not assumed” row is used.
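To see what the Levene column in that output is doing, here is a minimal sketch of Levene's statistic for two groups, computed as a one-way ANOVA F statistic on absolute deviations from each group's mean (the data are invented; the p-value SPSS reports would come from an F(1, N-2) distribution):

```python
def levene_w(group_a, group_b):
    """Levene's W for two groups: an ANOVA F statistic computed on
    absolute deviations from each group's own mean."""
    groups = [group_a, group_b]
    # Absolute deviations from each group's mean (mean-centered variant).
    z = [[abs(x - sum(g) / len(g)) for x in g] for g in groups]
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    z_bars = [sum(zi) / len(zi) for zi in z]
    grand = sum(sum(zi) for zi in z) / n_total
    between = sum(len(zi) * (zb - grand) ** 2 for zi, zb in zip(z, z_bars))
    within = sum((v - zb) ** 2 for zi, zb in zip(z, z_bars) for v in zi)
    return ((n_total - k) / (k - 1)) * between / within

# Invented data: the second group's scores are more spread out.
w = levene_w([1, 2, 3], [4, 6, 8])
# A large W (small p from F(1, N-2)) signals unequal variances,
# i.e., read the "equal variances not assumed" row in SPSS.
```

The logic mirrors the SPSS decision rule: Levene's p below 0.05 means the equal-variance assumption fails, so the second row of the t-test table applies.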
The resulting two-tailed p-value is 0.036, which is below 0.05, so the null hypothesis is rejected at the 5% level: male and female self-efficacy differ significantly. The transcript also interprets the mean difference and the 95% confidence interval, noting that significance aligns with the confidence interval not crossing zero. Finally, it discusses one-tailed testing for directional hypotheses, showing how the one-tailed p-value can be derived by halving the two-tailed value (0.036/2 = 0.018), leading to the conclusion that female self-efficacy is higher than male self-efficacy—while also flagging a minor labeling/typing slip in the written interpretation.
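The "equal variances not assumed" row corresponds to Welch's t test. A minimal sketch of its statistic and Welch-Satterthwaite degrees of freedom follows (the scores are invented; the p-value itself comes from a t distribution with df degrees of freedom, which SPSS supplies):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    matching SPSS's 'equal variances not assumed' row."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2  # squared standard error of the difference
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t([1, 2, 3], [2, 4, 6])  # invented scores
# For a directional hypothesis in the predicted direction, the one-tailed
# p is half the two-tailed p, as in the transcript: 0.036 / 2 = 0.018.
one_tailed = 0.036 / 2
```

Note that halving is only valid when the observed difference points in the hypothesized direction, as it does in the example (females scoring higher).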
Cornell Notes
Independent-samples t tests compare the means of two independent groups when population means are unknown. The method requires independence (no overlap between groups), a numerical outcome measured on interval/ratio scales, and approximate normality within each group (checked via skewness and kurtosis; mild non-normality is often acceptable, especially with large samples). In the SPSS example, self-efficacy scores for male (n=260) and female (n=92) public-sector workers are compared. Levene’s test determines whether to assume equal variances; with Levene’s p < 0.05, the “equal variances not assumed” t-test row is used. The two-tailed p-value (0.036) leads to rejecting the null hypothesis of equal means, and the group means indicate females score higher.
Why does independence matter so much in an independent-samples t test?
How do researchers decide whether to use the “equal variances assumed” or “not assumed” t-test row in SPSS?
What normality checks are used in the example, and what do the values mean?
How is the decision made from the t-test output in the self-efficacy example?
When would a one-tailed test be appropriate, and how is its p-value obtained here?
Review Questions
- What specific conditions must hold for the two groups in an independent-samples t test, and why would violating them distort p-values?
- In SPSS output, how does Levene’s test determine which t-test row to use, and what threshold is used?
- How do you interpret a 95% confidence interval for the mean difference when deciding whether to reject the null hypothesis?
Key Points
1. Independent-samples t tests compare the means of two independent groups when population means are unknown.
2. Group independence is the most critical assumption; overlap or dependence can make p-values too small.
3. The dependent variable should be numerical and measured on interval or ratio scales.
4. Normality can be checked using skewness and kurtosis; mild non-normality is often tolerable, especially with large samples.
5. In SPSS, Levene's test determines whether equal variances are assumed; Levene's p < 0.05 means they are not.
6. Significance is judged using the appropriate p-value (two-tailed by default, one-tailed for directional hypotheses).
7. A 95% confidence interval for the mean difference that excludes zero aligns with rejecting the null hypothesis.