
07. SPSS Classroom - Analyzing and Reporting One Sample T Test in SPSS

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

A one-sample t test compares a single sample mean to a specified benchmark (test value) and is used when population standard deviation isn’t known exactly.

Briefing

A one-sample t test is the go-to method when researchers have a single sample and want to check whether its mean matches a known population benchmark—like a company’s advertised product lifetime or a school’s claimed graduate salary. It matters because it turns a claim about “average equals X” into a statistical decision using a t statistic and a p-value, letting analysts quantify whether observed differences are likely due to chance.

The session frames the one-sample t test as the sample-based counterpart to the z test. Both rely on the same general logic—comparing a sample mean to a standard value—but they differ in assumptions. Z tests typically require known population parameters (mean and standard deviation), while t tests are used when those population values aren’t known exactly. In real-world settings, analysts often only have a benchmark value (for example, a claimed mean) and must estimate variability from the sample, making the t test more practical. Sample size guidance is also given: results from t tests tend to align closely with z tests when sample sizes are around 30–40, so the method doesn’t demand huge datasets.
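The z-versus-t distinction described above can be sketched in a few lines of Python (standard library only). The data and the "known" population SD of 17 are made up purely for illustration; the only substantive difference between the two statistics is where the variability in the denominator comes from:

```python
import math
import statistics

def z_statistic(sample, pop_mean, pop_sd):
    """z test: the population standard deviation is known in advance."""
    n = len(sample)
    return (statistics.mean(sample) - pop_mean) / (pop_sd / math.sqrt(n))

def t_statistic(sample, test_value):
    """One-sample t test: variability is estimated from the sample itself,
    using the sample standard deviation (n - 1 denominator)."""
    n = len(sample)
    s = statistics.stdev(sample)  # sample SD stands in for the unknown population SD
    return (statistics.mean(sample) - test_value) / (s / math.sqrt(n))

# Hypothetical product lifetimes (hours) tested against an advertised mean of 1000
lifetimes = [980, 1012, 995, 1003, 968, 990, 1021, 975, 988, 1007]
print(round(t_statistic(lifetimes, 1000), 3))
print(round(z_statistic(lifetimes, 1000, 17), 3))
```

With around 30–40 observations the two statistics give nearly identical answers, which is the sample-size guidance the session mentions.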

Three t-test variants are introduced to clarify when each applies: one-sample t test (one sample vs. a population mean), independent-samples t test (two independent groups), and dependent (paired) t test (the same subjects measured twice). The focus then narrows to the one-sample t test, used for questions like whether regional per capita income equals a national average, whether a product’s household penetration meets a target level, or whether manufacturing dimensions have drifted from original specifications.

Validity hinges on assumptions. Independence of observations is treated as the most critical: if observations are linked (for instance, if one person’s outcome influences another’s), p-values can become misleadingly small, increasing the risk of false “significant” findings. The variable should be measured on an interval or ratio (numerical) scale. Normality is also required for fully reliable inference, but the session emphasizes that the t test is fairly robust to mild departures—especially with moderate or large samples.

An SPSS walkthrough grounds these ideas in a concrete example. A business school claims its graduates earn an average of 750,000 rupees per year. A sample of 30 graduates is collected, and the null hypothesis is set as “the population mean equals 750,000.” Before running the test, SPSS is used to check normality via skewness and kurtosis from descriptive statistics; the reported skewness is -0.263 and kurtosis is -0.744, both within commonly cited acceptable ranges. Then SPSS runs the one-sample test under “Analyze → Compare Means → One-Sample T Test,” using 750,000 as the test value.

The output shows a sample mean of 691.584 (n = 30) and a two-tailed p-value displayed as .000, meaning p < 0.001 (t = -6.184, df = 29); SPSS rounds to three decimals, so the true p-value is small but never exactly zero. With a 5% significance level, p < 0.05 leads to rejection of the null hypothesis. The conclusion is straightforward: the graduates’ average salary differs significantly from the advertised 750,000, and because the sample mean is lower, the claim does not hold.
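The same decision logic can be reproduced outside SPSS. The sketch below uses hypothetical salary data (the video's raw data are not shown) in thousands of rupees, and the standard two-tailed critical value t ≈ 2.045 for df = 29 at α = 0.05:

```python
import math
import statistics

def one_sample_t(sample, test_value):
    """Return the one-sample t statistic and its degrees of freedom."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
    return (statistics.mean(sample) - test_value) / se, n - 1

# Hypothetical salaries (thousands of rupees) for 30 graduates — illustration only
salaries = [652, 700, 688, 710, 640, 675, 705, 690, 660, 715,
            680, 695, 672, 708, 655, 685, 698, 667, 702, 678,
            692, 645, 712, 683, 670, 699, 662, 706, 687, 676]

t, df = one_sample_t(salaries, 750)  # test value: the claimed mean of 750 (thousand)
CRITICAL = 2.045                     # two-tailed critical t for df = 29, alpha = 0.05
if abs(t) > CRITICAL:
    direction = "below" if statistics.mean(salaries) < 750 else "above"
    print(f"t = {t:.3f}, df = {df}: reject H0; sample mean is {direction} the claim")
else:
    print(f"t = {t:.3f}, df = {df}: fail to reject H0")
```

The direction check mirrors the interpretation step in the session: significance comes from |t| (or the p-value), but whether the claim is over- or understated comes from comparing the sample mean to the test value.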

Cornell Notes

A one-sample t test checks whether the mean of a single sample equals a known benchmark (the population mean value used as the “test value”). It’s preferred over a z test when population standard deviation (and sometimes the population mean) isn’t known, because the t test estimates variability from the sample. Key assumptions include independence of observations, a numerical (interval/ratio) measurement scale, and approximate normality (with the t test being robust to mild non-normality, especially around moderate sample sizes). In the SPSS example, 30 graduates are tested against a claimed mean salary of 750,000 rupees. Skewness (-0.263) and kurtosis (-0.744) support normality, and the one-sample t test yields p < 0.05, so the claim is rejected because the sample mean (691.584) is significantly lower than 750,000.

When should analysts use a one-sample t test instead of a z test?

Use a one-sample t test when comparing one sample mean to a known benchmark while population parameters—especially the population standard deviation—aren’t known exactly. The session contrasts this with z tests, which rely on known population mean and standard deviation. In many real-life cases, analysts can specify a standard value tied to a claim, but cannot compute exact population variability, so the t test is the practical choice.

Why is independence of observations treated as the most important assumption?

If observations aren’t independent, the t test can produce p-values that are too small, increasing the chance of a false positive—declaring a statistically significant difference when none exists. Independence means one observation provides no information about another. The example given is that each graduate’s salary in the sample should be unrelated to other graduates’ salaries.
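This failure mode can be demonstrated with a small Monte Carlo sketch (my own illustration, not from the video). Here the null hypothesis is true, but each of 15 independent observations is counted twice, so the test "sees" 30 observations and underestimates the standard error; the false-positive rate climbs well above the nominal 5%:

```python
import math
import random
import statistics

random.seed(42)

def t_stat(sample, mu0):
    n = len(sample)
    return (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))

MU0, CRITICAL = 100.0, 2.045  # true mean equals H0; two-tailed critical t for df = 29
reps, false_positives = 2000, 0
for _ in range(reps):
    independent = [random.gauss(MU0, 15) for _ in range(15)]
    duplicated = independent * 2  # 30 "observations", but only 15 are independent
    if abs(t_stat(duplicated, MU0)) > CRITICAL:
        false_positives += 1

print(f"False-positive rate: {false_positives / reps:.3f} (nominal level: 0.05)")
```

The simulated rejection rate lands far above 0.05 even though the null is true, which is exactly the "misleadingly small p-values" problem the session warns about.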

How does normality get checked in the SPSS workflow described?

Normality is assessed using skewness and kurtosis from SPSS descriptive statistics. The session instructs using Analyze → Descriptive Statistics → Descriptives, then selecting options for skewness and kurtosis. The example reports skewness = -0.263 and kurtosis = -0.744, both within cited acceptable ranges, leading to the assumption of approximate normality.
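For readers without SPSS, the same two statistics can be computed by hand. The formulas below are the bias-adjusted (Fisher–Pearson) versions that, to my understanding, SPSS and Excel report; the data list is hypothetical, and the ±1 (sometimes ±2) cutoff is a common rule of thumb rather than a figure from the video:

```python
import statistics

def skewness(data):
    """Bias-adjusted sample skewness (believed to match SPSS output)."""
    n, mean, s = len(data), statistics.mean(data), statistics.stdev(data)
    return (n / ((n - 1) * (n - 2))) * sum(((x - mean) / s) ** 3 for x in data)

def excess_kurtosis(data):
    """Bias-adjusted excess kurtosis (0 for a normal distribution)."""
    n, mean, s = len(data), statistics.mean(data), statistics.stdev(data)
    m4 = sum(((x - mean) / s) ** 4 for x in data)
    return ((n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3)) * m4
            - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

# Hypothetical sample; values within roughly +/-1 are commonly read as
# compatible with approximate normality.
data = [52, 48, 55, 61, 47, 58, 50, 63, 49, 57, 54, 51, 60, 46, 56]
print(round(skewness(data), 3), round(excess_kurtosis(data), 3))
```

A perfectly symmetric sample gives a skewness of exactly zero, which is a quick sanity check on the implementation.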

How are hypotheses set up for the business school salary claim?

The null hypothesis is that the population mean salary equals the advertised benchmark: μ = 750,000 rupees. The alternative is that the mean differs from 750,000 (a two-tailed test). After computing the sample mean and running the t test, a small p-value leads to rejecting the null.

How should the t test result be interpreted beyond just the p-value?

The p-value determines whether the difference is statistically significant, but the direction comes from the sample mean relative to the test value. In the example, the sample mean is 691.584, which is below the claimed 750,000. With p < 0.05, the claim is rejected and the lower mean indicates the advertised figure is overstated.

Review Questions

  1. In what situations would a dependent (paired) t test be more appropriate than a one-sample t test?
  2. What specific failure mode occurs when the independence assumption is violated in a one-sample t test?
  3. In the SPSS example, which normality statistics were used, and what decision followed from their values?

Key Points

  1. A one-sample t test compares a single sample mean to a specified benchmark (test value) and is used when the population standard deviation isn’t known exactly.

  2. Z tests and t tests share the same comparison logic, but they rely on different assumptions about known population parameters.

  3. Three t-test types exist—one-sample, independent-samples, and dependent (paired)—and the correct choice depends on study design.

  4. Independence of observations is the most critical assumption; violating it can make p-values artificially small and inflate false positives.

  5. The measurement variable should be numerical on an interval or ratio scale.

  6. Normality can be checked in SPSS using skewness and kurtosis; the t test is robust to mild departures, especially with moderate sample sizes.

  7. In the salary example, SPSS produced p < 0.05 and a sample mean below the test value, leading to rejection of the business school’s 750,000-rupee claim.

Highlights

The one-sample t test is designed for “mean equals X” claims when only one sample is available and population variability isn’t known.
Independence violations can distort inference by producing p-values that are too small, raising the risk of false significance.
SPSS normality checks used skewness (-0.263) and kurtosis (-0.744) before running the t test.
With n = 30, t = -6.184 (df = 29) and a p-value displayed as .000 (i.e., p < 0.001) led to rejecting the null that the mean salary equals 750,000 rupees.
The conclusion depended on both significance (p < 0.05) and direction (sample mean 691.584 < 750,000).
