
# Regression Analysis using SPSS: How to Run, Interpret, and Report the Regression Results in SPSS

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Regression analysis quantifies how much variance in a dependent variable is explained by one or more independent variables.

Briefing

Regression analysis is used to measure how strongly one dependent variable relates to one or more independent variables—and to quantify how much variance in the dependent variable can be explained. The transcript distinguishes two common setups: bivariate regression, which involves two variables (one dependent, one independent), and multiple regression, which includes three or more variables (one dependent plus multiple independents). In both cases, the practical goal is the same: test whether a predictor has a statistically significant impact and report the results in a clear, standard format.
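In standard notation (not written out in the transcript), the two setups correspond to the models Y = b0 + b1*X + e for the bivariate case and Y = b0 + b1*X1 + b2*X2 + b3*X3 + e for a three-predictor multiple regression, where Y is the dependent variable, the X terms are the independent variables, the b terms are the regression weights, and e is the error term.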

The example centers on life satisfaction as the dependent variable and servant leadership as the independent variable. The workflow in SPSS starts by navigating to Analyze → Regression → Linear, then selecting life satisfaction as the dependent variable and servant leadership as the independent variable. After running the model, the output is interpreted through several key tables. The Model Summary section provides R, R Square, and Adjusted R Square. In the bivariate example, R Square is reported as 0.276, meaning 27.6% of the variation in life satisfaction is accounted for by servant leadership. Whether that explained variance is meaningful is tested using the ANOVA table: the regression row’s significance value is effectively 0 (reported as less than 0.01), indicating the overall regression model is statistically significant.
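The transcript works entirely in the SPSS point-and-click interface, but the same bivariate model can be sketched in Python with statsmodels for readers who want to reproduce the Model Summary and ANOVA quantities. The data file and column names below (survey.csv, life_satisfaction, servant_leadership) are hypothetical placeholders, not part of the original material.

```python
# Minimal sketch of the bivariate regression described above (assumed column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical data file, one row per respondent

model = smf.ols("life_satisfaction ~ servant_leadership", data=df).fit()

print(model.rsquared, model.rsquared_adj)  # R Square / Adjusted R Square (Model Summary)
print(model.fvalue, model.f_pvalue)        # F and Sig. (ANOVA table, overall model test)
```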

To interpret the direction and strength of the relationship, the Coefficients table is used. With only one independent variable, the standardized beta equals the bivariate correlation between the predictor and the outcome, and the t statistic is compared against the usual cutoff (1.96 for a two-tailed test at the 0.05 level). The transcript notes a t value of 9.143, which exceeds 1.96, supporting the conclusion that servant leadership has a significant positive effect on life satisfaction. For reporting, the transcript recommends copying the key statistics into a results table: the regression weight (the unstandardized coefficient, B), the standardized beta, R Square, the F statistic, and the p value.
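Continuing the Python sketch above (an illustration, not the SPSS output itself), the Coefficients-table quantities can be read directly off the fitted model:

```python
# Coefficients-table quantities for the single predictor (column name is assumed).
print(model.params["servant_leadership"])   # unstandardized B (the regression weight)
print(model.tvalues["servant_leadership"])  # t statistic, compared against the 1.96 cutoff
print(model.pvalues["servant_leadership"])  # predictor-level p value
```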

The process then expands to multiple regression by adding additional independent variables (three predictors in the example). Running Analyze → Regression → Linear again with all predictors increases the model’s explanatory power: the F value rises and R Square increases to 0.581, implying 58.1% of life satisfaction variance is explained by the set of predictors. The transcript emphasizes a distinction between overall model significance and individual predictor significance. The ANOVA significance indicates the model as a whole is significant, but determining which predictors matter requires examining each predictor’s coefficients—especially the t values and p values from the coefficients table. Finally, the same reporting logic applies: copy the relevant coefficients and model statistics into the hypothesis-specific results format (H1, H2, H3, and so on), using the appropriate F and p values for each regression run. The end result is a repeatable SPSS routine for running, interpreting, and writing up regression findings.
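The multiple-regression step can be sketched the same way. The transcript does not name the additional predictors, so predictor_2 and predictor_3 below are placeholders:

```python
# Three-predictor sketch, continuing from the bivariate example above.
multi = smf.ols(
    "life_satisfaction ~ servant_leadership + predictor_2 + predictor_3",
    data=df,
).fit()

print(multi.rsquared)                # overall explained variance (0.581 in the example)
print(multi.fvalue, multi.f_pvalue)  # overall model significance (ANOVA)
print(multi.summary())               # per-predictor B, standard errors, t, and p values
```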

Cornell Notes

The transcript lays out a practical SPSS workflow for regression analysis, starting with bivariate regression (one independent variable) and extending to multiple regression (several independent variables). In the example, life satisfaction is the dependent variable and servant leadership is the predictor. For bivariate regression, R Square is 0.276, meaning 27.6% of variance in life satisfaction is explained, and ANOVA significance is reported as 0 (<0.01), indicating a statistically significant model. The Coefficients table is then used to confirm significance of the predictor via t (9.143 > 1.96) and to report the unstandardized beta and p value. For multiple regression, the model’s R Square increases (0.581), and overall significance is checked with ANOVA, while individual predictor significance is checked using each predictor’s t and p values.

What is the difference between bivariate regression and multiple regression, and when should each be used?

Bivariate regression is designed for situations with exactly two variables: one dependent variable and one independent variable. Multiple regression is used when there are three or more variables, with one dependent variable and multiple independent variables. The transcript frames this as a shift from testing one predictor’s impact to testing a set of predictors’ combined and individual effects.

How does SPSS output determine whether servant leadership significantly predicts life satisfaction in the bivariate example?

First, Model Summary reports R Square = 0.276, which translates to 27.6% of the variance in life satisfaction explained by servant leadership. Next, the ANOVA table’s regression row provides the model significance; the significance value is reported as 0 (less than 0.01), indicating the regression model is statistically significant. Finally, the Coefficients table confirms the predictor’s significance using the t statistic: t = 9.143, which is greater than 1.96, supporting a significant effect.
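A consistency check worth noting, although the transcript does not spell it out: in a one-predictor regression the overall F statistic equals the square of the predictor's t statistic, so t² = 9.143² ≈ 83.6, which lines up with the F value of roughly 83.5 quoted for this example.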

Which statistics should be included when reporting bivariate regression results in a hypothesis table?

The transcript recommends copying the regression weight (the unstandardized B coefficient), the standardized beta, R Square, the F statistic, and the p value. In the example, the unstandardized B for servant leadership is 0.579, R Square is 0.276, and the p value is taken from the ANOVA significance (reported as <0.01). The F statistic is also included (shown as 83.5 in the example context).
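One possible layout for such a row, using only the values reported in the example (the standardized beta comes from the Coefficients table and is not quoted here, so it is left as a placeholder):

| Hypothesis | B | Beta | R Square | F | p |
| --- | --- | --- | --- | --- | --- |
| H1: Servant leadership → Life satisfaction | 0.579 | (from Coefficients table) | 0.276 | 83.5 | < 0.01 |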

In multiple regression, why isn’t ANOVA significance enough to claim every predictor is significant?

ANOVA significance indicates the overall regression equation is significant for the set of predictors. But each independent variable can still differ in its individual contribution. The transcript instructs checking the coefficients table for each predictor’s standardized/unstandardized coefficients and, crucially, the t value and p value for each predictor to determine which ones are individually significant.
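In the Python sketch above, the same per-predictor check might look like this (predictor names remain placeholders, and 1.96 is the two-tailed 0.05 cutoff the transcript uses):

```python
# Check each predictor individually, continuing the multiple-regression sketch.
for term in ["servant_leadership", "predictor_2", "predictor_3"]:
    t, p = multi.tvalues[term], multi.pvalues[term]
    verdict = "significant" if abs(t) > 1.96 else "not significant"
    print(term, verdict, round(t, 3), round(p, 4))
```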

How does the transcript interpret the increase in R Square when moving from bivariate to multiple regression?

When multiple predictors are added, the model’s R Square increases to 0.581. Interpreted as explained variance, this means 58.1% of the variation in life satisfaction can be accounted for by the combined set of predictors. The transcript pairs this with a higher F value to support overall model significance, then uses coefficients to identify which predictors drive that relationship.
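Although the transcript does not compute it, the increment follows directly from the two reported values: 0.581 − 0.276 = 0.305, so the three-predictor model accounts for roughly 30.5 percentage points more of the variance in life satisfaction than servant leadership alone.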

What is the repeatable process for testing multiple hypotheses (H1, H2, H3, etc.) using SPSS?

For each hypothesis, run Analyze → Regression → Linear with the dependent variable set to life satisfaction and the relevant independent variable(s) selected. Copy the hypothesis-specific regression statistics into the results table: the F value and p value from the model significance (ANOVA), plus the relevant beta coefficient(s) and p values from the coefficients table. The transcript notes that bivariate runs use one predictor (so the F corresponds to one-predictor models), while multiple regression runs use multiple predictors (so the coefficients table must be used to report each predictor’s t and p values).
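As a sketch of that repeatable routine outside SPSS (the hypothesis formulas and predictor names below are illustrative placeholders, since the transcript only names servant leadership):

```python
# Run one regression per hypothesis and collect the reporting statistics.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical data file

hypotheses = {
    "H1": "life_satisfaction ~ servant_leadership",
    "H2": "life_satisfaction ~ predictor_2",   # placeholder predictor
    "H3": "life_satisfaction ~ predictor_3",   # placeholder predictor
}

for label, formula in hypotheses.items():
    fit = smf.ols(formula, data=df).fit()
    # R Square, F, and the model p value go into the results table; per-predictor
    # B, t, and p values are available from fit.params, fit.tvalues, fit.pvalues.
    print(label, round(fit.rsquared, 3), round(fit.fvalue, 2), round(fit.f_pvalue, 4))
```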

Review Questions

  1. In the bivariate example, which SPSS table provides the explained variance (R Square), and which table provides the overall model significance (p value)?
  2. When multiple predictors are included, what specific output elements determine whether each predictor is individually significant?
  3. How would you structure a results table entry for a hypothesis using unstandardized beta, R Square, F, and p values?

Key Points

  1. Regression analysis quantifies how much variance in a dependent variable is explained by one or more independent variables.

  2. Bivariate regression tests one independent variable’s impact; multiple regression tests several independent variables together.

  3. In SPSS, Model Summary provides R and R Square (explained variance), while ANOVA provides the overall model significance via the regression row’s p value.

  4. In the bivariate example, R Square = 0.276 indicates 27.6% of life satisfaction variance is explained by servant leadership, and ANOVA significance is reported as <0.01.

  5. Predictor-level significance is checked in the Coefficients table using t values (with the transcript using 1.96 as a reference cutoff) and p values.

  6. For multiple regression, overall significance (ANOVA) does not guarantee each predictor is significant; the Coefficients table’s t and p values must be examined for each independent variable.

  7. A consistent reporting workflow is recommended: copy the unstandardized B, standardized beta, R Square, F, and p values into hypothesis-specific tables, then repeat for H2, H3, H4, and beyond.

Highlights

  • Servant leadership explains 27.6% of the variance in life satisfaction in the bivariate regression example (R Square = 0.276).
  • The overall bivariate regression model is statistically significant, with an ANOVA significance value reported as 0 (<0.01).
  • In the bivariate case, the predictor’s significance is confirmed by a large t statistic (t = 9.143, exceeding 1.96).
  • Adding multiple predictors substantially increases explained variance in the example, with R Square rising to 0.581 (58.1%).
  • Multiple regression requires checking each predictor’s row in the Coefficients table (t and p values) to determine which variables are individually significant.

Topics

  • Regression Analysis
  • SPSS Linear Regression
  • Bivariate vs Multiple Regression
  • Interpreting R Square
  • Reporting Regression Results