# Regression Analysis Using SPSS: How to Run, Interpret, and Report Regression Results
Based on Research With Fawad's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Regression analysis quantifies how much variance in a dependent variable is explained by one or more independent variables.
## Briefing
Regression analysis is used to measure how strongly one dependent variable relates to one or more independent variables—and to quantify how much variance in the dependent variable can be explained. The transcript distinguishes two common setups: bivariate regression, which involves two variables (one dependent, one independent), and multiple regression, which includes three or more variables (one dependent plus multiple independents). In both cases, the practical goal is the same: test whether a predictor has a statistically significant impact and report the results in a clear, standard format.
The example centers on life satisfaction as the dependent variable and servant leadership as the independent variable. The workflow in SPSS starts by navigating to Analyze → Regression → Linear, then selecting life satisfaction as the dependent variable and servant leadership as the independent variable. After running the model, the output is interpreted through several key tables. The Model Summary section provides R, R Square, and Adjusted R Square. In the bivariate example, R Square is reported as 0.276, meaning 27.6% of the variation in life satisfaction is accounted for by servant leadership. Whether that explained variance is meaningful is tested using the ANOVA table: the regression row’s significance value is effectively 0 (reported as less than 0.01), indicating the overall regression model is statistically significant.
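The SPSS steps above can be mirrored outside SPSS to see where R Square comes from. The sketch below fits a bivariate ordinary least squares model on synthetic data (the variable names stand in for the transcript's measures; the numbers are illustrative, not the tutorial's data) and recovers the quantity the Model Summary labels "R Square".

```python
import numpy as np

# Illustrative sketch (not SPSS): fit a bivariate OLS regression and
# recover R Square. Data are synthetic; "servant_leadership" and
# "life_satisfaction" are stand-ins for the transcript's variables.
rng = np.random.default_rng(0)
n = 200
servant_leadership = rng.normal(3.5, 0.8, n)
life_satisfaction = 1.0 + 0.6 * servant_leadership + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), servant_leadership])  # intercept + predictor
coefs, _, _, _ = np.linalg.lstsq(X, life_satisfaction, rcond=None)
fitted = X @ coefs
ss_res = np.sum((life_satisfaction - fitted) ** 2)
ss_tot = np.sum((life_satisfaction - life_satisfaction.mean()) ** 2)
r_square = 1 - ss_res / ss_tot  # same quantity SPSS labels "R Square"
print(f"R Square = {r_square:.3f}")
```

The ratio of explained to total sum of squares is exactly what SPSS reports: an R Square of 0.276 means 27.6% of the outcome's variance is accounted for by the predictor.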
To interpret the direction and strength of the relationship, the Coefficients table is used. With only one independent variable, the standardized beta equals the correlation between the predictor and the outcome, and the t statistic is compared against the conventional cutoff of 1.96 (two-tailed test at the 0.05 level). The transcript notes a t value of 9.143, which exceeds 1.96, supporting the conclusion that servant leadership has a significant positive effect on life satisfaction. For reporting, the transcript recommends copying key statistics into a results table: the regression weight (unstandardized B coefficient), the standardized beta, R Square, the F statistic, and the p value.
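The predictor-level test described above can also be sketched directly: the t statistic in the Coefficients table is the coefficient divided by its standard error, and on a reasonably large sample |t| > 1.96 corresponds to p < .05 two-tailed. Again the data and names are synthetic illustrations.

```python
import numpy as np

# Sketch of the predictor-level test: t = coefficient / standard error,
# compared against the 1.96 cutoff. Synthetic data; names illustrative.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(3.5, 0.8, n)
y = 1.0 + 0.6 * x + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), x])
coefs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coefs
mse = resid @ resid / (n - 2)          # residual variance, df = n - k - 1
cov = mse * np.linalg.inv(X.T @ X)     # covariance matrix of the coefficients
se_slope = np.sqrt(cov[1, 1])
t_slope = coefs[1] / se_slope          # the "t" in the SPSS Coefficients table
print(f"t = {t_slope:.3f}, significant: {abs(t_slope) > 1.96}")
```

This is the same comparison the transcript makes when it checks 9.143 against 1.96.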
The process then expands to multiple regression by adding additional independent variables (three predictors in the example). Running Analyze → Regression → Linear again with all predictors increases the model’s explanatory power: the F value rises and R Square increases to 0.581, implying 58.1% of life satisfaction variance is explained by the set of predictors. The transcript emphasizes a distinction between overall model significance and individual predictor significance. The ANOVA significance indicates the model as a whole is significant, but determining which predictors matter requires examining each predictor’s coefficients—especially the t values and p values from the coefficients table. Finally, the same reporting logic applies: copy the relevant coefficients and model statistics into the hypothesis-specific results format (H1, H2, H3, and so on), using the appropriate F and p values for each regression run. The end result is a repeatable SPSS routine for running, interpreting, and writing up regression findings.
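The multiple-regression step can be sketched the same way: add predictors, recompute R Square, and form the ANOVA-style F statistic for the whole model. The three predictors and their weights below are hypothetical stand-ins, not the transcript's data.

```python
import numpy as np

# Sketch of the multiple-regression step: three predictors, overall
# R Square, and the whole-model F statistic from the ANOVA table.
# Synthetic data; predictor names and effect sizes are illustrative.
rng = np.random.default_rng(2)
n, k = 200, 3
predictors = rng.normal(size=(n, k))   # e.g. three leadership measures
y = 0.5 + predictors @ np.array([0.5, 0.4, 0.3]) + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), predictors])
coefs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coefs
ss_res = resid @ resid
ss_tot = np.sum((y - y.mean()) ** 2)
r_square = 1 - ss_res / ss_tot
f_stat = (r_square / k) / ((1 - r_square) / (n - k - 1))  # ANOVA table F
print(f"R Square = {r_square:.3f}, F = {f_stat:.2f}")
```

Note that a significant F only says the model as a whole explains variance; as the transcript stresses, each predictor's own t and p values must still be checked before claiming that predictor matters.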
## Cornell Notes
The transcript lays out a practical SPSS workflow for regression analysis, starting with bivariate regression (one independent variable) and extending to multiple regression (several independent variables). In the example, life satisfaction is the dependent variable and servant leadership is the predictor. For bivariate regression, R Square is 0.276, meaning 27.6% of variance in life satisfaction is explained, and ANOVA significance is reported as 0 (<0.01), indicating a statistically significant model. The Coefficients table is then used to confirm significance of the predictor via t (9.143 > 1.96) and to report the unstandardized beta and p value. For multiple regression, the model’s R Square increases (0.581), and overall significance is checked with ANOVA, while individual predictor significance is checked using each predictor’s t and p values.
- What is the difference between bivariate regression and multiple regression, and when should each be used?
- How does SPSS output determine whether servant leadership significantly predicts life satisfaction in the bivariate example?
- Which statistics should be included when reporting bivariate regression results in a hypothesis table?
- In multiple regression, why isn’t ANOVA significance enough to claim every predictor is significant?
- How does the transcript interpret the increase in R Square when moving from bivariate to multiple regression?
- What is the repeatable process for testing multiple hypotheses (H1, H2, H3, etc.) using SPSS?
## Review Questions
- In the bivariate example, which SPSS table provides the explained variance (R Square), and which table provides the overall model significance (p value)?
- When multiple predictors are included, what specific output elements determine whether each predictor is individually significant?
- How would you structure a results table entry for a hypothesis using unstandardized beta, R Square, F, and p values?
## Key Points
1. Regression analysis quantifies how much variance in a dependent variable is explained by one or more independent variables.
2. Bivariate regression tests one independent variable’s impact; multiple regression tests several independent variables together.
3. In SPSS, Model Summary provides R and R Square (explained variance), while ANOVA provides the overall model significance via the regression row’s p value.
4. In the bivariate example, R Square = 0.276 indicates 27.6% of life satisfaction variance is explained by servant leadership, and ANOVA significance is reported as <0.01.
5. Predictor-level significance is checked in the Coefficients table using t values (with the transcript using 1.96 as a reference cutoff) and p values.
6. For multiple regression, overall significance (ANOVA) does not guarantee each predictor is significant; the Coefficients table’s t and p values must be examined for each independent variable.
7. A consistent reporting workflow is recommended: copy the unstandardized beta (B), the standardized beta, R Square, F, and p values into hypothesis-specific tables, then repeat for H2, H3, H4, and beyond.