
Regression Analysis Using SPSS

Research and Analysis · 5 min read

Based on Research and Analysis's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Regression analysis models how predictor variables relate to a dependent (outcome) variable using a line equation y = a + bX.

Briefing

Regression analysis is a statistical method for estimating how one or more predictor variables are associated with—and can be used to explain changes in—a dependent (outcome) variable. In its simplest form, the relationship is written as y = a + bX, where y is the dependent variable, X is the independent variable, a is the y-intercept, and b is the slope. The slope matters because it quantifies the direction and size of the relationship: a positive slope means increases in the predictor correspond to increases in the outcome, while a negative slope means increases in the predictor correspond to decreases in the outcome.
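The line equation above can be fitted with ordinary least squares. As a minimal sketch (the data and function name below are illustrative, not from the video), the slope is the covariance of X and y divided by the variance of X, and the intercept follows from the fact that the fitted line passes through the means:

```python
def fit_line(xs, ys):
    """Return (a, b) for the least-squares line y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope b = covariance(x, y) / variance(x)
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x  # the fitted line passes through (mean_x, mean_y)
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
a, b = fit_line(xs, ys)
print(a, b)  # b is positive here: y rises as x rises
```

A positive b here matches the interpretation in the text: increases in the predictor correspond to increases in the outcome.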

A key distinction underpins how results should be interpreted: correlation measures the strength of association between two variables, while regression focuses on how one variable affects another. Correlation is symmetric—correlating X with Y gives the same result as correlating Y with X—whereas regression is directional. Running regression with X as the predictor and Y as the outcome can produce different results than swapping the roles, because regression models a specific dependent variable. Graphically, correlation is summarized by a single number (the correlation coefficient), while linear regression is represented by a fitted line.

Several statistics guide interpretation. The coefficient of determination, R², indicates how much of the variation in the dependent variable is explained by the predictor(s). Another central output is the regression coefficient (beta), which indicates how much the dependent variable changes for a one-unit change in the predictor, holding the model's other conditions fixed. Beta can be positive or negative, and statistical significance indicates whether the observed relationship likely reflects a real effect rather than random variation. For example, a significant beta of 0.90 implies that each one-unit increase in the predictor is associated with a 0.90-unit increase in the outcome; a significant beta of -0.90 implies a 0.90-unit decrease.
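For a single predictor these quantities are easy to compute directly. In this sketch (with made-up data), the Pearson correlation r is computed by hand; with one predictor, R² is simply r squared, and the standardized beta that SPSS reports equals r itself:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.1, 5.9, 8.2, 9.8]
r = pearson_r(xs, ys)
print(r ** 2)  # R²: share of the variation in y explained by x
# With exactly one predictor, the standardized beta equals r.
```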

The practical walkthrough shows how to run these analyses in SPSS. The workflow starts under Analyze, then Regression, then Linear. For simple (single-predictor) regression, the dependent variable is placed in the Dependent box and the predictor in the Independent(s) box. Model fit options such as R² change can be selected, but the default setup is often sufficient for hypothesis testing. The output reports R² and the beta coefficients, including the standardized beta and significance. In the example, transfer of training is the dependent variable and training retention is the independent variable. The model fit shows an R² of 0.223, and the unstandardized beta is 0.504 with a standardized beta of 0.473, marked significant. This is interpreted as: each one-unit increase in training retention corresponds to a 0.504-unit increase in transfer of training, or equivalently, each one-standard-deviation increase in retention corresponds to a 0.473-standard-deviation increase in transfer (the standardized coefficient).

For multiple regression, the same SPSS path is used, but all independent variables are entered together. The output then provides a single R² for the overall model and separate beta and significance values for each predictor. In the example with three independent variables, the overall R² is 0.298. Only training retention shows a significant relationship with transfer of training, while the other predictors have insignificant coefficients—meaning their effects are not supported by the data under the model’s assumptions.
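Outside SPSS, the same multiple-regression coefficients can be recovered from the normal equations (X'X)b = X'y. This sketch uses two illustrative predictors and a small Gauss-Jordan solver; the data are constructed so the true coefficients are known, purely for demonstration:

```python
def solve(A, v):
    """Solve A x = v by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def multiple_regression(rows, ys):
    """Return [intercept, b1, b2, ...] via the normal equations."""
    X = [[1.0] + list(row) for row in rows]  # prepend an intercept column
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * ys[i] for i in range(n)) for a in range(k)]
    return solve(XtX, Xty)

# Data built so y = 1 + 2*x1 - 1*x2 exactly; the fit recovers these values.
rows = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
ys = [1 + 2 * x1 - x2 for x1, x2 in rows]
print(multiple_regression(rows, ys))  # ≈ [1.0, 2.0, -1.0]
```

As in the SPSS output, each predictor gets its own coefficient, while model fit (R²) is a single number for the whole model.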

Cornell Notes

Regression analysis models how predictors relate to a dependent (outcome) variable using a line equation like y = a + bX. Correlation measures association strength, but regression is directional and focuses on how changes in predictors correspond to changes in the outcome. R² (coefficient of determination) indicates how much variation in the outcome the model explains, while the beta coefficient shows the direction and size of the predictor’s effect (positive or negative) and significance indicates whether the effect is likely real. In SPSS, the analysis is run via Analyze → Regression → Linear, placing the outcome in Dependent and predictors in Independent(s). The example reports R² values (0.223 for simple regression; 0.298 for multiple regression) and identifies which predictors are statistically significant.

How is regression different from correlation, and why does variable order matter?

Correlation quantifies the degree of association between two variables and is symmetric: correlating X with Y matches correlating Y with X. Regression is directional because it models a specific dependent variable as a function of predictors. Swapping roles (predicting X from Y versus predicting Y from X) changes the regression setup and can yield different results. Graphically, correlation is summarized by a coefficient, while linear regression is represented by a line.
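The symmetry claim can be verified numerically. In this small demo (with made-up data), the correlation of x with y is exactly the correlation of y with x, while the slope of the y-on-x regression differs from the slope of the x-on-y regression:

```python
import math

def corr(xs, ys):
    """Pearson correlation; symmetric in its two arguments."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

def slope(xs, ys):
    """Least-squares slope when ys is regressed on xs; NOT symmetric."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)

xs = [1, 2, 3, 4, 5]
ys = [1.5, 3.9, 4.1, 6.8, 7.2]
print(corr(xs, ys) == corr(ys, xs))   # True: correlation is symmetric
print(slope(xs, ys), slope(ys, xs))   # two different slopes
```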

What do R² and beta tell you in regression output?

R² (coefficient of determination) shows how much of the variation in the dependent variable is explained by the predictor(s). Beta is the regression coefficient (slope) that indicates how much the dependent variable changes for a one-unit increase in a predictor. Beta can be positive (predictor increases correspond to outcome increases) or negative (predictor increases correspond to outcome decreases). Statistical significance determines whether the relationship is supported beyond random chance.
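Significance for a slope is usually assessed with a t test. SPSS reports the exact p-value; the sketch below (illustrative data, simple regression only) computes just the t statistic, which would then be compared against the critical value of a t distribution with n - 2 degrees of freedom (roughly 3.18 at the 5% level with only 3 degrees of freedom here, closer to 2 for larger samples):

```python
import math

def slope_t_stat(xs, ys):
    """t statistic for the slope in simple linear regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    se_b = math.sqrt(sse / (n - 2)) / math.sqrt(sxx)  # standard error of b
    return b / se_b

t = slope_t_stat([1, 2, 3, 4, 5], [2.1, 4.0, 6.2, 7.9, 10.1])
print(t)  # far above the critical value, so the slope would be significant
```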

In the example, what does a significant standardized beta mean?

The example’s simple regression uses transfer of training as the dependent variable and training retention as the predictor. The standardized beta is 0.473 and is significant, which is interpreted as: for each one-standard-deviation increase in the predictor, transfer of training increases by about 0.473 standard deviations, with significance indicating the association is statistically reliable.

How do you run simple linear regression in SPSS?

Use Analyze → Regression → Linear. Place the outcome variable in the Dependent box and the predictor in the Independent(s) box. Optionally select model fit statistics like R² change under Statistics. Then run the analysis (OK) and read the output for R², beta coefficients, standardized beta, and significance.

How does multiple regression change the SPSS setup and interpretation?

Multiple regression still uses Analyze → Regression → Linear, but all independent variables are entered into the Independent(s) box at once. The output provides an overall R² for the combined model and separate beta and significance values for each predictor. In the example, only training retention is significant, while the other two predictors have insignificant relationships with transfer of training.

Review Questions

  1. When would swapping the roles of X and Y change the regression results, and why doesn’t it change correlation results?
  2. How do you interpret a positive beta versus a negative beta when the coefficient is statistically significant?
  3. What does R² measure, and how would you use it to compare a simple regression model to a multiple regression model?

Key Points

  1. Regression analysis models how predictor variables relate to a dependent (outcome) variable using a line equation y = a + bX.
  2. Correlation measures association strength, while regression models directional relationships and focuses on how changes in predictors correspond to changes in the outcome.
  3. R² (coefficient of determination) quantifies how much variation in the dependent variable the model explains.
  4. The beta coefficient indicates the direction and magnitude of a predictor’s effect; significance determines whether that effect is statistically supported.
  5. In SPSS, simple linear regression is run via Analyze → Regression → Linear, with the outcome in Dependent and predictors in Independent(s).
  6. In multiple regression, all predictors are entered together, and significance is checked for each predictor individually to identify which variables matter.
  7. Overall model fit (R²) can increase when adding predictors, but individual predictors may still be insignificant.

Highlights

Regression is directional: predicting Y from X can differ from predicting X from Y, unlike correlation.
R² tells how much of the outcome’s variation is explained, while beta tells how strongly each predictor shifts the outcome.
SPSS regression output includes both unstandardized beta and standardized beta, plus significance for hypothesis testing.
In the example, transfer of training is explained by training retention with R² = 0.223 in simple regression.
In multiple regression, the overall R² rises to 0.298, but only training retention remains significant among the predictors.

Topics

  • Regression Analysis
  • SPSS Linear Regression
  • R² Interpretation
  • Beta Coefficients
  • Simple vs Multiple Regression