How to Run Ordinal Logistic Regression in SPSS
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Use ordinal logistic regression when the dependent variable is ordered (e.g., 1–5 interest), not continuous and not purely nominal.
Briefing
Ordinal logistic regression in SPSS is used when the outcome is ordered (like “low to high” or “strongly disagree to strongly agree”), letting researchers estimate how predictors shift the odds of landing in higher versus lower outcome categories. The session’s core workflow is to set up an ordinal dependent variable (“interest” on a 1–5 scale) and mix categorical predictors (assignments, co-curricular activities, gender) with a continuous predictor (age), then run SPSS’s ordinal regression with the logit link and check the proportional odds (parallel lines) assumption.
The example scenario centers on predicting university students’ interest in studies (1 = very low through 5 = very high). The dataset includes 144 respondents, with gender split into 75 male and 69 female. Co-curricular participation is treated as an ordinal predictor with categories such as “very few,” “sometimes,” and “quite often.” Assignments are coded dichotomously (0 = no assignment, 1 = yes). Age enters as a continuous covariate. In SPSS, the analysis is configured under Analyze → Regression → Ordinal, placing the categorical predictors in the “Factors” box and the continuous predictor in “Covariates.” The options are set to use the logit model, and the output requests the “Test of Parallel Lines,” which is essential for validating the ordinal logistic regression framework.
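The SPSS dialog is point-and-click, but the model it fits can be sketched in code. The following Python snippet is a stand-in for SPSS, with entirely made-up threshold values for the 1–5 interest scale; it shows how a cumulative logit (proportional odds) model turns a linear predictor and a set of ordered cut points into category probabilities:

```python
import math

def cumulative_logit_probs(x_beta, thresholds):
    """Category probabilities under a proportional odds (cumulative logit)
    model: P(Y <= j) = 1 / (1 + exp(-(theta_j - x_beta))), where the
    theta_j are ordered cut points and x_beta is the linear predictor."""
    cum = [1.0 / (1.0 + math.exp(-(t - x_beta))) for t in thresholds]
    cum.append(1.0)  # P(Y <= highest category) is always 1
    # Difference adjacent cumulative probabilities to get per-category ones.
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical cut points for a 5-category outcome (not from the video).
thetas = [-2.0, -0.5, 0.8, 2.2]
probs = cumulative_logit_probs(x_beta=0.0, thresholds=thetas)
# probs holds P(interest = 1) ... P(interest = 5); they sum to 1.
```

Raising `x_beta` (e.g., via a positive age coefficient) shifts probability mass toward the higher interest categories, which is exactly the interpretation used for the SPSS output below.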
Results are interpreted through several output blocks. First, “Case Processing Summary” reports how many cases are included and the distribution across predictor categories. Next, “Model Fitting Information” and the goodness-of-fit statistics assess whether the model improves over a null model with no predictors and whether the assumed model fits the observed data. The session emphasizes that the direction of the p-value test differs by table: for “Model Fitting Information,” a significant p-value (below .05) indicates the model improves on the intercept-only null, whereas for the goodness-of-fit (Pearson and deviance) tests, a non-significant p-value indicates the model fits the observed data adequately. A “Pseudo R Square” table (the session uses McFadden’s value) provides an approximate measure of improvement in predicting the ordered outcome.
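McFadden’s pseudo R square is computed from the log-likelihoods behind the model fitting output. A minimal Python sketch, using hypothetical log-likelihood values rather than the video’s actual output:

```python
def mcfadden_r2(ll_model, ll_null):
    """McFadden's pseudo R-square: 1 - LL(model) / LL(null).
    Both log-likelihoods are negative; the statistic approaches 0 when
    the predictors add nothing over the intercept-only null model."""
    return 1.0 - ll_model / ll_null

# Illustrative log-likelihoods (made up, not from the SPSS output).
r2 = mcfadden_r2(ll_model=-180.0, ll_null=-210.0)
```

Unlike R square in linear regression, this value is not a proportion of explained variance; it is read only as a rough index of improvement over the null model.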
The key inferential step is “Parameter Estimates,” which links each predictor to changes in the log-odds of being in higher outcome categories. Signs are interpreted in the direction of the outcome: a positive coefficient indicates increasing age (or the relevant predictor level) raises the likelihood of higher interest; a negative coefficient indicates the opposite. For instance, age shows a positive relationship with interest. Assignments show a negative coefficient for the “no assignment” category relative to the reference, implying that students who receive assignments have higher interest than those who do not. Co-curricular participation similarly uses the reference category to interpret how “low” or “sometimes” participation compares with “high,” with the direction of coefficients indicating whether interest is lower or higher.
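The sign logic can be made concrete: in a cumulative logit model, the odds of falling above a given threshold are exp(x_beta − θ_j), so a positive coefficient multiplies those odds upward and a negative one multiplies them downward. A small Python illustration with made-up coefficient and threshold values:

```python
import math

def odds_higher(x_beta, threshold):
    """Odds of being ABOVE a given cut point: exp(x_beta - theta_j)."""
    return math.exp(x_beta - threshold)

theta = 0.8  # one hypothetical cut point on the interest scale

base = odds_higher(0.0, theta)                 # reference profile
older = odds_higher(0.0 + 0.3, theta)          # positive age effect (made up)
no_assignment = odds_higher(0.0 - 0.6, theta)  # negative dummy effect (made up)
# older > base > no_assignment: the positive coefficient shifts the odds
# toward higher interest, the negative one toward lower interest.
```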
To make effects more intuitive, the session converts coefficients into odds ratios using exp(coefficient) in Excel. Odds ratios above 1 indicate higher odds of being in a higher interest category for a one-unit increase in the predictor (or for the relevant category versus the reference), while odds ratios below 1 indicate decreasing odds. Finally, the “Test of Parallel Lines” checks the proportional odds assumption: the effect of predictors should be consistent across outcome thresholds. A non-significant p-value supports using ordinal logistic regression; a significant result would require switching to multinomial logistic regression instead.
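The Excel step (=EXP(coefficient)) has a direct one-line equivalent in Python; the coefficient values below are illustrative, not the video’s estimates:

```python
import math

# Odds ratio = exp(coefficient), same as Excel's =EXP(...).
# Hypothetical coefficients for demonstration only.
coefs = {"age": 0.25, "assignments=no": -0.60}
odds_ratios = {name: math.exp(b) for name, b in coefs.items()}
# exp(0.25) is about 1.28: roughly 28% higher odds of a higher interest
# category per additional year of age.
# exp(-0.60) is about 0.55: the odds are roughly halved for the
# "no assignment" group relative to the reference category.
```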
Cornell Notes
Ordinal logistic regression is appropriate when the dependent variable is ordered, such as interest rated from 1 (very low) to 5 (very high). In SPSS, the dependent variable goes into the model as the outcome, categorical predictors (e.g., assignments, co-curricular activities, gender) are placed as Factors, and continuous predictors (e.g., age) are placed as Covariates. Interpretation relies on Parameter Estimates: coefficient signs indicate whether predictors increase or decrease the odds of being in higher interest categories relative to a reference category. McFadden’s pseudo R square and goodness-of-fit tables assess model adequacy, while the Test of Parallel Lines verifies the proportional odds assumption. If that assumption fails, multinomial logistic regression is the alternative.
When is ordinal logistic regression the right choice instead of linear regression or multinomial logistic regression?
How should predictors be entered in SPSS for an ordinal logistic regression with mixed variable types?
What do the “Test of Parallel Lines” results mean, and why do they matter?
How are coefficient signs interpreted in ordinal logistic regression?
How do odds ratios translate the model’s coefficients into something easier to interpret?
What role do goodness-of-fit measures and pseudo R square play in judging the model?
Review Questions
- In SPSS ordinal regression, which predictors should be placed in “Factors” versus “Covariates,” and why does that distinction matter?
- What does a significant Test of Parallel Lines imply about the proportional odds assumption, and what modeling approach should replace ordinal logistic regression?
- How would you interpret a negative coefficient for a categorical predictor relative to its reference category in terms of odds of higher versus lower outcome levels?
Key Points
1. Use ordinal logistic regression when the dependent variable is ordered (e.g., 1–5 interest), not continuous and not purely nominal.
2. In SPSS, place categorical predictors in “Factors” and continuous predictors in “Covariates” under Analyze → Regression → Ordinal.
3. Set the link function to logit and request the “Test of Parallel Lines” to validate the proportional odds assumption.
4. Interpret “Parameter Estimates” using coefficient signs and the reference category: positive shifts odds toward higher outcome categories; negative shifts odds toward lower categories.
5. Convert coefficients to odds ratios with exp(coefficient) to express effects as multiplicative changes in odds; odds ratios > 1 increase the odds of higher categories.
6. Evaluate model adequacy using model fitting information (improvement over the null), goodness-of-fit statistics, and McFadden’s pseudo R square as an approximate measure of predictive improvement.
7. If the Test of Parallel Lines is significant (the assumption fails), switch to multinomial logistic regression rather than forcing the ordinal model.