
How to Run Ordinal Logistic Regression in SPSS?

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use ordinal logistic regression when the dependent variable is ordered (e.g., 1–5 interest), not continuous and not purely nominal.

Briefing

Ordinal logistic regression in SPSS is used when the outcome is ordered (like “low to high” or “strongly disagree to strongly agree”), letting researchers estimate how predictors shift the odds of landing in higher versus lower outcome categories. The session’s core workflow is to set up an ordinal dependent variable (“interest” on a 1–5 scale) and mix categorical predictors (assignments, co-curricular activities, gender) with a continuous predictor (age), then run SPSS’s ordinal regression with the logit link and check the proportional odds (parallel lines) assumption.

The example scenario centers on predicting university students’ interest in studies (1 = very low through 5 = very high). The dataset includes 144 respondents, with gender split into 75 male and 69 female. Co-curricular participation is treated as an ordinal predictor with categories such as “very few,” “sometimes,” and “quite often.” Assignments are coded dichotomously (0 = no assignment, 1 = yes). Age enters as a continuous covariate. In SPSS, the analysis is configured under Analyze → Regression → Ordinal, placing the categorical predictors in the “Factors” box and the continuous predictor in “Covariates.” The options are set to use the logit model, and the output requests the “Test of Parallel Lines,” which is essential for validating the ordinal logistic regression framework.

Results are interpreted through several output blocks. First, “Case Processing Summary” reports how many cases are included and the distribution across predictor categories. Next, “Model Fitting Information” and the goodness-of-fit statistics assess whether the model improves over a null model with no predictors and whether the assumed model fits the observed data. Note that the decision rules run in opposite directions: in “Model Fitting Information,” a significant p-value (typically below .05) indicates the model improves on the intercept-only model, whereas for the goodness-of-fit statistics (Pearson and Deviance), a non-significant p-value indicates the model fits the data adequately. A “Pseudo R Square” table (using McFadden’s value) provides an approximate measure of improvement in predicting the ordered outcome.

The key inferential step is “Parameter Estimates,” which links each predictor to changes in the log-odds of being in higher outcome categories. Signs are interpreted in the direction of the outcome: a positive coefficient indicates increasing age (or the relevant predictor level) raises the likelihood of higher interest; a negative coefficient indicates the opposite. For instance, age shows a positive relationship with interest. Assignments show a negative coefficient for the “no assignment” category relative to the reference, implying that students who receive assignments have higher interest than those who do not. Co-curricular participation similarly uses the reference category to interpret how “low” or “sometimes” participation compares with “high,” with the direction of coefficients indicating whether interest is lower or higher.

To make effects more intuitive, the session converts coefficients into odds ratios using exp(coefficient) in Excel. Odds ratios above 1 indicate higher odds of being in a higher interest category for a one-unit increase in the predictor (or for the relevant category versus the reference), while odds ratios below 1 indicate decreasing odds. Finally, the “Test of Parallel Lines” checks the proportional odds assumption: the effect of predictors should be consistent across outcome thresholds. A non-significant p-value supports using ordinal logistic regression; a significant result would require switching to multinomial logistic regression instead.

Cornell Notes

Ordinal logistic regression is appropriate when the dependent variable is ordered, such as interest rated from 1 (very low) to 5 (very high). In SPSS, the dependent variable goes into the model as the outcome, categorical predictors (e.g., assignments, co-curricular activities, gender) are placed as Factors, and continuous predictors (e.g., age) are placed as Covariates. Interpretation relies on Parameter Estimates: coefficient signs indicate whether predictors increase or decrease the odds of being in higher interest categories relative to a reference category. McFadden’s pseudo R square and goodness-of-fit tables assess model adequacy, while the Test of Parallel Lines verifies the proportional odds assumption. If that assumption fails, multinomial logistic regression is the alternative.

When is ordinal logistic regression the right choice instead of linear regression or multinomial logistic regression?

It fits when the outcome is ordinal—ordered categories like “low to high” or a Likert scale from “strongly disagree” to “strongly agree.” Linear regression assumes a continuous outcome, while multinomial logistic regression treats categories as nominal (no ordering). Ordinal logistic regression keeps the order and estimates how predictors shift the odds of being in higher versus lower outcome levels.
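The “keeps the order” claim can be made concrete. Under the proportional odds model, the cumulative probability of being at or below category j is logistic(θ_j − xβ), with one shared slope β across all thresholds. A minimal Python sketch (the threshold and linear-predictor values are made up for illustration, not taken from the session’s output):

```python
import math

def logistic(z):
    """Standard logistic function 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def category_probabilities(thresholds, xb):
    """Per-category probabilities for an ordered outcome, given
    ascending cutpoints `thresholds` and linear predictor `xb`.
    P(Y <= j) = logistic(theta_j - xb); category probabilities
    are successive differences of the cumulative probabilities."""
    cum = [logistic(t - xb) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# A 5-category outcome (like interest rated 1-5) needs 4 thresholds.
probs = category_probabilities([-2.0, -0.5, 0.8, 2.2], xb=0.3)
```

Because the same β appears at every threshold, raising xb shifts probability mass toward the higher categories without reordering them, which is exactly the structure linear and multinomial models discard.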

How should predictors be entered in SPSS for an ordinal logistic regression with mixed variable types?

Categorical predictors are entered under “Factors” and continuous predictors under “Covariates.” In the example, interest (1–5) is the dependent ordinal variable. Assignments (0/1) and gender (0/1) are categorical factors, co-curricular activities is treated as an ordinal categorical factor, and age is continuous and placed in covariates.

What do the “Test of Parallel Lines” results mean, and why do they matter?

They test the proportional odds (parallel lines) assumption: the relationship between predictors and the odds of being in higher versus lower outcome categories should be consistent across all thresholds of the dependent variable. The session notes that the assumption is supported when the p-value is non-significant; if it is significant, the model’s ordering assumption breaks down and multinomial logistic regression should be used instead.
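Behind the SPSS table, the Test of Parallel Lines is a likelihood-ratio comparison between the proportional odds (null hypothesis) model and a “general” model that lets each predictor’s slope vary by threshold. A sketch of the arithmetic, with hypothetical −2 log-likelihood values standing in for real output:

```python
def parallel_lines_chi2(neg2ll_null, neg2ll_general, n_categories, n_predictors):
    """Chi-square statistic and degrees of freedom for the Test of
    Parallel Lines. The general model has (J-1)*p slope parameters
    versus p in the null model, so df = (J-2)*p."""
    chi2 = neg2ll_null - neg2ll_general
    df = (n_categories - 2) * n_predictors
    return chi2, df

# Hypothetical -2LL values for a 5-category outcome with 4 predictors.
chi2, df = parallel_lines_chi2(210.4, 198.1, n_categories=5, n_predictors=4)
```

SPSS reports the p-value for this chi-square directly; a small statistic relative to its df (non-significant p) is what supports keeping the ordinal model.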

How are coefficient signs interpreted in ordinal logistic regression?

Signs are interpreted in the direction of higher outcome categories. A positive coefficient indicates increasing odds of being in higher interest categories as the predictor increases (or as the category level changes versus the reference). A negative coefficient indicates increasing odds of being in lower categories. For categorical predictors, interpretation depends on the reference category: coefficients compare each category to that baseline.

How do odds ratios translate the model’s coefficients into something easier to interpret?

Odds ratios are computed as exp(coefficient). An odds ratio greater than 1 means higher odds of being in a higher outcome category for a one-unit increase in the predictor (or for that category relative to the reference). An odds ratio less than 1 means decreasing odds. The session also notes that no separate odds ratio is reported for the reference category: SPSS fixes its coefficient at zero, so it simply serves as the baseline for the comparisons.
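The conversion the session performs in Excel is a single call to the exponential function. A sketch in Python, using hypothetical coefficient values in place of the session’s actual Parameter Estimates:

```python
import math

# Hypothetical coefficients as they might appear in an SPSS
# "Parameter Estimates" table (illustrative values only).
coefficients = {
    "age": 0.12,             # continuous predictor, per one-unit increase
    "assignment=no": -0.85,  # category compared with the reference "yes"
}

# Odds ratio = exp(coefficient).
odds_ratios = {name: math.exp(b) for name, b in coefficients.items()}

for name, oratio in odds_ratios.items():
    direction = "higher" if oratio > 1 else "lower"
    print(f"{name}: OR = {oratio:.3f} ({direction} odds of greater interest)")
```

A positive coefficient always yields an odds ratio above 1 and a negative coefficient one below 1, so the sign interpretation from the previous section carries over directly.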

What role do goodness-of-fit measures and pseudo R square play in judging the model?

“Model fitting information” checks whether the model improves significantly over a null model with no predictors. Goodness-of-fit statistics assess how well observed data match the fitted assumed model, using p-values as decision criteria. McFadden’s pseudo R square provides an approximate measure of improvement in predicting the ordered outcome, not a direct “explained variance” like R square in linear regression.
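McFadden’s formula is 1 − LL(model)/LL(null), comparing the fitted model’s log-likelihood to the intercept-only model’s. A small sketch with hypothetical log-likelihoods (note that SPSS reports −2LL, so those values must be divided by −2 first):

```python
def mcfadden_pseudo_r2(ll_model, ll_null):
    """McFadden's pseudo R-square: 1 - LL(model) / LL(null).
    Log-likelihoods are negative; a better-fitting model has a
    log-likelihood closer to zero, so the ratio drops below 1
    and the pseudo R-square rises above 0."""
    return 1.0 - (ll_model / ll_null)

# Hypothetical values: the model improves on the null model.
r2 = mcfadden_pseudo_r2(ll_model=-180.0, ll_null=-200.0)
```

A model no better than the null gives 0, and values creep toward 1 only for near-perfect prediction, which is why it should not be read as an “explained variance” percentage.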

Review Questions

  1. In SPSS ordinal regression, which predictors should be placed in “Factors” versus “Covariates,” and why does that distinction matter?
  2. What does a significant Test of Parallel Lines imply about the proportional odds assumption, and what modeling approach should replace ordinal logistic regression?
  3. How would you interpret a negative coefficient for a categorical predictor relative to its reference category in terms of odds of higher versus lower outcome levels?

Key Points

  1. Use ordinal logistic regression when the dependent variable is ordered (e.g., 1–5 interest), not continuous and not purely nominal.

  2. In SPSS, place categorical predictors in “Factors” and continuous predictors in “Covariates” under Analyze → Regression → Ordinal.

  3. Set the link function to logit and request “Test of Parallel Lines” to validate the proportional odds assumption.

  4. Interpret “Parameter Estimates” using coefficient signs and the reference category: positive shifts odds toward higher outcome categories; negative shifts odds toward lower categories.

  5. Convert coefficients to odds ratios with exp(coefficient) to express effects as multiplicative changes in odds; odds ratios > 1 increase odds of higher categories.

  6. Evaluate model adequacy using model fitting information (improvement over null), goodness-of-fit statistics, and McFadden’s pseudo R square as an approximate predictive improvement.

  7. If the Test of Parallel Lines is significant (assumption fails), switch to multinomial logistic regression rather than forcing the ordinal model.

Highlights

Ordinal logistic regression keeps the ordering of outcomes and estimates how predictors shift the odds of being in higher versus lower categories.
SPSS’s “Test of Parallel Lines” is the gatekeeper for the proportional odds assumption; failing it means the ordinal model is not appropriate.
Odds ratios derived as exp(coefficient) turn log-odds effects into interpretable multiplicative changes in the odds of higher interest.
Interpretation of categorical predictors depends on the reference category, so the sign alone must be read as a comparison to that baseline.

Topics

  • Ordinal Logistic Regression
  • SPSS Setup
  • Proportional Odds Assumption
  • Odds Ratios
  • Parameter Estimates