
Statistics for Research - L18 - #Correlation Analysis using #SPSS

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Correlation coefficient r measures the direction and strength of a linear relationship between two quantitative variables, with sign indicating direction (positive vs negative).

Briefing

Correlation analysis turns a scatter plot’s “looks related” impression into a numeric measure of how strongly two quantitative variables move together—and whether that relationship is statistically reliable. The key statistic is the correlation coefficient, r (often written as R in output). It captures both direction and strength of a linear relationship: a positive value means higher scores on one variable tend to align with higher scores on the other, while a negative value means increases in one tend to accompany decreases in the other. Because r is designed for linear patterns, it should be interpreted alongside a scatter plot to confirm that the relationship is actually linear.
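The arithmetic behind r can be sketched in a few lines of Python. This is illustrative only (SPSS computes it internally), and the sample data are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations (all about the means)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Direction: positive r when the variables rise together,
# negative r when one falls as the other rises.
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))   # 1.0 (perfect positive)
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))   # -1.0 (perfect negative)
```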

Interpreting r comes with commonly used—but not universal—guidelines. Values near 0 indicate very weak or weak linear association, while values closer to 1 (or -1) indicate stronger linear association. The session lists thresholds such as: |r| ≤ 0.1 as very weak, 0.1 to 0.3 as weak, 0.3 to 0.5 as moderate, 0.5 to 0.7 as strong, and above 0.7 as very strong. It also flags a practical warning: correlations above about 0.85 can signal multicollinearity, meaning two constructs may be measuring nearly the same underlying concept.
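Those cut-offs translate into a small helper. This sketches the session's guideline table, not a universal standard:

```python
def strength_label(r):
    """Map |r| to the session's guideline categories."""
    a = abs(r)
    if a <= 0.1:
        return "very weak"
    if a <= 0.3:
        return "weak"
    if a <= 0.5:
        return "moderate"
    if a <= 0.7:
        return "strong"
    return "very strong"

def flag_multicollinearity(r, threshold=0.85):
    """Warn when two measures may capture nearly the same construct."""
    return abs(r) > threshold

print(strength_label(0.622))         # strong
print(flag_multicollinearity(0.9))   # True
```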

Beyond strength, significance matters. In SPSS correlation output, the p-value is used to judge whether the observed association is likely to reflect a real relationship in the population rather than sampling noise. The workflow follows a typical social-science rule: if p < 0.05, the relationship is treated as statistically significant (rejecting the null hypothesis in favor of an alternate hypothesis). The session emphasizes that significance and strength are different questions—one tells whether the relationship is detectable, the other tells how strong it is.
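SPSS reports the p-value directly; under the hood, significance for Pearson's r comes from a t statistic with n − 2 degrees of freedom. A sketch (the sample size of 30 is hypothetical, since the session does not report N):

```python
import math

def t_statistic(r, n):
    # t = r * sqrt((n - 2) / (1 - r^2)), compared against a
    # t distribution with n - 2 degrees of freedom
    return r * math.sqrt((n - 2) / (1 - r * r))

# With the example's r = 0.622 and a hypothetical n = 30, t is well
# beyond the two-tailed 5% critical value (about 2.05 at df = 28),
# consistent with the reported p < 0.01.
print(round(t_statistic(0.622, 30), 2))
```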

An example is run in SPSS using two variables: “Vision” and “Organizational Performance.” After creating composite variables by averaging multiple items, the correlation output shows a positive correlation of about r = 0.622. The p-value is reported as less than 0.01, supporting statistical significance. The magnitude falls in the “strong” range based on the provided guidelines, and the scatter plot aligns with this interpretation: points cluster upward, indicating that as Vision increases, Organizational Performance tends to increase as well.
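The composite-score step (averaging multiple items into one variable) can be sketched like this; the item scores below are made up for illustration, while the real values come from the session's SPSS file:

```python
def composite(item_rows):
    """Average each respondent's item scores into one composite score."""
    return [sum(items) / len(items) for items in item_rows]

# Hypothetical 5-point Likert items for three respondents
vision_items = [[4, 5, 4], [3, 3, 4], [5, 5, 5]]
vision = composite(vision_items)
print([round(v, 2) for v in vision])  # [4.33, 3.33, 5.0]
```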

The session also highlights several properties of correlation coefficients. r does not change when the measurement units of variables change, because it standardizes the relationship. It also measures only linear association and ignores non-linear patterns, so checking the scatter plot and adding a fitted line (e.g., “fit line total”) helps verify linearity. When reporting results, the session recommends including the correlation coefficient, sample size (N), and p-value, and stating the direction (positive or negative) and strength category.
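Unit invariance is easy to verify: rescaling a variable (say, kilograms to pounds) leaves r unchanged. A sketch with made-up data:

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
y_kg = [2.0, 4.1, 5.9, 8.2, 9.9]
y_lb = [v * 2.20462 for v in y_kg]   # change of measurement units only

r_kg = pearson_r(x, y_kg)
r_lb = pearson_r(x, y_lb)
print(abs(r_kg - r_lb) < 1e-9)       # True: r ignores measurement units
```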

For more than two variables, the approach shifts to a correlation matrix. In SPSS, adding a third variable produces a table of pairwise correlations. The session advises using one-tailed tests when the direction of the relationship is justified by prior literature (positive or negative), and two-tailed tests when direction is uncertain. For write-ups, the matrix should be formatted for clarity (e.g., removing redundant cells and nonessential entries), and only statistically significant or substantively important correlations should be described—while noting when no other significant relationships appear.
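A correlation matrix is just the table of all pairwise r values. A minimal pure-Python sketch with three hypothetical variables (SPSS's Bivariate Correlations dialog produces the same structure, plus p-values and N per cell):

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_matrix(data):
    """data: dict mapping variable name -> list of scores."""
    names = list(data)
    return {a: {b: round(pearson_r(data[a], data[b]), 3) for b in names}
            for a in names}

data = {
    "vision":      [3.2, 4.0, 3.5, 4.8, 2.9],
    "performance": [3.0, 4.2, 3.4, 4.9, 3.1],
    "innovation":  [2.8, 3.9, 3.1, 4.5, 3.0],
}
m = correlation_matrix(data)
print(m["vision"]["vision"])   # 1.0 (each variable with itself)
```

The matrix is symmetric, which is why write-ups drop the redundant upper (or lower) triangle.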

Cornell Notes

Correlation coefficient r quantifies the direction and strength of a linear relationship between two quantitative variables. Positive r indicates that as one variable increases, the other tends to increase; negative r indicates the opposite. Common interpretation cutoffs classify r as very weak/weak/moderate/strong/very strong, but the session stresses these are guidelines and that context matters. Statistical significance is assessed using the p-value (typically p < 0.05, with the example using p < 0.01). In SPSS, scatter plots and fitted lines help confirm linearity, and correlation matrices extend the method to multiple variables, with one-tailed or two-tailed testing chosen based on prior expectations.

How does the sign of the correlation coefficient (r) change what the relationship means?

The sign indicates direction. A negative r means an inverse relationship: increases in one variable tend to coincide with decreases in the other. A positive r means a direct relationship: increases in one variable tend to coincide with increases in the other. In the example, the correlation between Vision and Organizational Performance is positive (no negative sign), matching an upward trend in the scatter plot.

Why isn’t the correlation coefficient alone enough to confirm the relationship is linear?

Correlation measures linear association only, and it can’t by itself confirm that the pattern is truly linear. The session recommends checking the scatter plot and adding a fitted line (e.g., “fit line total”). When points cluster closely around that line, the relationship is more plausibly linear; if they curve or scatter widely around the line, the correlation may not reflect a linear pattern.
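The fit line SPSS overlays is an ordinary least-squares line; how tightly the points hug it is the visual linearity check. Its closed-form slope and intercept can be sketched as follows (data hypothetical):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lie on y = 2x + 1
print(slope, intercept)  # 2.0 1.0
```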

What role does the p-value play in correlation results, and how is it interpreted?

The p-value addresses significance: whether the observed correlation is likely to reflect a real relationship rather than random sampling variation. The session uses a common social-science rule: p < 0.05 indicates statistical significance (reject the null hypothesis). In the example, p is less than 0.01, so the relationship is treated as significant.

What does it mean when r is very high (e.g., above 0.85)?

Very high correlations can indicate multicollinearity, where two constructs are measuring nearly the same underlying concept. The session warns that correlations above about 0.85 may create issues because the variables are not meaningfully distinct for modeling or interpretation.

How should correlation results be reported in a two-variable case versus a multi-variable case?

For two variables, report the correlation coefficient r, the sample size N, and the p-value, along with direction (positive/negative) and strength category (e.g., strong). For more than two variables, use a correlation matrix and describe only the pairwise correlations that are significant or substantively important, noting when no other significant correlations are found.
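The two-variable write-up can be templated; degrees of freedom for Pearson's r are N − 2. The APA-style formatting and the N of 30 here are assumptions for illustration; the session only specifies which numbers to include:

```python
def report_correlation(r, n, p, label):
    """Format r, N, and p into a one-line write-up with direction and strength."""
    direction = "positive" if r > 0 else "negative"
    if p < 0.01:
        p_text = "p < .01"
    elif p < 0.05:
        p_text = "p < .05"
    else:
        p_text = f"p = {p:.3f}"
    return (f"r({n - 2}) = {r:.2f}, {p_text}; "
            f"a {label} {direction} correlation (N = {n})")

print(report_correlation(0.622, 30, 0.0003, "strong"))
# r(28) = 0.62, p < .01; a strong positive correlation (N = 30)
```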

When should one-tailed versus two-tailed testing be used in SPSS correlation matrices?

Use one-tailed testing when prior literature justifies a specific direction (expecting positive or negative relationships). Use two-tailed testing when the direction is uncertain and either positive or negative association could be plausible.
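When the observed direction matches the hypothesized one, the one-tailed p is simply half the two-tailed p. A sketch of that conversion (the function name and signature are illustrative, not an SPSS feature):

```python
def one_tailed_p(two_tailed_p, r, expected_positive=True):
    """Convert a two-tailed p to one-tailed under a directional hypothesis."""
    if (r > 0) == expected_positive:
        return two_tailed_p / 2       # observed direction matches the hypothesis
    return 1 - two_tailed_p / 2       # observed direction contradicts it

print(one_tailed_p(0.04, 0.5))   # 0.02 -- significant one-tailed at .05 as well
```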

Review Questions

  1. What information does r provide that a scatter plot alone cannot, and what information does it miss that the scatter plot can reveal?
  2. Using the provided guidelines, how would you classify a correlation of r = -0.45, and what would that imply about direction and strength?
  3. In SPSS correlation output, what three values should be included when writing up a correlation between two variables, and why?

Key Points

  1. Correlation coefficient r measures the direction and strength of a linear relationship between two quantitative variables, with sign indicating direction (positive vs negative).
  2. Common strength guidelines classify |r| near 0 as very weak/weak, 0.3–0.5 as moderate, 0.5–0.7 as strong, and above 0.7 as very strong, but context can change interpretation.
  3. Statistical significance is assessed with the p-value; p < 0.05 is treated as significant, and the example uses p < 0.01.
  4. Correlation is unit-invariant and captures only linear association, so scatter plots (with fitted lines) are needed to verify linearity.
  5. Correlations above about 0.85 may indicate multicollinearity, suggesting two constructs overlap heavily.
  6. For two variables, report r, N, and p-value; for multiple variables, use a correlation matrix and describe only significant or important pairwise relationships.
  7. Choose one-tailed tests when the direction is justified by prior research; use two-tailed tests when direction is uncertain.

Highlights

A positive r means increases in one variable tend to align with increases in the other; a negative r signals the opposite pattern.
The example correlation between Vision and Organizational Performance is about r = 0.622 with p < 0.01, supporting a strong, statistically significant positive linear relationship.
Correlation coefficients don’t confirm linearity—scatter plots with fitted lines are needed to validate the linear assumption.
Very high correlations (above ~0.85) can flag multicollinearity, where constructs may be essentially measuring the same thing.
In multi-variable settings, correlation matrices require careful reporting: focus on significant or substantively meaningful pairs, not every cell.
