
EXPERIMENTAL DESIGNS: TRUE AND QUASI DESIGNS

5 min read

Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Experimental designs are the primary quantitative method for making defensible cause-and-effect claims when X is manipulated and extraneous variables are controlled.

Briefing

Experimental designs are the research tool for making credible cause-and-effect claims—specifically, establishing that an independent variable (X) causes an outcome (Y) only after researchers control for other factors that could distort the relationship. In everyday speech people use “causes” loosely, but in research the word “cause” demands evidence strong enough to rule out confounding influences. That’s why experimental designs are treated as the most direct quantitative approach for causality, even though they can feel intimidating to students.

The lecture frames causality as a product of manipulation: researchers deliberately change X (the independent variable) and observe whether Y (the dependent variable) changes. The key is not just giving a treatment, but isolating its effect by holding other variables constant—meaning extraneous variables must be identified and controlled so they cannot threaten the inference that X leads to Y. A central warning is that correlations can be misleading: two variables may move together statistically without any causal link. The lecture illustrates this with the familiar claim that smoking causes lung cancer: non-smokers also develop lung cancer due to other, unseen factors, so association alone cannot settle the causal question. Experimental logic is designed to prevent that kind of spurious relationship from masquerading as causality.

To make those causal claims, experimental studies rely on several operational requirements. First, participants must be organized into two sufficiently large, homogeneous groups with similar characteristics: a control group that receives no treatment (or a placebo-like alternative) and an experimental group that receives the intervention. Second, researchers must define inclusion and exclusion criteria clearly so the groups are comparable. Third, the treatment effect is assessed by comparing pre-test and post-test outcomes across the two groups; the experimental group’s change is interpreted relative to whatever change, if any, the control group shows.
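The pre-test/post-test comparison can be sketched in a few lines. This is a minimal illustration of the logic described above, not the lecture's own example; all scores are invented numbers.

```python
# Hypothetical pre-test and post-test scores for the two groups.
pre_experimental  = [52, 48, 50, 55, 49]   # before the intervention
post_experimental = [63, 60, 61, 68, 59]   # after receiving treatment X
pre_control       = [51, 50, 47, 54, 48]   # before (no treatment)
post_control      = [53, 51, 49, 55, 50]   # after (no treatment)

def mean(xs):
    return sum(xs) / len(xs)

# Change within each group, then the difference between those changes:
# the inferred treatment effect is the experimental group's gain beyond
# whatever change the control group shows.
gain_experimental = mean(post_experimental) - mean(pre_experimental)
gain_control = mean(post_control) - mean(pre_control)
treatment_effect = gain_experimental - gain_control
print(round(treatment_effect, 2))
```

Comparing gains (rather than post-test scores alone) is what lets the control group absorb changes that would have happened without the intervention.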

The biggest dividing line between “true” and “quasi” experimental designs is randomization. True experimental designs use random selection and random assignment so participants have an equal chance of being placed in either group, which helps ensure equivalence and reduces bias. Quasi-experimental designs lack full randomization because researchers often work with already formed groups—such as schools, classes, or communities—where it’s impractical to rearrange participants. In those settings, researchers still measure change using pre-tests and post-tests, but the design depends on matching and careful control rather than random assignment.

The lecture also introduces standard experimental notation (O1, O2, O3, O4 and X), where odd positions represent pre-tests and even positions represent post-tests, and X denotes the intervention. It distinguishes three broad categories: pre-experimental (typically one group with pre-test and post-test and no control group), quasi-experimental (two groups without random assignment), and true experimental (two groups with random assignment). Across all of them, the goal remains the same: strengthen internal and external validity by controlling threats so researchers can defend the claim that X causes Y. The session closes by previewing upcoming discussion of threats to internal and external validity and how to mitigate them.

Cornell Notes

Experimental designs are presented as the main quantitative approach for establishing causality—X causing Y—because they rely on deliberate manipulation of the independent variable and systematic control of extraneous factors. Credible causal claims require isolating the treatment’s effect by comparing an experimental group (receiving X) against a control group (not receiving X or receiving a placebo-like alternative), using pre-tests and post-tests. The lecture emphasizes that correlation is not causation and warns about spurious relationships driven by unseen variables. The key distinction between true and quasi-experimental designs is randomization: true experiments use random selection and random assignment, while quasi-experiments use already formed groups where random assignment is not feasible. Pre-experimental designs are also described as one-group pre-test/post-test setups without a control group.

Why does the lecture treat “causality” as a high bar in research rather than a casual claim?

Causality requires more than showing that two variables are statistically related. The lecture stresses that “X causes Y” is only defensible when researchers remove confounding influences—extraneous variables that could also explain changes in Y. Without controlling those factors, the relationship may be spurious: variables can move together without a causal mechanism, as illustrated by the idea that smoking and lung cancer are related statistically even though non-smokers can still develop lung cancer due to other causes.

What makes an experimental design different from other quantitative designs in practice?

Experimental designs aim to discover causal relationships by manipulating X (the independent variable) and observing its effect on Y (the dependent variable). The lecture links causality to manipulation plus control: researchers isolate the treatment’s effect by holding other variables constant as much as possible, which requires identifying and controlling extraneous variables that could confound the X–Y relationship.

How do control and experimental groups work, and why are pre-tests and post-tests central?

Participants are divided into two sufficiently large, homogeneous groups: a control group that does not receive the treatment and an experimental group that receives the intervention. The lecture uses a pre-test/post-test logic: measure outcomes before the intervention (pre-test), apply the treatment only to the experimental group, then measure again after the intervention (post-test). The treatment effect is inferred by comparing changes in the experimental group to the control group’s outcomes.

What does randomization accomplish, and how does it separate true from quasi-experimental designs?

Random selection and random assignment help ensure the groups are as equivalent as possible, reducing bias from pre-existing differences. True experimental designs use both random selection and random assignment, while quasi-experimental designs lack random assignment because researchers often deal with already formed groups (e.g., schools or classes) that cannot be rearranged.
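Random assignment itself is mechanically simple, which is worth seeing: shuffle the participant pool so each person has an equal chance of landing in either group. The participant labels below are hypothetical.

```python
import random

participants = [f"P{i}" for i in range(1, 21)]  # 20 hypothetical participants

random.seed(42)               # fixed seed only so the example is reproducible
random.shuffle(participants)  # every ordering is equally likely

half = len(participants) // 2
experimental_group = participants[:half]  # will receive the treatment X
control_group = participants[half:]       # no treatment (or a placebo)
print(len(experimental_group), len(control_group))
```

Because assignment ignores every participant characteristic, pre-existing differences (age, ability, motivation) tend to balance out across groups as the sample grows, which is exactly the equivalence the lecture describes.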

What do the symbols O1, O2, O3, O4 and X mean in the lecture’s experimental notation?

The lecture uses O1–O4 to represent observations/tests and X to represent the intervention. Odd-numbered observations (O1 and O3) are pre-tests; even-numbered observations (O2 and O4) are post-tests. The lower-numbered pair (O1, O2) corresponds to the experimental group, while the higher-numbered pair (O3, O4) corresponds to the control group. X marks the treatment applied to the experimental group.
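The mapping above can be laid out as one row per group, with X marking where the intervention falls. This is just the lecture's notation rendered as a data structure, with the odd/even rule applied mechanically.

```python
# One row per group; 'X' marks the intervention in the experimental row.
design = {
    "experimental": ["O1", "X", "O2"],  # pre-test, treatment, post-test
    "control":      ["O3", "O4"],       # pre-test and post-test, no treatment
}

# Odd-numbered observations are pre-tests; even-numbered are post-tests.
observations = [o for row in design.values() for o in row if o != "X"]
pre_tests  = [o for o in observations if int(o[1:]) % 2 == 1]
post_tests = [o for o in observations if int(o[1:]) % 2 == 0]
print(pre_tests, post_tests)
```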

How can extraneous variables be controlled when randomization isn’t feasible?

When randomization is not feasible, the lecture describes matching cases: researchers try to make groups comparable by selecting participants with similar characteristics. It also notes other control strategies such as removing variables from the study (e.g., restricting the sample to males if gender would confound results). Additionally, quasi-experimental approaches can use within-group comparisons (measuring the same group before and after) to track change over time.
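Matching can be sketched as a nearest-neighbor pairing over the characteristics researchers care about. This is a rough illustration under invented assumptions: the participant records, the choice of age and a baseline score as matching variables, and the simple absolute-difference distance are all hypothetical.

```python
# Hypothetical treated participants and a pool of untreated candidates.
treated = [
    {"id": "T1", "age": 14, "baseline": 50},
    {"id": "T2", "age": 16, "baseline": 62},
]
untreated_pool = [
    {"id": "C1", "age": 15, "baseline": 51},
    {"id": "C2", "age": 16, "baseline": 60},
    {"id": "C3", "age": 12, "baseline": 40},
]

def distance(a, b):
    # Crude similarity over the matching characteristics.
    return abs(a["age"] - b["age"]) + abs(a["baseline"] - b["baseline"])

matches = {}
available = list(untreated_pool)
for t in treated:
    best = min(available, key=lambda c: distance(t, c))
    matches[t["id"]] = best["id"]
    available.remove(best)  # each control participant is used at most once

print(matches)
```

The point is not the particular distance function but the logic: comparability is engineered by selection rather than guaranteed by chance, which is why matched designs remain more vulnerable to unmeasured confounders than randomized ones.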

Review Questions

  1. What specific conditions must be met before a researcher can credibly claim that X causes Y?
  2. Compare true and quasi-experimental designs using random selection/assignment and explain why those differences matter for validity.
  3. Using the lecture’s notation, map which observations are pre-tests and post-tests for the experimental and control groups, and explain where X fits.

Key Points

  1. Experimental designs are the primary quantitative method for making defensible cause-and-effect claims when X is manipulated and extraneous variables are controlled.
  2. Causality requires isolating the treatment’s effect; correlation alone can reflect spurious relationships driven by unseen variables.
  3. Experimental studies typically use two homogeneous groups—control and experimental—with clear inclusion/exclusion criteria and sufficient sample size.
  4. Treatment effects are assessed through pre-test and post-test comparisons between experimental and control groups.
  5. Random selection and random assignment are the defining features of true experimental designs; quasi-experimental designs lack full randomization due to already formed groups.
  6. Pre-experimental designs use a single group with pre-test and post-test and lack a control group, limiting causal strength.
  7. Controlling extraneous variables can involve randomization, removing confounding variables, and matching participants when randomization is not feasible.

Highlights

  • The lecture draws a strict line between everyday “causes” and research causality: claiming X causes Y requires eliminating confounds, not just observing a relationship.
  • Randomization is treated as the central mechanism that makes true experiments more credible than quasi-experiments.
  • Experimental notation (O1–O4 and X) encodes the logic of pre-test/post-test measurement and where the intervention is applied.
  • Spurious correlation is illustrated through the smoking–lung cancer example to show how statistical association can fail to imply causation.
  • Quasi-experimental work is justified as doable in social science when researchers must use already formed groups like schools or classes.

Topics

  • Causal Inference
  • True vs Quasi-Experimental
  • Extraneous Variables
  • Randomization
  • Experimental Notation