EXPERIMENTAL DESIGNS: TRUE AND QUASI DESIGNS
Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Experimental designs are the primary quantitative method for making defensible cause-and-effect claims when X is manipulated and extraneous variables are controlled.
Briefing
Experimental designs are the research tool for making credible cause-and-effect claims—specifically, establishing that an independent variable (X) causes an outcome (Y) only after researchers control for other factors that could distort the relationship. In everyday speech people use “causes” loosely, but in research the word “cause” demands evidence strong enough to rule out confounding influences. That’s why experimental designs are treated as the most direct quantitative approach for causality, even though they can feel intimidating to students.
The lecture frames causality as a product of manipulation: researchers deliberately change X (the independent variable) and observe whether Y (the dependent variable) changes. The key is not just giving a treatment, but isolating its effect by holding other variables constant—meaning extraneous variables must be identified and controlled so they cannot threaten the inference that X leads to Y. A central warning is that correlations can be misleading: two variables may move together statistically without any causal link. The lecture's example is the claim that smoking causes lung cancer: non-smokers also develop lung cancer, which points to other unseen factors and shows why a correlation alone cannot establish causation. Experimental logic is designed to prevent that kind of spurious relationship from masquerading as causality.
To make those causal claims, experimental studies rely on several operational requirements. First, participants must be organized into two sufficiently large, homogeneous groups with similar characteristics: a control group that receives no treatment (or a placebo-like alternative) and an experimental group that receives the intervention. Second, researchers must define inclusion and exclusion criteria clearly so the groups are comparable. Third, the treatment effect is assessed by comparing pre-test and post-test outcomes across the two groups; the experimental group's change is interpreted relative to any change in the control group.
The biggest dividing line between “true” and “quasi” experimental designs is randomization. True experimental designs use random selection and random assignment so participants have an equal chance of being placed in either group, which helps ensure equivalence and reduces bias. Quasi-experimental designs lack full randomization because researchers often work with already formed groups—such as schools, classes, or communities—where it’s impractical to rearrange participants. In those settings, researchers still measure change using pre-tests and post-tests, but the design depends on matching and careful control rather than random assignment.
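The random assignment described above can be sketched in a few lines of Python. This is an illustrative helper, not anything from the lecture: the function name, the pool of 20 participants, and the fixed seed are all assumptions for the sake of a reproducible example.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the pool, then split it in half so every participant
    has an equal chance of landing in either group (random assignment)."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental group, control group)

# Hypothetical pool of 20 participants, identified by index
experimental, control = randomly_assign(range(20), seed=42)
print(len(experimental), len(control))  # two groups of 10
```

Because the shuffle gives every ordering equal probability, no participant characteristic can systematically favor one group, which is exactly the equivalence that quasi-experimental designs with pre-formed groups cannot guarantee.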
The lecture also introduces standard experimental notation (O1, O2, O3, O4 and X), where odd-numbered observations (O1, O3) are pre-tests, even-numbered observations (O2, O4) are post-tests, and X denotes the intervention. It distinguishes three broad categories: pre-experimental (typically one group with pre-test and post-test and no control group), quasi-experimental (two groups without random assignment), and true experimental (two groups with random assignment). Across all of them, the goal remains the same: strengthen internal and external validity by controlling threats so researchers can defend the claim that X causes Y. The session closes by previewing upcoming discussion of threats to internal and external validity and how to mitigate them.
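The pre-test/post-test comparison implied by this notation can be made concrete with a small numeric sketch. The group means below are invented for illustration; only the notation (O1–O4, X) comes from the lecture.

```python
# Layout in the lecture's notation:
#   experimental group:  O1 (pre-test)  X (treatment)  O2 (post-test)
#   control group:       O3 (pre-test)                 O4 (post-test)
O1, O2 = 50.0, 65.0  # experimental group means (hypothetical numbers)
O3, O4 = 51.0, 53.0  # control group means (hypothetical numbers)

experimental_change = O2 - O1  # change in the group that received X
control_change = O4 - O3       # change in the group that did not
treatment_effect = experimental_change - control_change
print(treatment_effect)  # 13.0
```

Subtracting the control group's change removes shifts that would have happened anyway (maturation, practice effects), so what remains is the part of the change attributable to X.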
Cornell Notes
Experimental designs are presented as the main quantitative approach for establishing causality—X causing Y—because they rely on deliberate manipulation of the independent variable and systematic control of extraneous factors. Credible causal claims require isolating the treatment’s effect by comparing an experimental group (receiving X) against a control group (not receiving X or receiving a placebo-like alternative), using pre-tests and post-tests. The lecture emphasizes that correlation is not causation and warns about spurious relationships driven by unseen variables. The key distinction between true and quasi-experimental designs is randomization: true experiments use random selection and random assignment, while quasi-experiments use already formed groups where random assignment is not feasible. Pre-experimental designs are also described as one-group pre-test/post-test setups without a control group.
Why does the lecture treat “causality” as a high bar in research rather than a casual claim?
What makes an experimental design different from other quantitative designs in practice?
How do control and experimental groups work, and why are pre-tests and post-tests central?
What does randomization accomplish, and how does it separate true from quasi-experimental designs?
What do the symbols O1, O2, O3, O4 and X mean in the lecture’s experimental notation?
How can extraneous variables be controlled when randomization isn’t feasible?
Review Questions
- What specific conditions must be met before a researcher can credibly claim that X causes Y?
- Compare true and quasi-experimental designs using random selection/assignment and explain why those differences matter for validity.
- Using the lecture’s notation, map which observations are pre-tests and post-tests for the experimental and control groups, and explain where X fits.
Key Points
1. Experimental designs are the primary quantitative method for making defensible cause-and-effect claims when X is manipulated and extraneous variables are controlled.
2. Causality requires isolating the treatment's effect; correlation alone can reflect spurious relationships driven by unseen variables.
3. Experimental studies typically use two homogeneous groups—control and experimental—with clear inclusion/exclusion criteria and sufficient sample size.
4. Treatment effects are assessed through pre-test and post-test comparisons between experimental and control groups.
5. Random selection and random assignment are the defining features of true experimental designs; quasi-experimental designs lack full randomization due to already formed groups.
6. Pre-experimental designs use a single group with pre-test and post-test and lack a control group, limiting causal strength.
7. Controlling extraneous variables can involve randomization, removing confounding variables, and matching participants when randomization is not feasible.
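The matching strategy mentioned in the last key point can be sketched as a simple greedy pairing on one characteristic. Everything here—the function, the participants, and the use of age as the matching variable—is a hypothetical illustration of matching in general, not a method from the lecture.

```python
def match_pairs(treated, comparison, key, tolerance=2):
    """Greedy one-to-one matching: for each treated participant, pick the
    closest still-unmatched comparison participant on the chosen
    characteristic, accepting the pair only if they differ by at most
    `tolerance`."""
    pairs = []
    available = list(comparison)
    for t in treated:
        best = min(available, key=lambda c: abs(key(t) - key(c)), default=None)
        if best is not None and abs(key(t) - key(best)) <= tolerance:
            pairs.append((t, best))
            available.remove(best)  # each comparison participant used once
    return pairs

# Hypothetical participants described only by age
treated = [{"id": "T1", "age": 34}, {"id": "T2", "age": 41}]
comparison = [{"id": "C1", "age": 40}, {"id": "C2", "age": 33}, {"id": "C3", "age": 55}]
pairs = match_pairs(treated, comparison, key=lambda p: p["age"])
print([(a["id"], b["id"]) for a, b in pairs])  # [('T1', 'C2'), ('T2', 'C1')]
```

Matching of this kind makes the two groups comparable on the measured characteristic, but unlike randomization it cannot balance characteristics the researcher did not think to measure—which is why quasi-experimental inferences remain weaker than true experimental ones.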