LESSON 13 - EXPERIMENTAL DESIGNS: TRUE EXPERIMENTAL AND QUASI-EXPERIMENTAL DESIGNS
Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Experimental design is built to support causal claims by manipulating a treatment variable (X) and measuring its effect on an outcome (Y).
Briefing
Experimental design is the quantitative research approach used to establish cause-and-effect: when researchers manipulate an independent variable (the “treatment” or intervention) and observe a change in a dependent variable, they can argue that X caused Y rather than merely occurring alongside it. That causal claim matters because social science explanations often get derailed by coincidence—like assuming that waking up late “caused” missing a bus—when an unseen factor may actually be driving both outcomes. Experimental designs are built to isolate the treatment’s impact by comparing outcomes across groups that differ only in whether they receive the intervention.
A standard experimental setup uses two homogeneous groups: a control group that receives no treatment and an experimental group (treatment group) that does receive the intervention. After the treatment, researchers compare performance between the groups—typically using pre-tests and post-tests to measure conditions before and after the intervention. For example, to test a new teaching methodology, one group of students might be taught using the new method while the other continues with the old approach; student performance is then compared after the semester. The logic is straightforward: if the experimental group changes more (or differently) than the control group, the difference can be attributed to the treatment—provided key threats to causal inference are handled.
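The comparison logic described above can be sketched in a short simulation. This is an illustrative toy, not part of the lesson: the class sizes, score distributions, and the built-in treatment effect of 8 points are all invented, and gains are compared between groups to estimate that effect.

```python
import random

random.seed(42)

# Toy simulation of a pre-test/post-test control group comparison.
# All numbers (group size, score levels, the treatment effect) are invented.

def simulate_scores(n, treatment_effect):
    """Return (pre, post) score lists for n students."""
    pre = [random.gauss(60, 10) for _ in range(n)]
    # Everyone improves somewhat over the semester; the treatment adds more.
    post = [p + random.gauss(5, 3) + treatment_effect for p in pre]
    return pre, post

ctrl_pre, ctrl_post = simulate_scores(30, treatment_effect=0)  # old method
expt_pre, expt_post = simulate_scores(30, treatment_effect=8)  # new method

def mean(xs):
    return sum(xs) / len(xs)

# Compare gains (post minus pre) rather than raw post-test scores, so a
# chance difference in starting levels does not masquerade as an effect.
ctrl_gain = mean(ctrl_post) - mean(ctrl_pre)
expt_gain = mean(expt_post) - mean(expt_pre)

print(f"Control group gain:         {ctrl_gain:.1f}")
print(f"Experimental group gain:    {expt_gain:.1f}")
print(f"Estimated treatment effect: {expt_gain - ctrl_gain:.1f}")
```

Because both groups sit the same pre-test and post-test, the extra gain in the experimental group is the estimate of the treatment's impact.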
Two obstacles must be addressed before a causal conclusion is credible. First are spurious correlations, where variables move together statistically but are not causally linked because a third, unmeasured factor drives both. Smoking and lung cancer is the example used here: the relationship is statistically strong, yet the correlation alone cannot rule out some other factor driving both behaviors, and lung cancer also occurs among non-smokers, so the statistics by themselves do not settle causation. Second are extraneous variables, other influences that could confound the treatment effect. Researchers control extraneous variables so that the observed change in the outcome is not actually caused by something else.
Control strategies include randomization (random selection and random assignment) to make groups equivalent and reduce bias; removing a variable by excluding it from the study (though this can narrow generalizability); matching cases when randomization is not feasible; and using subjects as their own controls by measuring the same participants under both conditions (before and after the intervention). In practice, the tightest control is possible in laboratories, which corresponds to true experimental designs. Social science often relies on field settings where random assignment may not be fully achievable, leading to quasi-experimental designs.
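Random assignment, the first of the strategies above, amounts to a simple procedure: shuffle the participant pool and split it, so every participant has an equal chance of landing in either group. A minimal sketch (the roster names are invented for illustration):

```python
import random

random.seed(7)

# Hypothetical participant pool; names are invented for illustration.
participants = [f"student_{i:02d}" for i in range(1, 21)]

pool = participants[:]   # copy so the original roster is untouched
random.shuffle(pool)     # random order makes the split unbiased
half = len(pool) // 2
experimental_group = pool[:half]  # will receive the treatment (X)
control_group = pool[half:]       # will not

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```

Because assignment depends only on the shuffle, any pre-existing differences among participants are spread across both groups by chance rather than by anyone's choice.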
The lesson also lays out the notation and core elements of experimental designs: O represents tests (pre- or post-), X represents the treatment, and R indicates randomization. Odd-numbered O’s typically denote pre-tests and even-numbered O’s denote post-tests, with group labels distinguishing experimental from control conditions. Key elements include manipulation (deliberately applying the planned change), randomization (ensuring equal chances of selection/assignment), and control (introducing procedures that limit extraneous influence). Finally, experimental procedures are grouped into three categories: pre-experimental designs (often one-group pre-test/post-test without a control group), quasi-experimental designs (such as pre-test/post-test control group designs and interrupted time series variants without random assignment), and true experimental designs (including pre-test/post-test control group with randomization, post-test-only control group with randomization, and Solomon four-group designs). The takeaway is that better design choices—especially around control and randomization—are what make causal claims defensible, setting up the next topic: threats to validity.
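Written out in that notation, the designs named above take the following conventional shapes (a sketch only; each row is one group, time runs left to right, and the exact O subscripts vary by textbook):

```
One-group pre-test/post-test (pre-experimental):
    O1   X   O2

Pre-test/post-test control group, no random assignment (quasi-experimental):
    O1   X   O2        experimental group
    O3        O4        control group

Pre-test/post-test control group with randomization (true experimental):
R   O1   X   O2        experimental group
R   O3        O4        control group

Post-test-only control group with randomization (true experimental):
R        X   O1        experimental group
R             O2        control group

Solomon four-group (true experimental):
R   O1   X   O2
R   O3        O4
R        X   O5
R             O6
```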
Cornell Notes
Experimental design is used to establish causality by manipulating an independent variable (treatment X) and measuring its effect on a dependent variable (outcome Y). Credible causal claims require controlling extraneous variables and avoiding spurious correlations, where two variables are related statistically but not causally linked. Control is achieved through methods such as randomization (random selection and random assignment), removing confounding variables, matching participants, and using the same subjects as their own controls. True experimental designs rely on random selection and random assignment, while quasi-experimental designs are used when random assignment is not feasible in field settings. Designs are commonly represented with O (tests), X (treatment), and R (randomization), and they come in pre-experimental, quasi-experimental, and true experimental forms.
- What makes an experimental design different from other quantitative designs when it comes to causality?
- How do spurious correlations undermine causal conclusions, and what example illustrates the risk?
- What are extraneous variables, and what does it mean to control them?
- Which strategies are used to control extraneous variables when randomization is possible or not possible?
- How do true experimental and quasi-experimental designs differ in allocation of participants?
- What do the symbols O, X, and R represent in experimental design notation, and how are pre-tests and post-tests distinguished?
Review Questions
- Why can a statistically significant relationship still fail to support a causal claim in experimental research?
- Describe two different methods for controlling extraneous variables and explain how each affects group equivalence.
- Compare pre-experimental, quasi-experimental, and true experimental designs in terms of control groups and randomization.
Key Points
1. Experimental design is built to support causal claims by manipulating a treatment variable (X) and measuring its effect on an outcome (Y).
2. A control group (no treatment) and an experimental group (treatment) are essential for attributing differences in outcomes to the intervention.
3. Spurious correlations arise when variables are related statistically but a third factor drives both, weakening cause-and-effect conclusions.
4. Extraneous variables threaten causal inference; controlling them is necessary so the observed effect is not due to confounding influences.
5. Randomization (random selection and random assignment) is the strongest method for making groups equivalent and reducing bias.
6. When random assignment isn’t feasible in social science field settings, quasi-experimental designs are used instead of true experimental designs.
7. Experimental designs use standard notation: O for tests, X for treatment, and R for randomization, with pre-tests and post-tests distinguished by their position in the sequence.