
LESSON 13 - EXPERIMENTAL DESIGNS: TRUE EXPERIMENTAL AND QUASI EXPERIMENTAL DESIGNS

5 min read

Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Experimental design is built to support causal claims by manipulating a treatment variable (X) and measuring its effect on an outcome (Y).

Briefing

Experimental design is the quantitative research approach used to establish cause-and-effect: when researchers manipulate an independent variable (the “treatment” or intervention) and observe a change in a dependent variable, they can argue that X caused Y rather than merely occurring alongside it. That causal claim matters because social science explanations often get derailed by coincidence—like assuming that waking up late “caused” missing a bus—when an unseen factor may actually be driving both outcomes. Experimental designs are built to isolate the treatment’s impact by comparing outcomes across groups that differ only in whether they receive the intervention.

A standard experimental setup uses two homogeneous groups: a control group that receives no treatment and an experimental group (treatment group) that does receive the intervention. After the treatment, researchers compare performance between the groups—typically using pre-tests and post-tests to measure conditions before and after the intervention. For example, to test a new teaching methodology, one group of students might be taught using the new method while the other continues with the old approach; student performance is then compared after the semester. The logic is straightforward: if the experimental group changes more (or differently) than the control group, the difference can be attributed to the treatment—provided key threats to causal inference are handled.
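The comparison logic above can be sketched in Python. This is a hypothetical simulation, not data from the lesson: the score distributions and gain sizes are invented purely to illustrate how the two groups' changes are compared after the intervention.

```python
import random

random.seed(42)

# Hypothetical simulation of the two-group setup described above.
# All score parameters are invented for illustration only.
def simulate_scores(n, pre_mean, gain_mean):
    """Return (pre, post) score lists for n students."""
    pre = [random.gauss(pre_mean, 5) for _ in range(n)]
    post = [p + random.gauss(gain_mean, 3) for p in pre]
    return pre, post

# Control group keeps the old method (smaller gain, by assumption);
# the experimental group gets the new teaching method (larger gain).
ctrl_pre, ctrl_post = simulate_scores(30, pre_mean=60, gain_mean=2)
expt_pre, expt_post = simulate_scores(30, pre_mean=60, gain_mean=8)

def mean(xs):
    return sum(xs) / len(xs)

ctrl_gain = mean(ctrl_post) - mean(ctrl_pre)
expt_gain = mean(expt_post) - mean(expt_pre)

# If the groups were equivalent at the start, the extra gain in the
# experimental group is attributed to the treatment.
treatment_effect = expt_gain - ctrl_gain
```

The point of the sketch is the final line: only the difference between the groups' gains, not the experimental group's gain alone, is credited to the treatment.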

Two obstacles must be addressed to make a credible causal conclusion. First are spurious correlations, where variables move together statistically but are not causally linked because a third, unmeasured factor drives both. The lesson's example is smoking and lung cancer: the relationship may be statistically strong, yet the fact that lung cancer also occurs among non-smokers suggests other, unmeasured factors may be at work. Second are extraneous variables—other influences that could confound the treatment effect. Researchers control extraneous variables so that the observed outcome change is not actually caused by something else.

Control strategies include randomization (random selection and random assignment) to make groups equivalent and reduce bias, removing a variable by excluding it from the study (though this can narrow generalizability), matching cases when randomization is not feasible, and using subjects as their own controls by measuring the same participants under both conditions (before and after the intervention). In practice, the strictest control is possible in laboratories, which corresponds to true experimental designs. Social science often relies on field settings where random assignment may not be fully achievable, leading to quasi-experimental designs.
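Of these strategies, random assignment is the simplest to make concrete. A minimal sketch, using invented participant IDs (any real study would draw from its own sampling frame):

```python
import random

random.seed(0)

# Hypothetical participant pool; IDs are invented for illustration.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffling gives every participant an equal chance of landing in
# either group, which is what makes the groups equivalent on average.
random.shuffle(participants)
half = len(participants) // 2
control_group = participants[:half]        # receives no treatment
experimental_group = participants[half:]   # receives the intervention
```

Because assignment depends only on the shuffle, any extraneous characteristic (ability, motivation, background) is spread across both groups by chance rather than by anyone's choice.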

The lesson also lays out the notation and core elements of experimental designs: O represents tests (pre- or post-), X represents the treatment, and R indicates randomization. Odd-numbered O’s typically denote pre-tests and even-numbered O’s denote post-tests, with group labels distinguishing experimental from control conditions. Key elements include manipulation (deliberately applying the planned change), randomization (ensuring equal chances of selection/assignment), and control (introducing procedures that limit extraneous influence). Finally, experimental procedures are grouped into three categories: pre-experimental designs (often one-group pre-test/post-test without a control group), quasi-experimental designs (such as pre-test/post-test control group designs and interrupted time series variants without random assignment), and true experimental designs (including pre-test/post-test control group with randomization, post-test-only control group with randomization, and Solomon four-group designs). The takeaway is that better design choices—especially around control and randomization—are what make causal claims defensible, setting up the next topic: threats to validity.

Cornell Notes

Experimental design is used to establish causality by manipulating an independent variable (treatment X) and measuring its effect on a dependent variable (outcome Y). Credible causal claims require controlling extraneous variables and avoiding spurious correlations, where two variables are related statistically but not causally linked. Control is achieved through methods such as randomization (random selection and random assignment), removing confounding variables, matching participants, and using the same subjects as their own controls. True experimental designs rely on random selection and random assignment, while quasi-experimental designs are used when random assignment is not feasible in field settings. Designs are commonly represented with O (tests), X (treatment), and R (randomization), and they come in pre-experimental, quasi-experimental, and true experimental forms.

What makes an experimental design different from other quantitative designs when it comes to causality?

Experimental designs aim to establish cause-and-effect by manipulating the independent variable as a treatment (X) and observing changes in the dependent variable (Y). Causality here means that changing X leads to an observable change in Y. That causal claim becomes credible only when researchers compare an experimental group that receives the treatment with a control group that does not, and when extraneous variables and spurious correlations are addressed.

How do spurious correlations undermine causal conclusions, and what example illustrates the risk?

A spurious correlation occurs when two variables are statistically related but not causally linked because an unseen third factor causes both. The lesson’s example is smoking and lung cancer: even if smoking correlates with lung cancer, lung cancer also occurs among non-smokers, suggesting other unmeasured variables may be responsible. Without controlling those factors, researchers cannot confidently claim smoking causes lung cancer.

What are extraneous variables, and what does it mean to control them?

Extraneous variables are other influences that can affect the dependent variable and threaten the validity of the treatment effect. Controlling extraneous variables means removing or limiting their impact so the observed outcome change can be attributed to X rather than to a confound. The lesson emphasizes that researchers must ensure no other factors are driving the effect they observe.

Which strategies are used to control extraneous variables, with and without randomization?

The lesson lists several approaches: (1) Randomization—random selection and random assignment to control and experimental groups to make groups equivalent and reduce bias. (2) Removing a variable—exclude the confound by not studying it, which reduces generalization. (3) Matching cases—pair participants with similar characteristics and assign them to groups when randomization isn’t feasible. (4) Using subjects as their own controls—measure the same participants under both conditions (e.g., before and after the new teaching method) to reduce other differences.
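Strategy (3), matching, can also be sketched briefly. This is an illustrative example only: the names and pre-test scores are invented, and a real study would additionally randomize which member of each matched pair receives the treatment.

```python
# Hypothetical pre-test scores; all names and values are invented.
pretest = {"Amy": 55, "Ben": 71, "Cara": 56, "Dan": 70, "Eve": 62, "Finn": 63}

# Sort by score so adjacent participants are the closest matches.
ranked = sorted(pretest, key=pretest.get)

control_group, experimental_group = [], []
for a, b in zip(ranked[0::2], ranked[1::2]):
    control_group.append(a)       # one member of each matched pair
    experimental_group.append(b)  # its close match gets the treatment
```

Each group ends up with one member of every similar-scoring pair, so the groups start out roughly equivalent on the matched characteristic even though no random assignment took place.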

How do true experimental and quasi-experimental designs differ in allocation of participants?

True experimental designs require both random selection and random assignment, allowing stronger control over equivalence between groups. Quasi-experimental designs are used when random selection/assignment cannot be fully implemented—often because researchers work with existing groups in field settings—so allocation may not be random. The lesson links this distinction directly to whether random assignment is feasible.

What do the symbols O, X, and R represent in experimental design notation, and how are pre-tests and post-tests distinguished?

O denotes tests, X denotes the treatment/intervention, and R denotes randomization. Pre-tests are typically represented by earlier O’s (often odd-numbered positions like O1 and O3), administered before the intervention, while post-tests are later O’s (often even-numbered positions like O2 and O4), administered after the intervention. Group labels distinguish which O’s belong to experimental versus control conditions.
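The notation translates directly into arithmetic. A minimal sketch of the pre-test/post-test control group design, using invented group-mean scores purely to show how the O's combine into an effect estimate:

```python
# The pre-test/post-test control group design is written as:
#   R  O1  X  O2   (experimental group: pre-test, treatment, post-test)
#   R  O3     O4   (control group: pre-test, no treatment, post-test)
# Group-mean scores below are invented for illustration.
O1, O2 = 58.0, 74.0   # experimental group: pre-test, post-test
O3, O4 = 57.0, 63.0   # control group: pre-test, post-test

experimental_gain = O2 - O1   # change with the treatment
control_gain = O4 - O3        # change without the treatment
effect = experimental_gain - control_gain
```

Here the experimental group gained 16 points and the control group 6, so 10 points of the gain are attributed to the treatment rather than to whatever else happened between the tests.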

Review Questions

  1. Why can a statistically significant relationship still fail to support a causal claim in experimental research?
  2. Describe two different methods for controlling extraneous variables and explain how each affects group equivalence.
  3. Compare pre-experimental, quasi-experimental, and true experimental designs in terms of control groups and randomization.

Key Points

  1. Experimental design is built to support causal claims by manipulating a treatment variable (X) and measuring its effect on an outcome (Y).

  2. A control group (no treatment) and an experimental group (treatment) are essential for attributing differences in outcomes to the intervention.

  3. Spurious correlations arise when variables are related statistically but a third factor drives both, weakening cause-and-effect conclusions.

  4. Extraneous variables threaten causal inference; controlling them is necessary so the observed effect is not due to confounding influences.

  5. Randomization (random selection and random assignment) is the strongest method for making groups equivalent and reducing bias.

  6. When random assignment isn’t feasible in social science field settings, quasi-experimental designs are used instead of true experimental designs.

  7. Experimental designs use standard notation: O for tests, X for treatment, and R for randomization, with pre-tests and post-tests distinguished by their position in the sequence.

Highlights

Causality in experimental research means manipulating X produces a measurable change in Y—not just that X and Y move together.
Spurious correlation is the key warning sign: statistical association can be caused by an unseen third variable.
Random assignment is treated as the main safeguard for equivalence between control and experimental groups.
True experimental designs require random selection and random assignment; quasi-experimental designs often rely on existing groups without full random allocation.
Experimental notation (O, X, R) encodes when tests occur and whether participants were randomized, making design logic easier to track.
