
Systematic Literature Review. Exclusion and Inclusion Criteria (S4.1)

Research With Fawad
5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use a two-stage screening process: title/abstract screening to remove clearly irrelevant records, followed by full-text review to confirm eligibility.

Briefing

Systematic literature reviews rise or fall on study selection: once the search is done, researchers must screen thousands of hits down to a defensible set of studies using clear inclusion and exclusion criteria. Because it’s impossible to read hundreds or thousands of papers in full, the process starts with title and abstract screening to remove clearly irrelevant records, then moves to full-text review to confirm whether each potentially eligible study truly meets the predefined criteria. A key point is that exclusion decisions should be based on the study’s characteristics—such as whether it contains the required type of evidence—rather than on convenience or preference.
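The two-stage flow above can be sketched as a simple pipeline that also keeps the stage-by-stage counts needed later for reporting. This is an illustrative sketch, not a tool from the video; the record structure and predicate names are assumptions.

```python
def screen(records, passes_title_abstract, passes_full_text):
    """Two-stage screening: title/abstract first, then full text.

    Returns the included records plus per-stage counts for a
    PRISMA-style selection trail.
    """
    # Stage 1: drop clearly irrelevant records on title/abstract alone.
    after_stage1 = [r for r in records if passes_title_abstract(r)]
    # Stage 2: full-text review of the remaining candidates.
    included = [r for r in after_stage1 if passes_full_text(r)]
    counts = {
        "identified": len(records),
        "excluded_title_abstract": len(records) - len(after_stage1),
        "excluded_full_text": len(after_stage1) - len(included),
        "included": len(included),
    }
    return included, counts
```

The predicates would encode the review's predefined criteria, so exclusion decisions rest on study characteristics rather than reviewer convenience.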

Documentation is treated as non-negotiable. Every inclusion or exclusion decision needs to be recorded, often in a spreadsheet or table, and reported transparently using a PRISMA-style flow diagram (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). The goal is to make the selection trail auditable: how many records were found, how many were screened, how many were excluded at each stage, and how many ultimately entered the analysis. Screening ideally involves two independent reviewers to reduce bias and improve reliability; when that isn’t feasible, the work can be done by one person and later checked by independent reviewers or experts. Disagreements are resolved through discussion, typically by involving a third reviewer.

The criteria themselves must be tightly aligned with the review’s research objectives. Inclusion criteria define what qualifies a study—examples given include focusing on a specific population, using a particular methodology (such as survey data collected via questionnaires), restricting the time window (e.g., 2001–2025), limiting language (e.g., English), and specifying publication type (e.g., journal articles). Exclusion criteria define what disqualifies studies, such as excluding non-peer-reviewed work (conference papers, viewpoints, editorials), studies lacking primary data, studies relying only on secondary data, or studies with an irrelevant geographic focus. The transcript also highlights a conceptual alignment test: if the review targets outcomes of servant leadership, studies that only examine antecedents (without servant leadership outcomes) should be excluded.

Before full screening, criteria can be pilot-tested on a small subset (around 50 studies) to check clarity and practicality, then refined if the rules prove confusing or too restrictive. All criteria should be justified and explicitly documented.

After selecting studies, the next step is assessing methodological quality and risk of bias. Standard tools are recommended based on study design—such as CASP (Critical Appraisal Skills Programme), AMSTAR for appraising systematic reviews, and Joanna Briggs Institute instruments. Risk of bias is evaluated across domains including selection bias, performance bias, detection bias, and reporting bias. Some tools produce ratings (high/medium/low), and studies with high risk of bias may be excluded from synthesis or included with caution, with the concern clearly flagged in the analysis.

Concrete examples illustrate how published reviews report screening outcomes. One PRISMA-based ESG review reports a final set of 85 included articles and notes that reference lists were checked for additional eligible studies. Another example describes database searches (including ProQuest, Springer, SAGE, and others), the number of non-duplicate citations screened, exclusions at title/keyword and abstract stages, and the final count included after abstract review. A further example shows that some reviews describe inclusion/exclusion criteria without a diagram, though a flowchart is preferred for reader clarity on what was removed and why.

Cornell Notes

Study selection in a systematic literature review depends on predefined inclusion and exclusion criteria applied in a structured screening process. Researchers first screen titles and abstracts to remove clearly irrelevant records, then review full texts to verify that each study meets the criteria (e.g., empirical evidence, required methodology, time window, language, and publication type). Decisions must be documented at every step and reported transparently, often with a PRISMA flow diagram, so readers can trace how the final set was reached. Screening is ideally done by two independent reviewers; disagreements are resolved through discussion with a third reviewer. After selection, methodological quality and risk of bias are assessed using tools such as CASP or Joanna Briggs Institute instruments, with high-risk studies either excluded or included cautiously.

Why can’t researchers simply read every search result in a systematic review?

Because search queries often return hundreds or thousands of records. The transcript emphasizes that it’s not feasible to read 2,000–3,000 studies (or even 500) in full. Instead, the review uses a staged screening process: title and abstract screening removes clearly irrelevant studies, shrinking the pool to a manageable number for full-text assessment.

What’s the difference between inclusion and exclusion criteria, and how should they connect to the research question?

Inclusion criteria specify what qualifies a study—such as a particular population, a required methodology (e.g., surveys using questionnaires), a defined time period (example given: 2001–2025), language (example: English), and publication type (example: journal articles). Exclusion criteria specify what disqualifies studies—such as non-peer-reviewed work (conference papers, editorials), lack of primary data, secondary-data-only studies, or irrelevant geographic focus. The criteria must align tightly with the review’s objectives; for instance, a review focused on outcomes of servant leadership should exclude studies that only measure antecedents.
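The example criteria above (2001–2025, English, journal articles, primary data, peer review) can be written down as explicit, checkable rules. A minimal sketch, assuming hypothetical field names for each study record:

```python
# Illustrative encoding of the example criteria from the video.
CRITERIA = {
    "year_range": (2001, 2025),
    "language": "English",
    "publication_type": "journal article",
}

def is_eligible(study):
    """Return True if a study meets the inclusion criteria and
    triggers none of the exclusion criteria."""
    lo, hi = CRITERIA["year_range"]
    return (
        lo <= study["year"] <= hi
        and study["language"] == CRITERIA["language"]
        and study["publication_type"] == CRITERIA["publication_type"]
        and study["has_primary_data"]   # exclude studies lacking primary data
        and study["peer_reviewed"]      # exclude conference papers, editorials
    )
```

Writing the criteria as data makes them easy to document, justify, and revise after pilot testing.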

How should researchers handle evidence-type mismatches during screening?

Abstract-level screening can exclude studies that don’t contain the required evidence. The transcript gives an example: if the review focuses on empirical studies, a paper whose abstract indicates qualitative interviews without quantitative empirical tools can be excluded because it doesn’t meet the empirical requirement.

What documentation and transparency practices are expected during screening?

Every inclusion/exclusion decision should be recorded, including counts at each stage. The transcript recommends using an Excel sheet, table, or Word document to log decisions, and reporting the process with a PRISMA-style flow diagram. This ensures consistency and transparency, showing how many records were found, screened, excluded (at title/keyword and abstract stages), and included.
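A decision log like the one described can be kept as rows of stage, decision, and reason, then tallied for the flow diagram. A hypothetical sketch (field names and reasons are assumptions, not from the video):

```python
from collections import Counter

# Illustrative screening log: one row per decision per stage.
log = [
    {"id": 1, "stage": "title/abstract", "decision": "exclude", "reason": "irrelevant topic"},
    {"id": 2, "stage": "title/abstract", "decision": "include", "reason": ""},
    {"id": 2, "stage": "full text",      "decision": "exclude", "reason": "no primary data"},
    {"id": 3, "stage": "title/abstract", "decision": "include", "reason": ""},
    {"id": 3, "stage": "full text",      "decision": "include", "reason": ""},
]

# Tally exclusion reasons for PRISMA-style reporting.
reasons = Counter(e["reason"] for e in log if e["decision"] == "exclude")
# Records surviving full-text review are the included set.
included = [e["id"] for e in log if e["stage"] == "full text" and e["decision"] == "include"]
```

The same tally works whether the log lives in Python, a spreadsheet, or a Word table; what matters is that every decision and its reason is recoverable.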

How is risk of bias assessed after study selection, and what happens to high-risk studies?

Quality assessment tools are chosen based on study design, such as CASP or Joanna Briggs Institute instruments (and AMSTAR for systematic reviews). Risk of bias is evaluated across selection, performance, detection, and reporting biases. Some tools rate studies as high/medium/low quality; studies with high risk of bias may be excluded from synthesis or included with caution, with the risk clearly identified in the analysis.

What’s a practical way to refine inclusion/exclusion criteria before full screening?

Pilot testing. The transcript suggests applying the criteria to a small subset (about 50 downloaded studies) during the development phase to test clarity and applicability. Based on what works or fails, researchers can revise the criteria before screening the full set.

Review Questions

  1. What specific information should be captured at each screening stage to support PRISMA-style transparency?
  2. Give two examples of inclusion criteria and two examples of exclusion criteria, and explain how each would be justified by a research objective.
  3. How do risk-of-bias ratings influence whether studies are synthesized or handled with caution?

Key Points

  1. Use a two-stage screening process: title/abstract screening to remove clearly irrelevant records, followed by full-text review to confirm eligibility.

  2. Define inclusion criteria (population, methodology, time window, language, publication type) and exclusion criteria (non-peer-reviewed work, lack of primary data, secondary-only data, irrelevant geography) tightly aligned to the research question.

  3. Document every inclusion/exclusion decision and report the selection trail transparently, commonly via a PRISMA flow diagram.

  4. Aim for two independent reviewers during screening to reduce bias; resolve disagreements through discussion and a third reviewer when needed.

  5. Pilot-test inclusion/exclusion criteria on a small subset (around 50 studies) to ensure the rules are clear and workable.

  6. After selection, assess methodological quality and risk of bias using appropriate tools (e.g., CASP, Joanna Briggs Institute), and treat high-risk studies by excluding them or synthesizing them with explicit caution.

Highlights

Screening is designed to cut thousands of search hits down to a defensible set—first by titles/abstracts, then by full-text eligibility checks.
Inclusion/exclusion criteria must match the review’s objective at a conceptual level (e.g., outcomes vs. antecedents of servant leadership).
Transparency isn’t optional: every decision should be logged and summarized with a PRISMA-style flow diagram.
Risk of bias is assessed across selection, performance, detection, and reporting domains, with high-risk studies handled explicitly in synthesis.

Topics

Mentioned

  • PRISMA
  • CASP
  • AMSTAR
  • ESG
  • ABS
  • ABDC
  • SAGE
  • Taylor & Francis
  • Emerald
  • Wiley
  • ProQuest
  • EBSCO