Step-by-step process to a systematic literature review for a Q1 journal (in-depth training)

Academic English Now · 6 min read

Based on Academic English Now's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Only pursue a review topic when there is a large evidence base, unresolved/conflicting findings, and no recent comprehensive review covering the same scope.

Briefing

Systematic literature reviews become publishable in Q1 journals when researchers start with a real research gap and then follow a tightly documented, step-by-step process that stays replicable. The core idea is straightforward: only topics with a large existing evidence base, unresolved or conflicting findings, and no recent comprehensive review are worth turning into a review paper—and once that foundation is set, every search, screening, and data-extraction decision must be recorded so the final synthesis is credible.

The training frames review papers as a practical publishing route for PhD students because the heavy lifting—reading and mapping the literature—often already happens during a thesis. The difference is that a review paper requires systematic organization and synthesis rather than new experiments. Review papers also tend to attract heavy citation since they function as “go-to” references for the state of a field, especially in fast-moving areas like medicine where systematic reviews and meta-analyses guide future research and practice.

When deciding whether a review paper is appropriate, three conditions are emphasized. First, the topic needs a substantial body of literature—described as hundreds of papers. Second, the field must show uncertainty or inconsistency, such as studies where an intervention sometimes works and sometimes fails to produce significant results. Third, the topic should not have been reviewed recently; if a review exists, the scope may need to shift to a different segment or update the evidence.

The “review paper family” is then broken into three main types. A scoping review is exploratory and works best for broader questions; it can also help determine whether a later systematic review is justified. A systematic review narrows to a specific question, often focused on the effectiveness of an intervention, and synthesizes findings without necessarily producing a single numeric estimate. A meta-analysis goes further by applying statistical analysis to combine comparable studies and calculate effect sizes—meaning it requires statistical competence and careful selection of studies with sufficiently similar designs, populations, and outcomes.

The step-by-step workflow for systematic reviews (and the same underlying structure for scoping reviews and meta-analyses) starts with identifying the research gap and converting it into research questions. Next comes keyword generation: begin with broad terms, expand into specific terms and synonyms (e.g., thesis/dissertation), use wildcards, and then narrow to roughly five to ten high-value keywords. Then researchers set inclusion and exclusion criteria to keep search results feasible and relevant—such as limiting publication types, time windows, populations, regions, methodologies, or study features like control groups or placebo use.
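The keyword-to-query step described above can be sketched as a small script. This is an illustrative example only: the keyword groups, wildcards, and the OR-within-groups / AND-between-groups pattern are assumptions based on the connectors the training describes, not terms from the video itself.

```python
# Build a Boolean search string from synonym groups (hypothetical example).
# Within each group, synonyms are OR-combined; groups are AND-combined.
# The asterisk wildcard captures word families (e.g., supervis* matches
# "supervisor", "supervision", "supervising") in most databases.
keyword_groups = [
    ["thesis", "dissertation*"],
    ["supervis*", "mentor*"],
    ["doctoral", "PhD", "postgraduate"],
]

def build_query(groups):
    clauses = ["(" + " OR ".join(group) + ")" for group in groups]
    return " AND ".join(clauses)

print(build_query(keyword_groups))
# -> (thesis OR dissertation*) AND (supervis* OR mentor*) AND (doctoral OR PhD OR postgraduate)
```

A string like this can be pasted into a database's advanced-search field, though exact wildcard and operator syntax varies by database and should be checked against each one's documentation.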

Search execution follows: choose a small set of databases (typically three to five), combine keywords with Boolean connectors (AND/OR/NOT) and wildcards, and keep a thorough search record so others can replicate the process. Results are then narrowed by screening titles and abstracts, ideally with a second reviewer to reduce bias. Data extraction and analysis come next, using structured note-taking tables that capture each study's aims, methods, and key findings, with studies grouped into themes or research questions as patterns emerge. Finally, the synthesis and presentation should not list every result: they must organize the evidence by themes and deliver clear interpretation, avoiding "waffling," where description replaces conclusions.

Writing is guided by a blueprint with a fixed order: introduction (importance, background, gap), methodology (sample/search details, inclusion/exclusion, analysis approach), results/discussion (organized synthesis with interpretation), and conclusion (main findings, practical implications, limitations, and future research). The training also stresses planning and deadline setting, working backward from submission dates, and using ready-made language resources to keep writing concise and argument-driven.

Cornell Notes

A systematic literature review is worth pursuing only when three conditions align: a large existing evidence base (hundreds of studies), unresolved or conflicting findings, and no recent comprehensive review. From that research gap, researchers derive clear research questions, generate targeted keywords (including synonyms and wildcards), and apply explicit inclusion/exclusion criteria to keep the search manageable and valid. The search must be replicable: select a few databases, use structured search strings, record hit counts before and after filters, and screen titles/abstracts—ideally with a second reviewer to reduce bias. Data extraction then organizes findings into themes, and the final synthesis must interpret patterns rather than list study-by-study results. The blueprint for writing follows a fixed structure: introduction, methodology, results/discussion, and conclusion.

What three conditions make a topic suitable for a systematic review rather than a general literature survey?

The training highlights three requirements: (1) a substantial body of literature—described as hundreds of papers—so the review can be meaningful; (2) uncertainty or lack of clarity in prior studies, such as interventions that show mixed outcomes (significant effects in some studies, no significant results or negative findings in others); and (3) no recent review on the same topic, or at least not one that covers the full scope needed—otherwise the work would duplicate what already exists.

How do scoping reviews, systematic reviews, and meta-analyses differ in purpose and output?

A scoping review is exploratory and uses broader questions; it can also help decide whether a systematic review is warranted later. A systematic review narrows to a specific question (often effectiveness of an intervention) and synthesizes evidence to indicate whether something works, but it does not necessarily produce a single numeric estimate. A meta-analysis uses statistical methods to combine comparable studies and calculate effect sizes, which requires selecting studies with sufficiently similar designs, populations, and outcomes and having statistical competence.

Why are inclusion and exclusion criteria treated as a core step rather than an afterthought?

Without criteria, keyword searches can return tens of thousands of irrelevant results, making the project infeasible and weakening validity. Criteria narrow the evidence to what fits the review's purpose—examples include limiting publication types (excluding conference papers or book chapters), restricting years (e.g., the last 10 or 20 years), selecting specific populations (e.g., master's students rather than PhD students), focusing on regions, restricting methodologies, or requiring specific study features like control groups or placebo use.
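As a concrete illustration of how such criteria might be written down before searching begins (every item below is an invented example, not taken from the training):

```
Inclusion criteria
- Peer-reviewed journal articles
- Published in the last 20 years
- Population: master's students
- Study includes a control or comparison group

Exclusion criteria
- Conference papers and book chapters
- Studies without an intervention
- Non-English publications
```

Writing the criteria out in this form before running any searches makes them easy to report verbatim in the methodology section.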

What does “replicable search” mean in practice during the database search phase?

Replicability comes from documentation: choosing a small set of databases (typically three to five), using structured connectors (AND/OR/NOT) and wildcards (e.g., an asterisk to capture word families), and recording the search strings and initial hit counts. After applying filters (years, publication type, inclusion/exclusion rules), researchers record the remaining hit counts and note the date and database used. A search tracker is recommended to keep all these numbers and decisions in one place for later methodology reporting.
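One entry in such a search tracker might look like the following (all values here are invented for illustration; the fields mirror the documentation items described above):

```
Database:        Scopus (hypothetical example)
Date searched:   2024-03-15
Search string:   ("thesis" OR "dissertation*") AND supervis*
Initial hits:    4,812
After filters:   1,037   (journal articles only, last 20 years)
After screening: 62      (titles/abstracts reviewed by two researchers)
```

Repeating one such entry per database gives the numbers needed for the methodology section and, in many fields, for a flow diagram of the screening process.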

How should the final synthesis be written to avoid “waffling”?

The training warns against describing study-by-study details without a clear endpoint. The review’s job is to synthesize and interpret: group findings by themes or research questions, extract key points, and explain what the combined evidence means. “Waffling” is treated as descriptive writing that lacks a clear pyramid apex (an argument or conclusion), so the reader never gets a decisive interpretation.
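A structured extraction table of the kind mentioned earlier supports this kind of synthesis, because grouping by theme happens in the table before any prose is written. The rows below are entirely invented placeholders showing the shape, not real studies:

```
Study            Aim                         Method              Key finding             Theme
Author A (20XX)  Test intervention X         RCT                 Significant effect      Effectiveness
Author B (20XX)  Replicate X in new setting  Quasi-experiment    No significant effect   Mixed evidence
Author C (20XX)  Explore mechanisms of X     Qualitative study   Context-dependent use   Boundary conditions
```

Sorting such a table by the theme column turns study-by-study notes into the theme-by-theme structure the synthesis needs.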

What is the fixed blueprint order for writing the review paper sections?

The blueprint uses a consistent order: (1) Introduction—topic importance, brief literature background, and identification of the research gap; (2) Methodology—research sample and sampling approach (keywords, databases, hit counts), explicit inclusion/exclusion criteria and how they were applied, and the data analysis approach; (3) Results and Discussion—organized synthesis by themes or research questions with interpretation; (4) Conclusion—summary of main findings, practical implications (where appropriate), limitations, and suggestions for future research.

Review Questions

  1. What specific evidence signals a “research gap” and how does that gap translate into research questions?
  2. Describe the sequence from keyword generation to screening to data extraction, including what must be recorded for methodological transparency.
  3. How does a meta-analysis’s evidence selection differ from a systematic review’s, and why does that matter for effect sizes?

Key Points

  1. Only pursue a review topic when there is a large evidence base, unresolved/conflicting findings, and no recent comprehensive review covering the same scope.

  2. Convert the research gap into precise research questions before building the search strategy.

  3. Generate keywords systematically using synonyms, wildcards, and iterative narrowing to about five to ten high-value terms.

  4. Use explicit inclusion/exclusion criteria to control search volume and ensure the evidence matches the review’s purpose (e.g., time window, population, methodology, control/placebo requirements).

  5. Run searches across a small set of databases (typically three to five) using structured connectors and document hit counts before and after filters for replicability.

  6. Screen titles and abstracts to reach the final study set, ideally using two reviewers to reduce bias.

  7. Write the synthesis as interpretation, not a catalogue of findings—organize by themes and maintain a clear “pyramid apex” conclusion.

Highlights

A review paper is positioned as a practical publishing path because thesis literature work can be reorganized into a systematic, publishable synthesis without collecting new experimental data.
Scoping reviews fit broader questions and can inform whether a later systematic review is justified; meta-analyses require statistical pooling and careful comparability to compute effect sizes.
Replicability hinges on recording search strings, databases, dates, and hit counts at each filtering stage so the methodology can be repeated.
The biggest writing failure mode is “waffling”—describing studies without a clear interpretive endpoint—so the synthesis must explain what the combined evidence means.
The writing blueprint follows a fixed order: introduction (importance, background, gap), methodology (sample/search/inclusion-exclusion/analysis), results & discussion (organized synthesis), and conclusion (findings, implications, limitations, future research).
