Step-by-step process for a systematic literature review aimed at a Q1 journal (in-depth training)
Based on Academic English Now's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Systematic literature reviews become publishable in Q1 journals when researchers start with a real research gap and then follow a tightly documented, step-by-step process that stays replicable. The core idea is straightforward: only topics with a large existing evidence base, unresolved or conflicting findings, and no recent comprehensive review are worth turning into a review paper—and once that foundation is set, every search, screening, and data-extraction decision must be recorded so the final synthesis is credible.
The training frames review papers as a practical publishing route for PhD students because the heavy lifting—reading and mapping the literature—often already happens during a thesis. The difference is that a review paper requires systematic organization and synthesis rather than new experiments. Review papers also tend to attract heavy citation since they function as “go-to” references for the state of a field, especially in fast-moving areas like medicine where systematic reviews and meta-analyses guide future research and practice.
When deciding whether a review paper is appropriate, three conditions are emphasized. First, the topic needs a substantial body of literature—described as hundreds of papers. Second, the field must show uncertainty or inconsistency, such as studies where an intervention sometimes works and sometimes fails to produce significant results. Third, the topic should not have been reviewed recently; if a review exists, the scope may need to shift to a different segment or update the evidence.
The “review paper family” is then broken into three main types. A scoping review is exploratory and works best for broader questions; it can also help determine whether a later systematic review is justified. A systematic review narrows to a specific question, often focused on the effectiveness of an intervention, and synthesizes findings without necessarily producing a single numeric estimate. A meta-analysis goes further by applying statistical analysis to combine comparable studies and calculate effect sizes—meaning it requires statistical competence and careful selection of studies with sufficiently similar designs, populations, and outcomes.
The step-by-step workflow for systematic reviews (and the same underlying structure for scoping reviews and meta-analyses) starts with identifying the research gap and converting it into research questions. Next comes keyword generation: begin with broad terms, expand into specific terms and synonyms (e.g., thesis/dissertation), use wildcards, and then narrow to roughly five to ten high-value keywords. Then researchers set inclusion and exclusion criteria to keep search results feasible and relevant—such as limiting publication types, time windows, populations, regions, methodologies, or study features like control groups or placebo use.
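The keyword step above can be sketched as a small script that assembles a Boolean search string from synonym groups. The term groups, wildcard syntax (`*`), and quoting rules below are illustrative assumptions; each database (e.g., Scopus, Web of Science, PubMed) has its own operator conventions, so treat this as a sketch of the idea, not a ready-made query.

```python
# Sketch: assemble a Boolean search string from keyword groups.
# Synonyms within a group are joined with OR; groups are joined with AND.
# Term lists and wildcard syntax here are assumptions for illustration only.

def build_search_string(keyword_groups):
    """Join synonyms with OR inside each group, then join groups with AND."""
    clauses = []
    for group in keyword_groups:
        # Quote multi-word phrases so they are searched as a unit.
        clause = " OR ".join(f'"{t}"' if " " in t else t for t in group)
        clauses.append(f"({clause})")
    return " AND ".join(clauses)

groups = [
    ["thesis", "dissertation"],           # synonyms joined with OR
    ["writ*"],                            # wildcard catches writing/writer/written
    ["doctoral", "PhD", "postgraduate"],  # population-related synonyms
]
print(build_search_string(groups))
```

Starting broad and then pruning each group toward the five to ten highest-value terms keeps the final string both replicable and manageable.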
Search execution follows: choose a small set of databases (typically three to five), combine keywords with connectors (AND/OR/NOT) and wildcards, and maintain a thorough search record so others can replicate it. Results are narrowed by screening titles and abstracts, ideally with a second person to reduce bias. Data extraction and analysis come next, using structured note-taking tables that capture study aims, methods, and key findings, with grouping into themes or research questions as patterns emerge. Finally, the synthesis and presentation should not list every result; they must organize evidence by themes and deliver clear interpretation, avoiding "waffling," where description replaces conclusions.
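The record-keeping described above (search log with hit counts, extraction table with aims/methods/findings) can be kept as simple CSV files. The column names and placeholder values below are assumptions for illustration, not a prescribed template:

```python
# Minimal sketch of a replicable search log and a data-extraction table,
# written to CSV with the standard library. Column names and values are
# illustrative placeholders, not taken from any real review.
import csv

search_log = [
    # One row per database query: record hits before and after filters.
    {"database": "Scopus", "query": "(thesis OR dissertation) AND writ*",
     "date": "2024-05-01", "hits_raw": 412, "hits_filtered": 187},
]

extraction_table = [
    # One row per included study: aim, method, key finding, assigned theme.
    {"study": "Study A", "aim": "placeholder", "method": "RCT",
     "finding": "placeholder", "theme": "feedback interventions"},
]

def write_table(path, rows):
    """Write a list of dicts to CSV so the record is shareable and replicable."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

write_table("search_log.csv", search_log)
write_table("extraction_table.csv", extraction_table)
```

Keeping these tables from the first search onward is what makes the methodology section easy to write later: the hit counts, filters, and themes are already documented.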
Writing is guided by a blueprint with a fixed order: introduction (importance, background, gap), methodology (sample/search details, inclusion/exclusion, analysis approach), results/discussion (organized synthesis with interpretation), and conclusion (main findings, practical implications, limitations, and future research). The training also stresses planning and deadline setting, working backward from submission dates, and using ready-made language resources to keep writing concise and argument-driven.
Cornell Notes
A systematic literature review is worth pursuing only when three conditions align: a large existing evidence base (hundreds of studies), unresolved or conflicting findings, and no recent comprehensive review. From that research gap, researchers derive clear research questions, generate targeted keywords (including synonyms and wildcards), and apply explicit inclusion/exclusion criteria to keep the search manageable and valid. The search must be replicable: select a few databases, use structured search strings, record hit counts before and after filters, and screen titles/abstracts—ideally with a second reviewer to reduce bias. Data extraction then organizes findings into themes, and the final synthesis must interpret patterns rather than list study-by-study results. The blueprint for writing follows a fixed structure: introduction, methodology, results/discussion, and conclusion.
What three conditions make a topic suitable for a systematic review rather than a general literature survey?
How do scoping reviews, systematic reviews, and meta-analyses differ in purpose and output?
Why are inclusion and exclusion criteria treated as a core step rather than an afterthought?
What does “replicable search” mean in practice during the database search phase?
How should the final synthesis be written to avoid “waffling”?
What is the fixed blueprint order for writing the review paper sections?
Review Questions
- What specific evidence signals a “research gap” and how does that gap translate into research questions?
- Describe the sequence from keyword generation to screening to data extraction, including what must be recorded for methodological transparency.
- How does a meta-analysis’s evidence selection differ from a systematic review’s, and why does that matter for effect sizes?
Key Points
1. Only pursue a review topic when there is a large evidence base, unresolved/conflicting findings, and no recent comprehensive review covering the same scope.
2. Convert the research gap into precise research questions before building the search strategy.
3. Generate keywords systematically using synonyms, wildcards, and iterative narrowing to about five to ten high-value terms.
4. Use explicit inclusion/exclusion criteria to control search volume and ensure the evidence matches the review's purpose (e.g., time window, population, methodology, control/placebo requirements).
5. Run searches across a small set of databases (typically three to five) using structured connectors, and document hit counts before and after filters for replicability.
6. Screen titles and abstracts to reach the final study set, ideally using two reviewers to reduce bias.
7. Write the synthesis as interpretation, not a catalogue of findings: organize by themes and maintain a clear "pyramid apex" conclusion.