
Writing the Best Paper (Vol 5) - How to Write a Survey Paper? Research Tutorials with Dr. Sourish

6 min read

Based on Enago Read's (previously Raxter.io) video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.

TL;DR

Choose one primary survey focus from the defined categories and keep it consistent throughout the paper to avoid incoherence.

Briefing

Survey papers succeed or fail on focus and comparison, not on how many papers get listed. The core message is that a strong survey paper picks one clear focus (from a defined set), stays contemporary within that focus, and builds a structured, multidimensional comparative analysis that clearly shows how studies reinforce, extend, or contradict each other—while ending with gaps and future directions. That matters because survey papers often become the entry point researchers use to find relevant work, which can translate into higher visibility and citations.

The session breaks survey papers into three primary types—then adds narrower variants. One broad category surveys the contemporary landscape: it maps recent research questions around a topic (e.g., within NLP, questions like sentiment analysis, document similarity, language modeling, or event extraction). A second broad category organizes by approach: it collects the different methods or paradigms used to address a specific research question (e.g., sentiment analysis on Twitter using different modeling paradigms). A third broad category organizes by evaluation method: it compares how the same research question has been tested, emphasizing the diversity of evaluation designs and setups. Beyond these, the talk describes narrower survey styles that focus on (1) variants of a research question across settings (Twitter vs. Facebook vs. Amazon reviews), (2) variants of an approach (e.g., different deep learning architectures for sentiment analysis), or (3) variants of an evaluation technique while keeping the evaluation style consistent.

A key warning follows: weak survey papers typically suffer from mixed focus, lack of contemporary coverage, and weak or missing critical comparison. The most common failure is “touching on everything,” where the writing becomes a reference dump rather than a coherent synthesis. Another major weakness is insufficient multidimensional structuring—meaning the survey doesn’t lay out a blueprint of the dimensions that define the chosen focus. For example, if the focus is research questions, the structure should include variations in formulation, assumptions, and scope (what is inside vs. outside the study boundary). If the focus is research approach or evaluation, the structure should include variations in originality, method details, evaluation setup (controlled environment, parameters, qualitative vs. quantitative procedures), and resulting findings.

Comparison itself must be explicit and categorized. The session frames comparative study as three types: reinforcement (supporting a prior claim), augmentation (extending it—either by adding complementary contributions or by extracting new insights from the same evidence), and contradiction (ranging from unfavorable judgment to evidence-backed contradiction, and finally to proposing an alternative that is argued to be better). The talk emphasizes that these comparison types should be crystal clear in the writing, not implied.

Practical “good survey” habits are then laid out: state the primary focus in the introduction; make each focus dimension its own section; ensure a substantial portion of the cited work appears across those sections (suggested 35–40%); include critical opinions during comparisons; and use summary tables to keep the synthesis readable. If the focus is approach or evaluation, the tables should capture study samples/data sets, control or evaluation environments, parameters, and outcomes. The conclusion should enumerate gaps and future directions tied directly to the chosen focus.

Finally, the session connects these ideas to Raxter (Raxter.io), describing how it can help manage the exponential growth of reading lists by filtering related papers based on selected sections (problem statement, approach, or key abstract ideas), attaching papers to specific survey sections, comparing sections from a collection, and using “key insights” to extract the most relevant dimensions of a paper quickly when time is limited.

Cornell Notes

A strong survey paper is built around a single, clearly chosen focus and a structured comparative analysis of contemporary work. The session classifies survey papers into three primary types—landscape-by-research-questions, landscape-by-approach, and landscape-by-evaluation—plus narrower variants that track question variants, approach variants, or evaluation variants. Weak surveys typically mix multiple focuses, omit recent work, and fail to do balanced critical comparison. Comparison must be explicit: studies can reinforce, augment (with complementary contributions or new inferences), or contradict (with evidence or even an alternative argued to be better). The payoff is practical: well-structured surveys become a research “hub,” helping readers find gaps and often improving citation impact.

What are the three main types of survey papers, and how do they differ in what they organize?

The talk describes three primary survey types. (1) Contemporary landscape surveys organize by research questions: they map recent problems and sub-questions within a topic (e.g., in NLP, sentiment analysis, document similarity, language modeling, event extraction). (2) Approach-focused surveys organize by methods/paradigms for a specific research question (e.g., different modeling approaches used for sentiment analysis on Twitter). (3) Evaluation-focused surveys organize by evaluation methods: they compare how the same research question has been tested, emphasizing differences in evaluation designs and setups.

Why does “focus” determine whether a survey is good, and what does a weak survey look like?

A survey needs one primary focus chosen from the defined categories; mixing multiple focuses creates incoherence and turns the paper into a reference dump. Weak surveys also fail by lacking contemporary work (not staying current within the chosen focus) and by missing balanced critical comparison—writing a few lines per paper without synthesizing how studies relate. The talk stresses that simply listing many papers does not make a survey; the synthesis must be comparative and critical.

What does multidimensional structuring mean in practice for a survey paper?

Multidimensional structuring means building a blueprint of dimensions that define the chosen focus, then turning those dimensions into sections. For research-question-focused surveys, dimensions include variations in formulation, assumptions, and scope (what is included vs. explicitly outside the study boundary). For approach/evaluation-focused surveys, dimensions include variations in originality, significant method details, evaluation setup (controlled environment, qualitative vs. quantitative procedures), and differences in findings/results. Each dimension should be covered systematically rather than scattered.

How should comparisons between papers be categorized in a survey?

Comparisons should be explicitly labeled as reinforcement, augmentation, or contradiction. Reinforcement means one paper supports another by repeating or agreeing with its claims; augmentation goes further by adding complementary contributions or by extracting new insights/inferences from the same evidence; contradiction ranges from unfavorable opinions to evidence-backed contradiction and finally to proposing an alternative idea argued to be superior. The talk warns against vague comparison—writers must state which comparison type is happening and what the added value or conflict is.

What structural and writing practices improve survey quality and readability?

The session recommends stating the primary focus in the introduction and keeping every section tied to that focus. It suggests ensuring a substantial share of the covered work appears across the sections (about 35–40% as a guideline), adding critical opinions during comparisons, and using summary tables. For approach/evaluation surveys, tables should include study sample/data sets, control/evaluation environment, parameters, and findings/results. The conclusion should enumerate gaps and future directions grounded in the focus.

How can Raxter help with the practical problem of an exploding reading list?

Raxter is presented as a literature analysis tool that helps filter and organize papers by selected aspects. Users can start from a key paper and then retrieve closely related papers based on a chosen selector (problem statement/research question, similar approach, or key abstract ideas). Papers can be attached to specific survey sections like sticky notes, compared from a collection, and summarized via "key insights," which quickly dissects papers into dimensions (e.g., research goal, originality, methodology, evaluation setup) so writers can read only what matters.

Review Questions

  1. What are the six focus categories implied by the talk (three primary plus three narrower variants), and which one would you choose for a survey you plan to write?
  2. Give an example of reinforcement vs. augmentation vs. contradiction between two hypothetical papers, and explain what evidence would make each comparison type clear.
  3. How would you design a multidimensional outline for a survey focused on research questions versus one focused on evaluation methods?

Key Points

  1. Choose one primary survey focus from the defined categories and keep it consistent throughout the paper to avoid incoherence.

  2. Stay contemporary within the chosen focus; a survey’s value depends on recent work, not just a large bibliography.

  3. Build a multidimensional blueprint for the focus (e.g., research-question surveys: formulation, assumptions, scope) and turn each dimension into a dedicated section.

  4. Do balanced critical comparison rather than summarizing abstracts; comparisons must be explicit and categorized.

  5. Label comparisons as reinforcement, augmentation, or contradiction, and specify what new contribution or evidence drives the relationship.

  6. Use summary tables to make synthesis scannable—especially tables that include data sets/study samples, evaluation setups, parameters, and outcomes.

  7. End with future directions tied directly to identified gaps in the chosen focus, not generic recommendations.

Highlights

A survey paper isn’t a reference dump; it’s a structured, critical comparison anchored to one clear focus.
Multidimensional structuring turns abstract “themes” into concrete sections—like variations in formulation, assumptions, and scope for research-question surveys.
Comparisons must be categorized as reinforcement, augmentation, or contradiction, with the added value or evidence made explicit.
Summary tables and a focus-consistent outline are presented as the practical tools that make survey writing readable and rigorous.
Raxter is positioned as a way to control the exponential growth of reading lists by filtering papers based on selected aspects and extracting “key insights” by dimension.

Topics

  • Survey Paper Writing
  • Survey Paper Types
  • Comparative Study
  • Multidimensional Structuring
  • Literature Analysis Tool
