How To Get New Ideas from Research Papers - Part 1 | Research Tutorials with Dr. Sourish

5 min read

Based on Enago Read (Previously Raxter.io)'s video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Start gap hunting at the problem-definition section, not after results, to avoid solution bias.

Briefing

Research ideas don’t come from reading more papers—they come from reading the right parts of papers with a gap-hunting method. With researchers often able to read only about one paper per week, the backlog becomes overwhelming, and the real risk is losing time and missing opportunities hidden in problem statements and assumptions. The core finding here is a practical workflow for literature surveys that turns each paper into a structured hunt for (1) what problem the authors are really trying to solve, (2) what assumptions quietly limit that problem, and (3) what “points to ponder” can become the seed of a new contribution.

The method starts before results, methods, or conclusions. For each selected paper, the reader is instructed to jump straight to the section where the problem is defined and described in detail—then resist reading the rest. Instead, the reader highlights that problem-definition text and immediately drafts a “canvas” of all the sub-problems or dimensions that would need to be solved to fully address the research problem end-to-end. After that initial list, the reader compares it to what the paper itself identifies: which sub-problems match, which the paper omits, and which the paper includes but the reader missed. The gaps that emerge aren’t automatically “novelty.” Each missing or extra dimension must be judged for importance—ideally through discussion with labmates, teammates, or a supervisor—because some omissions don’t matter, while some “new” items may be irrelevant to the larger goal.
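The canvas comparison is, at its core, a two-way set difference between what the reader listed and what the paper listed. A minimal sketch of that comparison, where every sub-problem name is an invented example rather than anything from the transcript:

```python
# Sketch: comparing a reader's sub-problem "canvas" against the paper's
# stated problem dimensions. All names below are hypothetical examples.

reader_canvas = {"data collection", "noise handling", "scalability", "evaluation"}
paper_dimensions = {"data collection", "evaluation", "interpretability"}

# Dimensions the paper omits: candidate gaps, pending a relevance check
# with labmates or a supervisor.
missed_by_paper = reader_canvas - paper_dimensions

# Dimensions the reader overlooked: personal blind spots to study.
missed_by_reader = paper_dimensions - reader_canvas

print(sorted(missed_by_paper))   # ['noise handling', 'scalability']
print(sorted(missed_by_reader))  # ['interpretability']
```

Neither output list is automatically a research gap; as the briefing notes, each entry still has to be judged for importance against the larger goal.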

A second gap engine targets assumptions. The workflow asks readers to list every assumption the authors rely on, then test when those assumptions hold and when they fail. If an assumption rarely holds, it becomes a potential fault line for a rebuttal or a new direction. The reader then asks whether removing one or more assumptions changes the problem’s nature—often turning it into a new, more tractable, or more meaningful research question. The transcript emphasizes that assumptions can be fuzzy or crisply formalized; making vague assumptions concrete can reveal entirely new dimensions of the problem.
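One way to keep that assumption audit systematic is to record each assumption with the conditions under which it holds and fails, then filter for the fragile ones. A minimal sketch under stated assumptions: the record fields and both example assumptions are invented for illustration, not taken from the transcript.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str           # the assumption, stated as concretely as possible
    holds_when: str     # conditions under which it is valid
    fails_when: str     # conditions under which it breaks
    rarely_holds: bool  # fragile assumptions are candidate fault lines

# Hypothetical audit of a paper's assumptions.
assumptions = [
    Assumption("Labels are noise-free", "curated benchmarks",
               "real-world annotation", rarely_holds=True),
    Assumption("Data fits in memory", "small corpora",
               "web-scale corpora", rarely_holds=False),
]

# Fragile assumptions are the ones worth removing or relaxing,
# which may yield a new, derived version of the research question.
fault_lines = [a.text for a in assumptions if a.rarely_holds]
print(fault_lines)  # ['Labels are noise-free']
```

The act of filling in `holds_when` and `fails_when` is itself the step the transcript calls making fuzzy assumptions concrete.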

To manage this work across many papers, the discussion highlights features in Raxter.io (referred to as “racks” in the transcript), including “key insights” to collate research goals from a paper, working notes to store gap lists (e.g., “gaps in research problem formulation”), and discussion threads or critique templates for structured questioning. The platform also supports “compare from your collection,” letting readers link the current section to earlier papers in their own library by argument or assumption similarity—helping connect present gaps to past literature.

Finally, the workflow adds a timeline discipline: when working through a set of papers, read them from oldest to newest. That ordering makes it easier to see how problem formulations and assumptions evolve over time, and how communities “upgrade” or twist research questions—turning literature review into a map of maturation rather than a pile of summaries. The payoff is a repeatable loop: for each paper, extract problem dimensions and assumptions, validate which gaps matter, and then summarize the opportunity gaps that remain after the comparison.

Cornell Notes

The transcript lays out a gap-identification workflow for literature surveys that focuses on problem statements and assumptions rather than results. For each paper, the reader first highlights the section defining the problem, then drafts a full “canvas” of sub-problems needed to solve the issue end-to-end. Next, the reader compares that canvas to what the authors actually identify, marking both missing and extra dimensions and judging whether the differences matter for the larger research goal. A second pass lists the paper’s assumptions, tests when they hold or fail, and considers whether removing them yields a new version of the research question. Tools in Raxter.io help store working notes, collate research goals (“key insights”), and connect related sections across previously read papers (“compare from your collection”).

Why does the workflow start by reading only the problem-definition section and not the results or methods?

It’s designed to prevent “solution bias.” By jumping to the problem statement first, the reader can build an end-to-end canvas of what must be solved, then compare that canvas to the paper’s own framing. That comparison is where gaps appear: which sub-problems the reader identified but the authors didn’t, and which the authors include that the reader missed. Reading results and methods too early can narrow attention to what the paper already chose to study, making it harder to notice missing dimensions.

How does the “canvas” comparison generate actionable research gaps?

The reader drafts a list of all aspects/sub-problems required to solve the full research problem. Then the reader checks how many of those aspects match the authors’ identified problem dimensions. The gaps split into two categories: (1) aspects the reader found but the authors didn’t, and (2) aspects the authors found but the reader didn’t. The transcript stresses that both categories require judgment—some differences may be unimportant, and some “new” items may not actually advance the larger goal.

What makes assumptions a reliable source of new research directions?

Assumptions define scope and set boundaries for the research question. The workflow asks readers to list assumptions, evaluate when they hold versus when they fail, and treat rare or fragile assumptions as potential fault lines. It also asks whether removing assumptions changes the problem’s intrinsic nature—often producing a derived version of the problem that can become a new research opportunity.

What does it mean to “make fuzzy assumptions concrete,” and why does that matter?

In STEM papers, assumptions may be crisply formalized mathematically; in humanities/social science, they may be expressed through discipline-specific framing. If assumptions are vague, the workflow suggests bulleting them out and attempting to specify them more precisely. That act can “change the color” of the problem—revealing new dimensions and turning vague limitations into testable or discussable research constraints.

How can readers connect their current gaps to earlier literature without losing context?

The transcript describes using Raxter.io’s “compare from your collection.” After selecting a section (e.g., a research goal or an assumption), the tool surfaces earlier papers in the user’s library that are related argumentatively to that section. Readers can then pin similarities/differences and attach “gaps identified” notes, creating a bridge between present work and prior reading done months or years earlier.

Why read papers from oldest to newest when hunting gaps?

Reading in ascending publication order helps reveal evolution: how problem formulations and assumptions get revised over time. The transcript frames this as a timeline view—like a Facebook timeline—showing how communities mature around a research problem and how new problem derivations emerge. That historical perspective can clarify what’s genuinely changing versus what’s merely being rephrased.
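The ordering discipline amounts to sorting the reading list by publication year before the gap-hunting loop begins. A trivial sketch, with made-up titles and years:

```python
# Hypothetical reading list; titles and years are invented.
papers = [
    {"title": "Paper C", "year": 2021},
    {"title": "Paper A", "year": 2015},
    {"title": "Paper B", "year": 2018},
]

# Oldest first, so shifts in problem framing and assumptions read
# as a timeline rather than a pile of summaries.
for paper in sorted(papers, key=lambda p: p["year"]):
    print(paper["year"], paper["title"])
# 2015 Paper A
# 2018 Paper B
# 2021 Paper C
```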

Review Questions

  1. When building a gap list from a paper’s problem statement, what checks determine whether a “missing” dimension is actually worth pursuing?
  2. How would you evaluate an assumption that holds only under narrow conditions—what research move does the workflow suggest?
  3. What specific benefits come from using “compare from your collection” when trying to connect current ideas to earlier papers?

Key Points

  1. Start gap hunting at the problem-definition section, not after results, to avoid solution bias.
  2. Draft an end-to-end “canvas” of sub-problems for each paper’s research problem, then compare it to the authors’ stated dimensions.
  3. Treat gaps as hypotheses: validate whether missing/extra dimensions matter for the larger research goal through discussion with peers or supervisors.
  4. List assumptions explicitly, test when they hold or fail, and consider whether removing them creates a new derived research question.
  5. Make fuzzy assumptions concrete by rewriting them in clearer, more formal terms (or clearer disciplinary framing).
  6. Use Raxter.io working notes to store gap lists and “key insights” to collate research goals across a paper.
  7. Read multiple papers in ascending publication order to track how problem formulations and assumptions evolve over time.

Highlights

The workflow’s first move is counterintuitive: read only the problem statement, highlight it, and stop there, then build a canvas of what must be solved and compare it to the authors’ framing.
Assumptions are treated as gap generators: fragile assumptions (rarely true) can justify rebuttals or entirely new research directions.
Raxter.io’s “compare from your collection” is positioned as a way to connect today’s gaps to earlier papers by argument/assumption similarity.
Reading papers from oldest to newest turns literature review into a timeline of how research questions and assumptions mature.