
How to read a research paper | search for and read papers with me | phd student advice

Ciara Feely
5 min read

Based on Ciara Feely's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use Google Scholar citation tracing: start from a known useful paper, then scan the “cited by” list for adjacent work.

Briefing

A practical workflow for finding and triaging research papers—then reading only what's necessary—can save PhD students weeks of time without sacrificing understanding. The process starts with Google Scholar and a citation-tracing loop: begin from a paper already considered useful, then follow who cites it to surface newer or adjacent work. Titles get the first pass, where relevance is judged quickly (for example, a barefoot-running paper may be set aside for marathon-focused injury prevention, while a paper about body mass and running injuries is kept). This stage is also where tools matter: the Mendeley web importer extension for Chrome lets papers be saved directly into the library, avoiding manual downloads and clutter.

After titles, the next filter is the abstract. Instead of defaulting to “read everything,” the workflow treats abstracts as a decision gate—whether to read the full paper, skim selectively, or discard. The abstract review also supports targeted follow-ups. One example involves a study examining beliefs about running shoe and insole influence on running-related injuries, including how running shoe salespersons and physiotherapy students differ in what they think causes injuries and which shoe attributes they recommend. The key takeaway from the abstract is not just the findings (such as salespersons reporting greater confidence than students, and differences in beliefs about training errors and shoe price), but also the opportunity to trace claims back to their sources—especially if marketing or other influences are suggested as underlying causes.

Once a paper earns full attention, the reading strategy becomes structured and staged. The workflow begins with the abstract to confirm fit, then checks the conclusion for the main results and the direction of future work. For a marathon-focused machine learning paper—“Running with cases: a case-based reasoning approach to running your best marathon”—the abstract highlights a novel application: case-based reasoning to predict personal best finish times and recommend pacing plans. The conclusion signals whether the work is worth deeper investment.
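The paper's actual method is not reproduced in this summary, but the core idea of case-based reasoning—predict a new runner's marathon time from the most similar past cases—can be sketched in a few lines. Everything below (the feature set, the distance measure, the top-k averaging, and all numbers) is an illustrative assumption, not the paper's algorithm.

```python
# Minimal case-based reasoning sketch: predict a marathon finish time
# from the most similar past runner profiles. Features, distance metric,
# and k are illustrative assumptions, not the paper's method.
import math

# Hypothetical case base: (10k_pace_sec_per_km, weekly_km, marathon_time_min)
cases = [
    (240, 80, 175),
    (270, 60, 200),
    (300, 50, 225),
    (330, 40, 255),
    (360, 30, 290),
]

def predict_time(pace, weekly_km, k=3):
    """Average the marathon times of the k most similar profiles."""
    def dist(case):
        # Scale each feature roughly so neither dominates the distance.
        return math.hypot((case[0] - pace) / 60, (case[1] - weekly_km) / 20)
    nearest = sorted(cases, key=dist)[:k]
    return sum(c[2] for c in nearest) / k

print(round(predict_time(285, 55)))  # prediction for a mid-pack profile
```

The same retrieve-and-reuse structure underlies the paper's pacing recommendations: once similar cases are found, their race plans, not just their finish times, can be reused.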

Next comes the introduction for motivation and background: why predicted finish time and pacing matter to runners, what affects pacing, and what pacing strategies exist. After that, the reader moves to figures, plots, and figure captions, using them to understand the system’s behavior before diving into the full method. This approach emphasizes comprehension over exhaustive reading: if the graphs and algorithm descriptions provide enough clarity, the full methods section can wait.

Results and discussion sections come after the visual and algorithmic scan, with attention to evaluation outcomes and limitations. In the marathon case-based reasoning example, the reader notes how prediction error varies by sex and by model type, and how performance degrades for slower runners (around 5–6 hours), including lower profile similarity. Finally, the methods section is reserved for when implementation is the goal—so the paper’s technical details are read only after the research question, approach, and evidence are already understood.

Overall, the method balances speed and rigor: triage through citation tracing and abstracts, then read in layers—conclusion, context, visuals, results—before committing to implementation-level detail.

Cornell Notes

The workflow focuses on two stages: finding relevant papers fast and reading them in layers. Citation tracing starts from a known useful paper on Google Scholar, then uses “cited by” lists to discover newer work, with titles as the first relevance filter. Abstracts act as the second gate, deciding whether to read the full paper or only parts—avoiding the trap of trying to read everything. When a paper is worth full attention, reading proceeds in a deliberate order: abstract → conclusion → introduction → figures/plots and algorithm overview → results/discussion → methods last for implementation. This matters because it reduces wasted time while still building enough understanding to judge fit, extract findings, and later implement methods.

How does citation tracing help a PhD student discover papers that are likely relevant?

Start with a paper already considered useful, then use Google Scholar’s “Cited by” view to list newer articles that reference it. The student then scans titles to quickly judge relevance. For instance, a barefoot-running paper may be set aside for a marathon-focused injury topic, while a paper about body mass index and running-related injuries is kept because it aligns with the injury risk-factor angle.

Why use abstracts as a decision gate instead of reading every paper in full?

Abstracts provide the fastest signal about whether the paper’s findings match the student’s needs. The workflow explicitly rejects the idea that every discovered paper must be read end-to-end. If the abstract suggests only that the results are interesting but the methodology won’t be used, the student can skim selectively rather than invest in the full methods.

What role do tools like Mendeley play in the workflow?

The process uses the Mendeley extension for Chrome to streamline saving papers. When a relevant article appears, the student uses the web importer to send it directly into Mendeley, reducing manual downloading and keeping the library organized for later reading.

What is the staged reading order for a paper that will be read in full?

The reading sequence is: abstract (confirm fit), conclusion (capture main findings and future direction), introduction (background and motivation), figures/plots and figure descriptions plus algorithm overview (understand how the system works), then results and discussion (evaluate performance and limitations), and finally the methods section only when implementation is required.

How does the workflow use figures and evaluation results to spot strengths and weaknesses early?

Before deep methods reading, the student inspects plots and figure captions to understand system behavior and performance trends. In the marathon case-based reasoning example, the reader observes that prediction error differs across models and that slower runners (about 5–6 hours) show higher error and lower profile similarity—an early signal that the approach needs improvement for that group.
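The kind of check described above—does prediction error grow for slower runners?—is simple to sketch as a grouped evaluation. The records and band widths below are invented for illustration, not the paper's reported results.

```python
# Sketch: group absolute prediction errors by finish-time band to spot
# where a model degrades. All numbers here are invented for illustration.
from collections import defaultdict

# (actual_finish_hours, absolute_error_minutes) -- hypothetical records
records = [(3.2, 6), (3.8, 8), (4.5, 9), (4.9, 11), (5.4, 18), (5.9, 22)]

def error_by_band(recs, band_hours=1.0):
    """Mean absolute error per finish-time band (e.g. 3-4h, 4-5h, 5-6h)."""
    bands = defaultdict(list)
    for hours, err in recs:
        bands[int(hours // band_hours)].append(err)
    return {f"{b}-{b + 1}h": sum(v) / len(v) for b, v in sorted(bands.items())}

print(error_by_band(records))
```

A rising trend toward the 5-6 hour band in a table like this is exactly the early warning signal a reader can extract from the results section before ever opening the methods.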

Review Questions

  1. When triaging papers, what two filters come first, and what decisions do they support?
  2. In what order does the workflow read the main sections of a paper, and why is the methods section saved for later?
  3. Give an example of how figures/plots can reveal a limitation without reading the full methodology.

Key Points

  1. Use Google Scholar citation tracing: start from a known useful paper, then scan the “cited by” list for adjacent work.
  2. Apply a two-step triage: titles for quick relevance, abstracts for deciding whether to read fully, skim, or discard.
  3. Avoid reading everything by default; match reading depth to whether the methodology will be used.
  4. Use Mendeley (via the Chrome extension) to import papers directly and keep the research library organized.
  5. When committing to a full read, follow a layered order: abstract → conclusion → introduction → figures/plots/algorithm overview → results/discussion → methods last.
  6. Let evaluation evidence guide early judgments about strengths and limitations, such as how error and similarity metrics change across runner groups.

Highlights

Citation tracing turns one useful paper into a pipeline of new candidates by following who cites it.
Abstracts function as a gate—reading full papers only when the question and evidence match personal research needs.
Figures and plot captions can provide enough understanding to delay the methods section until implementation is necessary.
In the marathon case-based reasoning example, performance drops for slower runners, visible through higher prediction error and lower profile similarity.
The workflow is deliberately staged to reduce wasted time while still building a credible understanding of the research approach.

Topics

  • Literature Review
  • Citation Tracing
  • Abstract Screening
  • Paper Reading Strategy
  • Machine Learning for Sports
