
Academia's dirty little secret | The eye-opening truth about PhD research

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

The h-index reduces a researcher’s career to one number: the largest h such that h of their papers each have at least h citations.

Briefing

Academic careers often hinge less on the quality of the science and more on a handful of citation metrics, especially the h-index. The h-index condenses a researcher’s output into one number: the largest value h such that h of their papers have each received at least h citations. Because hiring, promotion, and funding decisions frequently rely on such numbers, researchers learn to optimize for them, even though the metric was never designed to judge career success.
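
As a concrete illustration, here is a minimal sketch of the calculation in Python (not from the video; the function name and citation counts are invented for the example):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still clears the threshold
        else:
            break  # every later paper is cited even less, so stop
    return h

# Invented citation counts: four papers have at least 4 citations,
# but no five papers have at least 5, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4
```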

The h-index is widely tracked across databases such as Google Scholar, Scopus, and Web of Science, and different platforms can produce different values. Google Scholar, in particular, tends to show higher and faster-moving citation counts, which can make it especially influential in practice. Once a single number becomes the yardstick, incentives shift. Researchers who are “clever” will find ways to raise the metric—through tactics that range from ethically gray to outright manipulation.

One common approach is to secure authorship on papers with minimal contribution. Large, high-prestige journals like Nature can have author lists that look “insane,” and the payoff is straightforward: papers in top journals typically attract more citations, and the prestige itself can help careers. The transcript describes a scenario where a PhD student asked whether a supervisor should be added to the author list even though the supervisor had done no experiments. The supervisor initially agreed, since authorship could boost the h-index, then later chose to be listed in the acknowledgements instead, highlighting how easily credit can be traded for metric gains. The same dynamic can cut both ways: authors may be pressured to include additional names, even when those contributors’ actual work is limited.

Another tactic is self-citation. By citing one’s own earlier papers in new publications, researchers can increase citation counts and thereby raise the h-index. Review processes can also become a channel for gaming: reviewers may demand that authors cite the reviewer’s own work, inflating citations while dressing it up as “relevance.” The transcript also points to a broader literature-quality problem: pressure to publish quickly can encourage “drip feeding” small, sometimes hard-to-reproduce findings, producing low-quality work that then accumulates citations.

Beyond citations, money plays a major role. Universities often reward academics who bring in large grants (described in the transcript as millions of dollars and “Category A” grants in Australia) because grant income can fund positions, labs, and institutional priorities. That creates a compounding advantage for senior researchers with established networks and proven track records. Early-career researchers, including new PhD graduates and postdocs, may be seen as higher risk: they lack a history of securing major funding, even if their science is strong. In practice, the transcript argues, hiring can favor the oldest candidates with the best grant records, leaving younger researchers with fewer opportunities.

Taken together, the core “dirty secret” is that academic success is frequently driven by metric optimization—especially the h-index—plus the ability to attract money, rather than a direct measure of scientific impact. Reducing a career to a single number makes it easier to compare people, but it also invites gaming and can distort incentives away from the best science.

Cornell Notes

The transcript argues that academic advancement often depends more on measurable indicators than on scientific merit, with the h-index at the center. The h-index is the largest h such that h of a researcher’s papers have at least h citations, and it is tracked across citation databases like Google Scholar, Scopus, and Web of Science. Because careers are tied to a single number, researchers may game the system through tactics such as adding low-contribution authorship, increasing self-citations, or using peer review to require citations to one’s own work. Funding incentives reinforce the same pattern: universities may prioritize candidates who reliably bring in large grants, which disadvantages early-career researchers. The result is a system where incentives can reward strategy and grant success more than reproducible, high-quality science.

What exactly is the h-index, and why does it become so influential in academia?

The h-index is the largest number h such that h of a researcher’s papers have each received at least h citations. For example, an h-index of 12 means 12 papers have 12 or more citations each. Because hiring and promotion decisions often rely on citation metrics, the h-index becomes a proxy for “relevance” and career success, even though it was not originally meant to judge careers. Once that proxy is used, researchers have strong incentives to increase the number rather than only improve the science.

How can authorship be used to raise citation metrics without doing much direct work?

One described tactic is requesting or accepting placement on author lists after only minimal contribution—such as reviewing, providing small feedback, or contributing to a single figure. Prestigious journals (e.g., Nature) often have very large author lists, and papers in high-impact venues tend to attract more citations. The transcript includes an example where a PhD student wanted to add a supervisor to the author list to boost the supervisor’s h-index, even though the supervisor did not run experiments; the supervisor instead suggested being listed in acknowledgements.

What does “self-citation” do to the h-index, and why is it controversial?

Self-citation increases the citation counts of a researcher’s earlier papers by citing them in new work. Since the h-index depends on how many papers reach citation thresholds, boosting citations to one’s own prior papers can raise the h-index. The transcript notes that self-citation can be legitimate when it is genuinely relevant, but it becomes gaming when citations are tenuous or excessive.
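
To see the threshold effect concretely, here is a hypothetical illustration (the numbers are invented, not from the transcript) of how a couple of self-citations can lift a borderline paper over the cutoff:

```python
def h_index(citations):
    # Largest h such that h papers each have at least h citations.
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank),
               default=0)

papers = [12, 9, 6, 5, 3]   # invented per-paper citation counts
print(h_index(papers))       # -> 4: four papers have >= 4 citations

papers[4] += 2               # two self-citations lift the fifth paper from 3 to 5
print(h_index(papers))       # -> 5: now five papers have >= 5 citations
```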

How can peer review be turned into a citation-gathering mechanism?

During peer review, reviewers can request that authors cite specific papers. The transcript describes cases where reviewers provide long lists of their own publications and ask authors to include them, which can raise the reviewer’s citations and therefore their h-index. Even when framed as improving the manuscript, this can shift peer review from quality control toward metric advancement.

Why does grant money matter alongside the h-index?

Universities often favor academics who bring in large grants because grant income funds positions, labs, and institutional goals. The transcript emphasizes that senior researchers may have an advantage because they have established networks and a track record of securing major funding (including “Category A” grants in Australia). Early-career researchers may be viewed as higher risk, even if their research is promising, because they have not yet demonstrated the ability to attract millions in funding.

What publication behavior can emerge when speed and metrics pressure researchers?

The transcript links metric pressure to faster, smaller publications—described as “drip feeding” tiny papers rather than waiting for a full story. It also suggests this can contribute to low-quality or hard-to-reproduce findings entering the literature, which then accumulate citations and distort what the system rewards.

Review Questions

  1. How does the h-index calculation create incentives to prioritize citation counts over scientific quality?
  2. Which specific behaviors mentioned in the transcript can artificially increase citation metrics (authorship, self-citation, reviewer citation demands), and what ethical concerns do they raise?
  3. How do grant funding incentives compound the advantages of senior researchers compared with early-career researchers?

Key Points

  1. The h-index reduces a researcher’s career to one number: the largest h such that h of their papers each have at least h citations.
  2. Citation metrics are tracked across databases such as Google Scholar, Scopus, and Web of Science, and platform differences can affect perceived impact.
  3. Authorship can be used strategically: adding names with limited contribution can still boost the h-index through higher-citation papers and journal prestige.
  4. Self-citation can raise citation counts and therefore the h-index, even when some citations are only loosely relevant.
  5. Peer review can be exploited when reviewers demand citations to their own work, turning quality assessment into metric advancement.
  6. Pressure to publish quickly can encourage incremental, hard-to-reproduce findings that still gain citations.
  7. Large grant success can outweigh scientific merit in hiring decisions, disadvantaging early-career researchers who lack a proven funding track record.

Highlights

The h-index turns academic success into a single citation threshold number, making it easy to compare researchers—and easy to game.
Prestige journals like Nature can amplify metric gains because high-impact venues tend to generate more citations, encouraging strategic authorship.
Grant money and citation metrics reinforce each other: universities may hire candidates who reliably bring in major funding, not necessarily those with the strongest science.
Peer review can become a citation pipeline when reviewers request references to their own publications.
