
What they don't tell you about academic publishing | 5 SECRETS

Andy Stapleton · 6 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Predatory journals exploit publish-or-perish pressure by charging fees while delivering weak peer review, and they can damage careers when journal reputations are scrutinized.

Briefing

Academic publishing runs on incentives that reward speed, prestige, and money—so researchers who understand the system can avoid predatory traps and navigate the politics behind “peer review.” The core warning is blunt: because journals are built like businesses, clever people will try to game the process, and the result can distort careers, citation records, and what ultimately gets treated as credible scientific knowledge.

A first priority is recognizing that not all journals are equal, largely because journal quality is tied to measurable prestige signals such as impact factor, and those signals vary widely by field. In materials science and nanotechnology, an impact factor above roughly three can be seen as a solid target, while in physics impact factors can be closer to one or lower and still be respectable. Pressure to publish nevertheless pushes many researchers toward the path of least resistance, which is where predatory journals appear. These outlets prioritize fees over rigorous peer review: they take money to publish, provide weak or superficial review, and then move on. Publishing in such journals can damage a career later, when hiring committees scan citation lists and journal reputations and often expect top-quartile or top-tier venues.
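
The video treats impact factor as a familiar benchmark. For context (the calculation is general knowledge, not from the video), the standard two-year impact factor is the number of citations a journal receives in a year to items it published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch with invented numbers, purely to illustrate the field differences above:

```python
def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations in year Y to items published in
    years Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical numbers only -- not real journal data.
print(round(impact_factor(1500, 480), 1))  # 3.1: a solid target in materials science
print(round(impact_factor(350, 400), 1))   # 0.9: unremarkable but acceptable in some physics subfields
```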

A practical safeguard is using tools designed to identify legitimate journals, including Think. Check. Submit. (thinkchecksubmit.org), which guides researchers on how to vet publication venues and avoid reputational harm.

The second major factor is that reputation, especially a “big name,” can function like academic currency. Influential researchers build celebrity status through early, widely cited work, making other scientists eager to collaborate. Editors also recognize those names and may fast-track submissions because the perceived audience demand is high. This can translate into high publication volume and access to top journals such as Nature, Science, and PNAS, with output so high it would be hard to sustain without a strong editorial and collaborative network.

Third comes author order, governed by an unspoken hierarchy. First author typically signals the person who did most of the work and wrote the bulk of the paper. Last author often corresponds to the supervisor or funding source, sometimes with limited direct contribution beyond providing grants and lab access; the last author position frequently aligns with the corresponding author, who coordinates author responsibilities and communication. The middle positions are where disputes and ego-driven bargaining happen, and the least forceful contributors can end up in the center—an outcome that can quietly affect how others interpret contribution.

Fourth, citation metrics such as the h-index can be inflated through self-citation. Referencing prior work is normal when there’s a reasonable connection, but citation “hacking” becomes problematic when references are forced into a manuscript to satisfy reviewers or to meet metric expectations. A common tactic described is adding extra citations to papers from the author’s research group—sometimes dozens—after peer review flags missing references.

Fifth, the process is political, not purely objective. Editors act as gatekeepers at the initial decision stage, and relationships—sometimes via supervisors with institutional ties—can influence whether a manuscript is sent to peer review. Even when peer review later catches quality issues, the first interaction can be a lottery, especially for early-career researchers who lack time to resubmit widely. The result is a system where publishing success can depend as much on networks and strategy as on scientific merit.

Cornell Notes

Academic publishing is shaped by incentives that can be gamed: journal prestige, money, and relationships often matter as much as raw research quality. Predatory journals exploit publish-or-perish pressure by charging fees while offering poor peer review, and they can harm careers when hiring committees scrutinize journal reputation. Big-name researchers gain an advantage through celebrity status and collaboration networks, which can make editors more willing to send papers forward. Author order follows an informal hierarchy (first and last positions carry the most weight), while citation metrics like the h-index can be inflated through self-citation and citation padding. Finally, early editorial decisions can be political—relationships may help manuscripts reach peer review, creating a high-stakes, sometimes lottery-like process.

How can researchers tell whether a journal is likely legitimate, and why does impact factor vary by field?

Impact factor is used as a prestige proxy, but what counts as “good” depends on the discipline. In materials science and nanotechnology, an impact factor above about 3 is described as a happy threshold, while in physics impact factors can be around 1 or lower and still be meaningful. Because pressure to publish can push people toward easy acceptance, predatory journals can “jump out of the woodwork” and charge publication fees while doing poor peer review. A concrete vetting resource mentioned is Think. Check. Submit. (thinkchecksubmit.org), which helps researchers steer clear of journals that could harm their reputation later.

Why do predatory journals pose a career risk beyond publishing a low-quality paper?

The risk extends into hiring and evaluation. When applying for jobs or postdocs, committees often look at citation lists and the reputations of the journals where work appeared. Predatory venues can therefore reduce credibility even if the research itself was not fraudulent, because the publication outlet signals low editorial standards. The described business model: scientists do the work and writing; peer review is performed for free (though predatory journals provide poor review); editors may be unpaid and motivated by CV value; and readers are charged high access fees. Open-access variants can charge thousands in publication fees, creating a revenue stream for “sharks” to exploit.

What advantage does “celebrity status” provide in academic publishing?

Influential researchers who publish early, highly cited work can become collaboration magnets. Other scientists want to work with them, and editors may recognize their names and treat submissions as likely to attract readers. The result is a feedback loop: reputation increases collaboration and editorial willingness, which can help papers reach high-prestige journals such as Nature, Science, and PNAS. The transcript also notes that extremely high publication volume can be explained by editorial triage and strong networks rather than one-person oversight of every detail.

How does author order function as an informal signal of contribution?

First author is treated as the person who did most of the work and likely wrote the majority of the paper. Last author is typically the supervisor or funding source, sometimes contributing mainly through grants and lab access; it often correlates with the corresponding author role, which involves coordinating author contributions and communication. Middle author positions are where bargaining and ego conflicts show up most, and being placed in the center can imply a smaller or less forcefully claimed contribution.

What are the main ways citation metrics can be manipulated, and why can it be hard to resist?

Citation metrics such as the h-index can be inflated through self-citation—referencing one’s own earlier papers. Self-citation isn’t automatically wrong when the cited work is genuinely related, but it becomes questionable when references are stretched. A more serious tactic described is citation padding driven by peer review: reviewers may demand specific citations, and authors may add many extra references (up to around 20 in one example) to papers from their own group to avoid rejection. The moral risk is framed as low relative to the reward of getting the paper accepted quickly.
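
As a concrete illustration (the definition is general knowledge; the citation counts below are invented), the h-index is the largest h such that an author has h papers each cited at least h times, which is why a few well-placed self-citations on borderline papers can nudge the metric upward:

```python
def h_index(citation_counts):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

papers = [25, 14, 9, 7, 7, 6, 3, 2]   # hypothetical citation counts -> h = 6
padded = [25, 14, 9, 7, 7, 7, 7, 2]   # a few self-citations on the weakest papers -> h = 7
print(h_index(papers), h_index(padded))
```

The point is not the arithmetic but the incentive: each borderline paper pushed over its own citation rank raises the headline number.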

What does “political” gatekeeping look like at the journal stage?

Editors serve as gatekeepers when manuscripts are first scanned and either rejected or sent to peer review. Relationships can influence this early decision: a supervisor with ties to an editor can push for reconsideration via a carefully worded email, and the manuscript may then be sent to peer review. The transcript describes an example where an editor, after reconsideration, forwarded the paper so other reviewers could assess it—suggesting that some papers may never reach peer review if the editor’s initial judgment is unfavorable or simply inconsistent. Early-career researchers face higher pressure to publish quickly, reducing their ability to resubmit widely and absorb rejections.

Review Questions

  1. Which parts of author order (first, middle, last) carry the strongest informal signals of contribution, and how do those signals affect career interpretation?
  2. What practical steps can researchers take to reduce the risk of publishing in predatory journals, and why might impact factor targets differ across disciplines?
  3. How can citation practices inflate metrics like the h-index, and what peer-review dynamics make citation padding more likely?

Key Points

  1. Predatory journals exploit publish-or-perish pressure by charging fees while delivering weak peer review, and they can damage careers when journal reputations are scrutinized.
  2. Impact factor is field-specific; what counts as a strong target in one discipline may be low in another, so comparisons must be contextual.
  3. Reputation and “celebrity status” can create an editorial and collaboration advantage, helping papers reach top-tier journals.
  4. Author order functions as an informal contribution ranking: first author signals major work, last author often signals supervision/funding, and middle positions are where disputes often occur.
  5. Citation metrics such as the h-index can be inflated through self-citation and through peer-review-driven citation padding.
  6. Early editorial decisions can be influenced by relationships, meaning access to peer review can depend on networks as well as research quality.
  7. Vetting tools like Think. Check. Submit. (thinkchecksubmit.org) can help researchers identify legitimate journals and avoid reputational harm.

Highlights

Predatory publishing is described as a fee-for-publication model that can bypass meaningful peer review, turning journal choice into a career-risk decision.
Impact factor targets differ sharply by field; materials science thresholds around 3 are contrasted with physics where impact factors around 1 can still be acceptable.
Author order carries an unspoken hierarchy: first and last positions are treated as the most consequential, while middle positions can reflect bargaining and perceived contribution.
Citation “hacking” is framed as most dangerous when peer review demands specific references, prompting authors to add many extra citations to get accepted.
Gatekeeping is portrayed as political at the initial editor scan stage, where relationships can determine whether a paper reaches peer review at all.
