
Associate Professor’s Best Kept Secrets to Publishing Papers

Academic English Now · 6 min read

Based on Academic English Now's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Citations can rise quickly when a paper is published early on a widely searched topic and becomes a default reference for later work.

Briefing

Getting a paper heavily cited isn’t just about picking a “good journal” or writing solid research; it also depends on timing, topic selection, and packaging the work so other scholars can quickly build on it. Diane Kendra’s most-cited example is a 2023 paper, “The Role of ChatGPT in Higher Education: Benefits, Challenges, and Future Research Directions,” which reached roughly 270 citations within a year and landed in the top 1% of most-cited papers. The citation surge is treated as the outcome of several coordinated choices: a fast-moving, high-interest topic; a theoretical anchor that matched the existing literature; and a publication strategy designed to get the paper in front of researchers early.

Kendra frames publishing as a chain of decisions rather than a single trick. The basics start with identifying a novel gap and shaping clear research questions, then aligning the manuscript with the journal’s audience. She emphasizes manuscript construction beyond raw facts—turning findings into a coherent “story,” keeping citations current, and being ready for the realities of peer review, proofing, and post-acceptance editing. After publication, promotion still matters: sharing through networks and social media helps ensure the work is seen and cited.

For the citation breakthrough, the key driver was speed to market. The team began writing and researching as ChatGPT launched, aiming to publish quickly while the topic was still “hot.” They also chose a conceptual paper format and targeted an open-access journal to reach readers early. Even though the journal was initially ranked Q2, Kendra says the editor’s strategy for lifting it to Q1 was known through professional networks, and by submission time it had already reached Q1. That combination of early publication and a widely searched topic meant other researchers had a go-to reference to cite.

The discussion widens from publishing mechanics to academic incentives and integrity. Kendra notes that “publish or perish” pressures can push quantity over quality, encouraging practices like slicing one study into multiple papers and potentially undermining teaching responsibilities. She links the metric-driven system to risks such as mental strain and, in extreme cases, ethical misconduct. On AI integrity, she argues that deliberate cheating has long existed, but generative AI changes the *type* of breach: plagiarism and contract cheating remain issues, while newer patterns include falsification, fabrication, hallucination, and misattribution. Institutions, she says, have responded by ramping up resources for detection and—crucially—education.

To balance generative AI’s benefits with responsible use, Kendra describes an eight-strategy framework across students, educators, and institutions. It centers on empowering integrity through collaborative learning, building critical thinking and AI literacy, using intelligent tutoring and personalized learning to reduce incentives for misconduct, and shifting toward authentic, project-based assessments. Educators need continuous upskilling, while institutions must provide infrastructure, policies, and a culture of academic integrity.

Underlying the whole conversation is a practical mindset: collaboration accelerates output and expands research into multiple follow-on papers, but it requires commitment, clear division of responsibilities, and transparency. Rejection, she adds, is not failure—it often reflects misalignment with a journal’s audience or an iterative review process. Motivation, finally, is less about metrics and more about mentoring and helping others develop their careers and capabilities.

Cornell Notes

Diane Kendra credits her paper’s rapid citation growth to coordinated publishing decisions: choosing a timely, high-interest topic (ChatGPT in higher education), anchoring it in a dominant theory (constructivism), writing a conceptual piece, and targeting an open-access journal to reach researchers early. Speed to market mattered because early publication made the work a default reference for others, especially once the journal’s ranking improved to Q1 by submission time. Beyond citations, she stresses that publishing success depends on aligning topic and manuscript with the journal’s audience, crafting a coherent narrative, and preparing for iterative peer review and revisions. She also argues that generative AI doesn’t necessarily increase cheating rates overall, but it changes misconduct patterns toward hallucination, misattribution, and fabrication, so education and integrity-focused assessment design are essential.

Why did Kendra’s ChatGPT higher-education paper attract unusually fast citation growth?

The team started writing and researching at the early stage of ChatGPT’s launch, aiming for speed to market while the topic was “hot.” They chose a conceptual paper format and paired the trend with a theoretical framework, constructivism, that already had traction in the higher-education literature. They also chose an open-access journal to publish early; through professional networks they knew of the editor’s plan to lift the journal from Q2 to Q1, and by submission time it had become Q1. That early visibility helped other researchers cite it as a go-to reference.

What does Kendra say matters most in the mechanics of getting published in reputable journals?

She emphasizes a chain of fundamentals: identify an interesting, novel gap and formulate research questions; choose the right journal and align the topic with that journal’s audience; write the manuscript as more than facts by tying findings into an engaging narrative; keep citations up to date; and respond effectively to peer review feedback. After acceptance, proofing and editing still matter, and after publication, promotion through networks and social media helps ensure the work is discovered and cited.

How does collaboration change research output, and what keeps collaborations from falling apart?

Kendra describes collaboration as the biggest acceleration lever compared with starting as a solo researcher. Her first paper spawned multiple follow-on papers from the same group, suggesting momentum built through teamwork. She highlights commitment as the core requirement: collaborators must stay engaged over time, and teams need a mix of skills (research strengths, writing ability, structure, and so on). She also stresses transparency—clear responsibilities, deadlines, and honest communication when timelines slip.

How does “publish or perish” affect research quality and ethics?

Kendra frames the pressure as a quantity-focused metric that can compromise quality. It may encourage fragmenting one strong study into multiple papers, and it can also lead to neglect of teaching responsibilities. She links the system to mental health strain and the risk of ethical misconduct, including fabricating or manipulating data in extreme cases. She also points to broader concerns like replicability crises in academia.

What changes in academic integrity when generative AI enters the picture?

Kendra says the overall number of deliberate breaches may not rise dramatically, but the *type* of breach shifts. Before generative AI, plagiarism and contract cheating were prominent. With generative AI, institutions report more issues involving falsification and fabrication, including hallucination and misattribution. She notes that institutions have increased detection and monitoring resources and also rely on education rather than detection alone.

What does Kendra’s ethical AI framework try to achieve, and how is it structured?

Her framework balances generative AI’s learning benefits with ethical use through eight strategies grouped into three areas: students, educators, and institutions. For students, it includes empowering integrity via collaborative learning, strengthening critical thinking to validate AI outputs, and building AI literacy. For educators, it involves equipping staff and using intelligent tutoring and personalized learning approaches that reduce incentives for misconduct. For institutions, it requires infrastructure (including tools and policies), decisions about what is allowed, and fostering a culture of academic integrity.

Review Questions

  1. Which specific publishing choices (topic timing, paper type, journal targeting) most directly explain the citation spike in Kendra’s example?
  2. How does Kendra connect integrity risks to the incentives created by “publish or perish”?
  3. In Kendra’s framework, what roles do students, educators, and institutions each play in reducing unethical AI use?

Key Points

  1. Citations can rise quickly when a paper is published early on a widely searched topic and becomes a default reference for later work.

  2. Journal selection is not just about prestige; aligning the manuscript’s topic with the journal’s audience can improve acceptance and downstream visibility.

  3. Manuscripts need coherence beyond facts: turning research into an engaging narrative helps readers and reviewers understand the contribution.

  4. Collaboration accelerates research output, but it depends on commitment, complementary skills, and transparency about responsibilities and deadlines.

  5. “Publish or perish” incentives can shift behavior toward quantity, fragmenting research and increasing ethical and mental-health risks.

  6. Generative AI may not massively increase cheating rates overall, but it changes misconduct patterns toward hallucination, misattribution, falsification, and fabrication.

  7. Ethical AI use is best handled through an ecosystem approach: student integrity skills, educator training and assessment design, and institutional policies and support.

Highlights

Kendra’s citation surge is linked to speed to market: writing and submitting early while ChatGPT was still new, then pairing the trend with a theory (constructivism) that fit existing higher-education research.
Targeting an open-access journal helped early reach, and insider knowledge of the editor’s plan mattered: by submission time the journal had moved to Q1.
Kendra argues generative AI changes the *kind* of integrity breaches more than the overall number: plagiarism shifts toward hallucination, misattribution, and fabrication.
Her eight-strategy ethical AI framework spans students, educators, and institutions, emphasizing critical thinking, AI literacy, collaborative learning, and authentic assessments.
Rejection is reframed as iterative process and misalignment, not personal failure—persistence and timely revision are treated as essential skills.

Topics

  • Journal Selection
  • Citation Strategy
  • AI Academic Integrity
  • Ethical AI Framework
  • Research Collaboration
