Associate Professor’s Best Kept Secrets to Publishing Papers
Based on Academic English Now's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Getting a paper heavily cited isn’t just about picking a “good journal” or writing solid research; it’s about timing, topic selection, and packaging the work so other scholars can quickly build on it. Diane Kendra’s most-cited example centers on a 2023 paper, “The Role of ChatGPT in Higher Education: Benefits, Challenges and Future Research Directions,” which reached roughly 270 citations within a year and landed in the top 1% of most-cited papers. The citation surge is treated as the outcome of several coordinated choices: a fast-moving, high-interest topic; a theoretical anchor that matched existing literature; and a publication strategy designed to get the paper in front of researchers early.
Kendra frames publishing as a chain of decisions rather than a single trick. The basics start with identifying a novel gap and shaping clear research questions, then aligning the manuscript with the journal’s audience. She emphasizes manuscript construction beyond raw facts—turning findings into a coherent “story,” keeping citations current, and being ready for the realities of peer review, proofing, and post-acceptance editing. After publication, promotion still matters: sharing through networks and social media helps ensure the work is seen and cited.
For the citation breakthrough, the key driver was speed to market. The team began researching and writing as ChatGPT launched, aiming to publish quickly while the topic was still “hot.” They also chose a conceptual paper format and targeted an open-access journal to reach readers early. Although the journal was initially ranked Q2, Kendra knew through professional networks that the editor had a strategy for lifting it to Q1, and by submission time it had already become Q1. That combination of early publication on a widely searched topic meant other researchers had a go-to reference point to cite.
The discussion widens from publishing mechanics to academic incentives and integrity. Kendra notes that “publish or perish” pressures can push quantity over quality, encouraging practices like slicing one study into multiple papers and potentially undermining teaching responsibilities. She links the metric-driven system to risks such as mental strain and, in extreme cases, ethical misconduct. On AI integrity, she argues that deliberate cheating has long existed, but generative AI changes the *type* of breach: plagiarism and contract cheating remain issues, while newer patterns include falsification, fabrication, hallucination, and misattribution. Institutions, she says, have responded by ramping up resources for detection and—crucially—education.
To balance generative AI’s benefits with responsible use, Kendra describes an eight-strategy framework across students, educators, and institutions. It centers on empowering integrity through collaborative learning, building critical thinking and AI literacy, using intelligent tutoring and personalized learning to reduce incentives for misconduct, and shifting toward authentic, project-based assessments. Educators need continuous upskilling, while institutions must provide infrastructure, policies, and a culture of academic integrity.
Underlying the whole conversation is a practical mindset: collaboration accelerates output and expands research into multiple follow-on papers, but it requires commitment, clear division of responsibilities, and transparency. Rejection, she adds, is not failure—it often reflects misalignment with a journal’s audience or an iterative review process. Motivation, finally, is less about metrics and more about mentoring and helping others develop their careers and capabilities.
Cornell Notes
Diane Kendra credits her paper’s rapid citation growth to coordinated publishing decisions: choosing a timely, high-interest topic (ChatGPT in higher education), anchoring it in a dominant theory (constructivism), writing a conceptual piece, and targeting an open-access journal to reach researchers early. Speed to market mattered because early publication made the work a default reference for others, especially once the journal’s ranking improved to Q1 by submission time. Beyond citations, she stresses that publishing success depends on aligning topic and manuscript with the journal’s audience, crafting a coherent narrative, and preparing for iterative peer review and revisions. She also argues that generative AI doesn’t necessarily increase cheating rates overall, but it changes misconduct patterns toward hallucination, misattribution, and fabrication, so education and integrity-focused assessment design are essential.
Why did Kendra’s ChatGPT higher-education paper attract unusually fast citation growth?
What does Kendra say matters most in the mechanics of getting published in reputable journals?
How does collaboration change research output, and what keeps collaborations from falling apart?
How does “publish or perish” affect research quality and ethics?
What changes in academic integrity when generative AI enters the picture?
What does Kendra’s ethical AI framework try to achieve, and how is it structured?
Review Questions
- Which specific publishing choices (topic timing, paper type, journal targeting) most directly explain the citation spike in Kendra’s example?
- How does Kendra connect integrity risks to the incentives created by “publish or perish”?
- In Kendra’s framework, what roles do students, educators, and institutions each play in reducing unethical AI use?
Key Points
1. Citations can rise quickly when a paper is published early on a widely searched topic and becomes a default reference for later work.
2. Journal selection is not just about prestige; aligning the manuscript’s topic and audience fit can improve acceptance and downstream visibility.
3. Manuscripts need coherence beyond facts—turning research into an engaging narrative helps readers and reviewers understand the contribution.
4. Collaboration accelerates research output, but it depends on commitment, complementary skills, and transparency about responsibilities and deadlines.
5. “Publish or perish” incentives can shift behavior toward quantity, fragmenting research and increasing ethical and mental-health risks.
6. Generative AI may not massively increase cheating rates overall, but it changes misconduct patterns toward hallucination, misattribution, falsification, and fabrication.
7. Ethical AI use is best handled through an ecosystem approach: student integrity skills, educator training and assessment design, and institutional policies and support.