Litmaps Future Ready Scholar Conference - Day 1

Litmaps · 6 min read

Based on Litmaps' video on YouTube.

TL;DR

Most researchers expect AI to write a large share of papers within the next decade, which raises integrity and verification challenges alongside productivity gains.

Briefing

AI is accelerating research output faster than academic integrity systems can keep up, so the central challenge is not whether researchers should use AI tools, but how to use them responsibly while preserving trust, originality, and verifiable scholarship. In the conference’s opening framing, most attendees agreed with a bold forecast: by 2035, AI will write most research papers. That expectation matters because it implies a shift in daily research practice (literature review, writing, analysis, and even peer-review workflows) alongside rising risks such as paper mills, compromised peer review, and retractions.

The opening session laid out a connected set of pressures reshaping academia. Generative AI and “AI-assisted” workflows are expected to become widespread within a few years, helping with tasks such as documentation, finding collaborators, and optimizing experimental design. But the integrity side is worsening too: fraudulent or unreliable papers are increasingly entering the publication pipeline, and retractions have climbed dramatically (to roughly 14,000 in the cited trend). Retractions are a visible signal of failure, yet they represent only a fraction of the broader problem, because the volume of literature keeps expanding. With millions of papers and publication rates that continue to rise, researchers face a practical information-overload problem: how to conduct literature reviews and verify what has already been done when the “needle” of relevant, trustworthy work is buried in an ever-growing “haystack.”

The conference also broadened the horizon beyond today’s tools. Forecasts for artificial general intelligence (AGI) were discussed as a longer-term uncertainty, with some estimates suggesting a non-trivial chance of emergence by 2040. More immediate, however, is the way multi-step AI systems are changing what “research” can mean—potentially automating parts of literature review, synthesis, and even drafting. That automation raises a core identity question: who is a researcher in a post-AI world, when many tasks can be performed faster by systems trained on existing knowledge?

The first day’s practical thread came through in two AI-focused presentations from thesis for you and a broader integrity-focused talk from Professor Leonard Naki. thesis for you emphasized workflow acceleration for students and early-career researchers: using Litmaps to speed up literature review via citation connections (and to build bibliographies), using GPT Excel to generate Excel formulas for quantitative analysis, and using tools like Kickresume to produce job-ready resumes and cover letters. They also addressed career anxiety directly, encouraging earlier job searching, keyword alignment targeted at applicant tracking systems, and using AI for language and presentation practice.

Professor Naki’s contribution reframed the same landscape around academic integrity and verification. He highlighted “hallucinations” (fabricated citations and results) as a reason many academics initially rejected generative AI, and he argued that modern “deep research” and retrieval-augmented approaches are improving reliability but still require human verification. He also stressed that privacy and disclosure are part of responsible use: uploading sensitive data to cloud-based models can create privacy risks, and institutions need clearer norms for acknowledging AI assistance. Finally, he connected plagiarism and authorship debates to a deeper issue—academic systems still reward quantity, which can incentivize low-quality or deceptive outputs.

By the end of the day, the conference pivoted to an evaluation mindset: with too many tools to master, researchers need a repeatable checklist. Does the tool use AI? Where does its data come from? What biases follow from that data? How well does it keep results current? The day’s message was clear: AI can boost productivity and accessibility, but the burden of verification, transparency, and intellectual ownership remains with the researcher.

Cornell Notes

The conference opening argues that AI will increasingly automate parts of research—writing, synthesis, and even literature review—while academic integrity systems struggle to keep pace. Attendees discussed rising publication volume, paper-mill and fraud risks, and the practical difficulty of verifying what’s already been done. thesis for you offered concrete tools for students’ workflows (Litmaps for connected literature discovery, GPT Excel for formula generation, and Kickresume for job materials), while Professor Leonard Naki focused on responsible use: hallucination risk, the need for human verification, disclosure norms, and privacy considerations. The takeaway is to adopt an evaluation framework for any new research tool: how it works, what data it uses, what biases it inherits, and whether it stays current—so researchers can use AI without losing trust in their work.

Why does “AI-written research” change the day-to-day work of researchers, beyond just drafting faster?

It shifts multiple stages of the workflow at once: literature review (finding relevant papers and gaps), writing and synthesis (drafting sections), and analysis (helping with formulas or code). That automation increases productivity, but it also increases integrity risk—more output doesn’t automatically mean more ethical behavior. The conference linked this to a broader environment where publication volume keeps rising, making verification harder and increasing the chance that unreliable work spreads before it’s corrected.

What integrity risks were emphasized, and why are retractions an incomplete signal?

The discussion highlighted paper mills, fake data, and compromised peer review as drivers of unreliable publications. Retractions were used as a measurable indicator of failure, with a cited sharp increase (around 14,000 in the trend mentioned). But retractions represent only a fraction of problematic work, because the overall literature volume is enormous and keeps growing—so many issues may never be caught quickly enough to prevent downstream citation and use.

How did thesis for you connect AI tools to practical student outcomes (thesis and jobs)?

They framed AI as workflow support rather than replacement. Litmaps was positioned as a faster literature-review path that finds relevant papers through citation connections and helps build a bibliography. GPT Excel was presented as an AI helper for generating Excel formulas for quantitative analysis. For career preparation, Kickresume was described as generating resumes and cover letters quickly (including keyword-focused formatting), with guidance to align resumes to job descriptions and applicant tracking systems.

What does Professor Leonard Naki mean by “hallucinations,” and why do they matter for citations?

Hallucinations are fabricated outputs—made-up results and, critically, fabricated citations that don’t exist. In academic research, citations are foundational for trust and for building on prior work. Even if AI improves drafting speed, incorrect references can undermine the validity of the research, which is why human verification of claims and sources remains essential.
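
Part of that checking can be automated at the coarsest level: does the cited record exist at all? As a minimal sketch (our illustration, not a tool shown at the conference), a cited DOI can be looked up against the public Crossref REST API. A fabricated citation will usually fail the lookup, while a successful lookup still says nothing about whether the paper supports the claim it backs.

```python
# Minimal sketch: spot-checking whether a cited DOI resolves to a real
# record via the public Crossref REST API. Existence is a weak test;
# a human must still confirm the paper says what the citation claims.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1038/nature14539"))     # real paper: True
print(doi_exists("10.9999/not-a-real-doi"))  # fabricated: False (404)
```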

What evaluation checklist did the conference recommend for deciding whether a new research tool is worth using?

The recommended questions were: (1) Does the tool use AI, and if so, how? (2) How is the data sourced, and what coverage or bias follows from that data? (3) Does it keep results up to date over time? This approach is meant to replace “try every tool” with a repeatable method for assessing reliability and fit for a specific field.
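
To keep that method repeatable rather than ad hoc, it can help to record the answers in a fixed template. Below is a minimal illustrative sketch; the field names and the example entry are our assumptions, not material from the conference.

```python
# Minimal sketch: the conference's checklist questions captured as a
# reusable record, so every tool is evaluated against the same fields.
from dataclasses import dataclass

@dataclass
class ToolEvaluation:
    name: str
    uses_ai: str       # Does the tool use AI, and if so, how?
    data_sources: str  # How is its data sourced?
    known_biases: str  # What coverage gaps or biases follow from that data?
    freshness: str     # Does it keep results up to date over time?

# Hypothetical entry for an unnamed literature-review tool.
review = ToolEvaluation(
    name="Example literature-review tool",
    uses_ai="LLM-based ranking over a citation graph (vendor-reported)",
    data_sources="Open bibliographic metadata; coverage unverified",
    known_biases="Likely English-language and recent-publication skew",
    freshness="Index update cadence undocumented; spot-check new papers",
)
print(review)
```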

How did privacy concerns enter the integrity conversation?

Professor Naki warned that uploading sensitive data to cloud-based models can breach privacy, especially if the default behavior involves training on user data. He suggested a “zero trust” stance toward cloud tools and recommended running models locally using LM Studio to avoid sending data to external services when confidentiality is critical.
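
As one concrete illustration of the local-model option (our sketch; the exact setup was not detailed in the talk): LM Studio serves loaded models through an OpenAI-compatible API on localhost, so standard client code can query a model without any text leaving the machine.

```python
# Minimal sketch: querying a model running locally in LM Studio, which
# exposes an OpenAI-compatible server (default http://localhost:1234/v1).
# Model name and prompt are placeholders; no data is sent to the cloud.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # LM Studio routes this to the loaded model
    messages=[{
        "role": "user",
        "content": "Does this paragraph contain personally identifying "
                   "details? <paste confidential text here>",
    }],
)
print(response.choices[0].message.content)
```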

Review Questions

  1. When AI increases research output, what specific integrity failures become more likely, and why does the growth of literature volume make them harder to detect?
  2. Explain the difference between using AI for language assistance versus using AI to generate research claims and citations—what verification steps remain necessary?
  3. Using the conference’s evaluation checklist, how would you assess a new literature-review tool before trusting its recommendations?

Key Points

  1. Most researchers expect AI to write a large share of papers within the next decade, which raises integrity and verification challenges alongside productivity gains.
  2. Fraud, paper mills, and compromised peer review contribute to unreliable literature, and retractions, while rising, capture only part of the problem.
  3. The exploding volume of publications makes literature review and gap-finding harder, increasing the need for systematic, transparent evaluation of sources.
  4. Student-focused AI workflows emphasized connected-paper discovery (Litmaps), quantitative support (GPT Excel), and job-material generation (Kickresume), but these still require human oversight.
  5. Professor Leonard Naki emphasized that hallucinations (including fabricated citations) and black-box behavior mean researchers must verify claims and references before publication.
  6. Responsible AI use includes disclosure norms, privacy awareness, and avoiding over-reliance on plagiarism/AI-detection tools that can produce false positives.
  7. When facing many tools, researchers should use a repeatable evaluation framework: AI usage, data sourcing/coverage, inherited bias, and how well the tool stays current.

Highlights

A core tension emerged: AI can boost research productivity dramatically, but it doesn’t automatically improve ethics—so integrity systems and verification practices must evolve.
Retractions have risen sharply, yet they still represent only a slice of unreliable work in a literature ecosystem that grows by millions of papers.
Professor Leonard Naki tied academic trust to citations: hallucinated or fabricated references can break the chain of scholarly accountability.
The conference’s practical solution wasn’t “pick one tool,” but “evaluate tools consistently” using questions about AI method, data sourcing, bias, and freshness.
Privacy was treated as part of research responsibility: uploading sensitive data to cloud models can create risks, and local model tools like LM Studio were suggested as alternatives.
