Litmaps Future Ready Scholar Conference - Day 1
Based on Litmaps' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Briefing
AI is accelerating research output faster than academic integrity systems can keep up—so the central challenge is not whether researchers should use AI tools, but how to use them responsibly while preserving trust, originality, and verifiable scholarship. Across the conference’s opening framing, most attendees agreed with a bold forecast: by 2035, AI will write most research papers. That expectation matters because it implies a shift in daily research practice—literature review, writing, analysis, and even peer-review workflows—alongside rising risks like paper mills, compromised peer review, and retractions.
The opening session laid out a connected set of pressures reshaping academia. Generative AI and “AI-assisted” workflows are expected to become widespread within a few years, helping with tasks such as documentation, finding collaborators, and optimizing experimental design. But the integrity picture is worsening too: fraudulent or unreliable papers are increasingly entering the publication pipeline, and retractions have climbed dramatically, reaching roughly 14,000 in the trend cited. And while retractions show that some failures are being caught, they represent only a fraction of the broader problem, because the volume of literature keeps expanding. With millions of papers and publication rates that continue to rise, researchers face a practical information-overload problem: how to conduct literature reviews and verify what’s already been done when the “needle” (relevant, trustworthy work) is buried in an ever-growing “haystack.”
The conference also broadened the horizon beyond today’s tools. Forecasts for artificial general intelligence (AGI) were discussed as a longer-term uncertainty, with some estimates suggesting a non-trivial chance of emergence by 2040. More immediate, however, is the way multi-step AI systems are changing what “research” can mean—potentially automating parts of literature review, synthesis, and even drafting. That automation raises a core identity question: who is a researcher in a post-AI world, when many tasks can be performed faster by systems trained on existing knowledge?
The first day’s practical thread came through in two AI-focused presentations from thesis for you and a broader integrity-focused talk from Professor Leonard Naki. thesis for you emphasized workflow acceleration for students and early-career researchers: using Litmaps to speed literature review via citation connections (and to build bibliographies), using GPT Excel for generating Excel formulas for quantitative analysis, and using tools like Kickresume to produce job-ready resumes and cover letters. They also addressed career anxiety directly—encouraging earlier job searching, targeted keyword alignment for applicant tracking systems, and using AI to support language and presentation practice.
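The “targeted keyword alignment” idea can be sketched in a few lines of Python. This is a toy illustration only: the function names and stopword list are my own, and real applicant tracking systems use far richer parsing, synonym matching, and weighting.

```python
import re

# Minimal stopword list, purely illustrative.
STOPWORDS = {"a", "an", "and", "the", "with", "in", "of", "for"}

def keywords(text: str) -> set[str]:
    # Lowercase and split on non-alphanumeric characters, dropping stopwords.
    return {t for t in re.findall(r"[a-z0-9+#]+", text.lower())
            if t not in STOPWORDS}

def alignment(resume: str, posting: str) -> tuple[set[str], set[str]]:
    """Return (matched, missing) posting keywords for a given resume."""
    want, have = keywords(posting), keywords(resume)
    return want & have, want - have

posting = "Seeking a data analyst with Python, SQL, and Excel experience"
resume = "Graduate researcher skilled in Python and Excel-based analysis"
matched, missing = alignment(resume, posting)
print(sorted(matched))   # → ['excel', 'python']
print(sorted(missing))   # posting terms the resume never mentions
```

The point of the exercise is the `missing` set: it tells an applicant which terms from the posting their materials never mention, which is exactly the gap AI resume tools try to close.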
Professor Naki’s contribution reframed the same landscape around academic integrity and verification. He highlighted “hallucinations” (fabricated citations and results) as a reason many academics initially rejected generative AI, and he argued that modern “deep research” and retrieval-augmented approaches are improving reliability but still require human verification. He also stressed that privacy and disclosure are part of responsible use: uploading sensitive data to cloud-based models can create privacy risks, and institutions need clearer norms for acknowledging AI assistance. Finally, he connected plagiarism and authorship debates to a deeper issue—academic systems still reward quantity, which can incentivize low-quality or deceptive outputs.
By the end of the day, the conference pivoted to an evaluation mindset: with too many tools to master, researchers need a repeatable checklist: does the tool use AI, where does its data come from, what biases follow from that data, and how well does it keep results current? The day’s message was clear: AI can boost productivity and accessibility, but the burden of verification, transparency, and intellectual ownership remains with the researcher.
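To make the checklist concrete, here is a minimal sketch of it as a structured record. The class and field names are hypothetical, not an official rubric from the conference; they simply mirror the four questions above.

```python
from dataclasses import dataclass, field

@dataclass
class ToolEvaluation:
    """One pass through the four-question tool checklist (illustrative)."""
    name: str
    uses_ai: bool                          # does the tool use AI at all?
    data_sources: list[str] = field(default_factory=list)   # where does its data come from?
    known_biases: list[str] = field(default_factory=list)   # what biases follow from that data?
    updated_recently: bool = False         # does it keep results current?

    def concerns(self) -> list[str]:
        """Flag checklist items that need a closer look before trusting the tool."""
        issues = []
        if self.uses_ai and not self.data_sources:
            issues.append("AI tool with undocumented data sources")
        if self.known_biases:
            issues.append("documented biases: " + ", ".join(self.known_biases))
        if not self.updated_recently:
            issues.append("results may be stale")
        return issues

# A made-up literature-review tool run through the checklist.
review_tool = ToolEvaluation(
    name="ExampleLitSearch",
    uses_ai=True,
    data_sources=["open-access citation graph"],
    known_biases=["underrepresents non-English journals"],
    updated_recently=False,
)
print(review_tool.concerns())
```

Writing the answers down (rather than evaluating ad hoc) makes the assessment repeatable across the many tools the conference warned about.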
Cornell Notes
The conference opening argues that AI will increasingly automate parts of research—writing, synthesis, and even literature review—while academic integrity systems struggle to keep pace. Attendees discussed rising publication volume, paper-mill and fraud risks, and the practical difficulty of verifying what’s already been done. thesis for you offered concrete tools for students’ workflows (Litmaps for connected literature discovery, GPT Excel for formula generation, and Kickresume for job materials), while Professor Leonard Naki focused on responsible use: hallucination risk, the need for human verification, disclosure norms, and privacy considerations. The takeaway is to adopt an evaluation framework for any new research tool: how it works, what data it uses, what biases it inherits, and whether it stays current—so researchers can use AI without losing trust in their work.
- Why does “AI-written research” change the day-to-day work of researchers, beyond just drafting faster?
- What integrity risks were emphasized, and why are retractions an incomplete signal?
- How did thesis for you connect AI tools to practical student outcomes (thesis and jobs)?
- What does Professor Leonard Naki mean by “hallucinations,” and why does it matter for citations?
- What evaluation checklist did the conference recommend for deciding whether a new research tool is worth using?
- How did privacy concerns enter the integrity conversation?
Review Questions
- When AI increases research output, what specific integrity failures become more likely, and why does the growth of literature volume make them harder to detect?
- Explain the difference between using AI for language assistance versus using AI to generate research claims and citations—what verification steps remain necessary?
- Using the conference’s evaluation checklist, how would you assess a new literature-review tool before trusting its recommendations?
Key Points
1. Most researchers expect AI to write a large share of papers within the next decade, which raises integrity and verification challenges alongside productivity gains.
2. Fraud, paper mills, and compromised peer review contribute to unreliable literature, and retractions—while rising—capture only part of the problem.
3. The exploding volume of publications makes literature review and gap-finding harder, increasing the need for systematic, transparent evaluation of sources.
4. Student-focused AI workflows emphasized connected-paper discovery (Litmaps), quantitative support (GPT Excel), and job-material generation (Kickresume), but these still require human oversight.
5. Professor Leonard Naki emphasized that hallucinations (including fabricated citations) and black-box behavior mean researchers must verify claims and references before publication.
6. Responsible AI use includes disclosure norms, privacy awareness, and avoiding over-reliance on plagiarism/AI-detection tools that can produce false positives.
7. When facing many tools, researchers should use a repeatable evaluation framework: AI usage, data sourcing/coverage, inherited bias, and how well the tool stays current.