The Secret List of AI Tools Universities Don’t Want You Using
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Universities and journals are increasingly drawing a line around “done-for-you” AI writing tools—systems that can generate full literature reviews, paper drafts, and even systematic reviews with minimal human input. The core concern is not basic prompt assistance (which many academics tolerate), but automation that produces ready-to-submit academic text while leaving researchers less control over each sentence, paragraph, and citation.
The transcript contrasts two categories of AI use. On the acceptable side are tools used for prompt writing and augmentation—ChatGPT, Claude, Perplexity, and Gemini—where the user actively shapes the output. On the uncomfortable side are “auto writers,” which stream gray, AI-written text directly into the drafting workflow. Examples named include Jenny AI and Yomu. These tools can suggest or even insert content as someone writes, including references, which makes them feel like a gray-zone shortcut. The key issue raised is that academic integrity hinges on author control: if a system effectively writes for the user, it triggers fear among “journals and universities” that the work may be produced without sufficient human authorship.
Several “red box” tools are then described as capable of producing substantial academic deliverables quickly. Thesis AI is presented as a fast route to dense, fully referenced literature reviews—reported as producing a 44-page output in about 20 minutes, with links and export options to Word, LaTeX, and Overleaf. Gatsby is described as a broader research-and-writing assistant that can help discover research ideas, draft scientific papers, and run meta-analysis, with the transcript claiming it can generate a first paper draft from an idea document and help identify gaps.
The transcript also highlights agentic AI tools that can turn inputs like figures into structured academic writing. Manisim (spelled "Manis" in the transcript) is described as generating a formatted paper draft with sections such as experimental methods and results/discussion, with references included but not necessarily the figures themselves. GenSpark is described as using figures to produce a story structure and then a full paper draft that places the figures in the correct sections and extracts acronyms from the images (including those for "silver nanowires" and "carbon nanotubes").
Finally, the transcript points to tools like Elicit, which it says can automate multiple research tasks—report writing, paper search, deep review, literature review generation, and even PowerPoint creation. The fear is framed as an "unknown" risk: editors and institutions worry about misuse and cheating, while researchers on the ground are portrayed as excited by the time savings. The closing prediction is that some of these "not allowed" tools will gradually be permitted as academia adapts, moving toward a future where systematic reviews and other labor-intensive tasks can be generated through a few prompts and clicks.
Cornell Notes
The transcript draws a boundary between AI used as a writing assistant and AI used as a "done-for-you" auto writer. Tools that stream AI-written text or generate full literature reviews, paper drafts, and systematic reviews with minimal oversight are treated as a compliance and integrity risk. Named examples include Jenny AI and Yomu (in-line auto-writing), Thesis AI (dense, fully referenced literature reviews), Gatsby (paper drafting and meta-analysis), and agentic tools like Manisim and GenSpark that can structure papers from figures. Elicit is presented as an automation suite for research tasks, including literature reviews and even slide creation. The stakes are authorship control: institutions worry that automation reduces human responsibility for each sentence and citation.
- What makes an AI writing tool "acceptable" versus "off-limits" in the transcript's framing?
- How do Jenny AI and Yomu illustrate the "auto writer" concern?
- What output capabilities are attributed to Thesis AI?
- How are Gatsby, Manisim, and GenSpark positioned relative to "first drafts" and figures?
- Why does the transcript say universities and journal editors are cautious about these tools?
- What future outcome does the transcript predict for "red box" tools?
Review Questions
- Which specific behaviors (e.g., in-line gray text, automatic reference insertion, one-click systematic reviews) most threaten the “human control” standard described in the transcript?
- Compare the roles of Thesis AI, Gatsby, and agentic figure-based tools (Manisim, GenSpark) in the drafting workflow—what each one is claimed to automate.
- What arguments does the transcript offer for why institutional policy lags behind research tool capabilities, and how does it connect that to future adoption?
Key Points
1. The transcript distinguishes between prompt-based AI assistance and "done-for-you" auto writing that reduces sentence-level author control.
2. In-line auto-writing tools like Jenny AI and Yomu are treated as a gray-zone risk because they stream AI text and can insert references automatically.
3. Thesis AI is described as generating dense, fully referenced literature reviews quickly, with export options to Word, LaTeX, and Overleaf.
4. Gatsby is positioned as a broader research and drafting assistant that can produce first paper drafts and support tasks like meta-analysis.
5. Agentic tools such as Manisim and GenSpark are described as turning figure inputs into structured academic sections and, in some cases, extracting acronyms from images.
6. Elicit is presented as an automation suite for research tasks (search, deep review, literature reviews) and even PowerPoint generation.
7. Institutional caution is framed as a mix of integrity concerns and uncertainty, with a prediction that some tools will eventually be permitted as norms evolve.