
The Secret List of AI Tools Universities Don’t Want You Using

Andy Stapleton
5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

The transcript distinguishes between prompt-based AI assistance and “done-for-you” auto writing that reduces sentence-level author control.

Briefing

Universities and journals are increasingly drawing a line around “done-for-you” AI writing tools—systems that can generate full literature reviews, paper drafts, and even systematic reviews with minimal human input. The core concern is not basic prompt assistance (which many academics tolerate), but automation that produces ready-to-submit academic text while leaving researchers less control over each sentence, paragraph, and citation.

The transcript contrasts two categories of AI use. On the acceptable side are tools used for prompt writing and augmentation—ChatGPT, Claude, Perplexity, and Gemini—where the user actively shapes the output. On the uncomfortable side are “auto writers,” which stream gray, AI-written text directly into the drafting workflow. Examples named include Jenny AI and Yomu. These tools can suggest or even insert content as someone writes, including references, which makes them feel like a gray-zone shortcut. The key issue raised is that academic integrity hinges on author control: if a system effectively writes for the user, journals and universities fear the work may be produced without sufficient human authorship.

Several “red box” tools are then described as capable of producing substantial academic deliverables quickly. Thesis AI is presented as a fast route to dense, fully referenced literature reviews—reported as producing a 44-page output in about 20 minutes, with links and export options to Word, LaTeX, and Overleaf. Gatsby is described as a broader research-and-writing assistant that can help discover research ideas, draft scientific papers, and run meta-analysis, with the transcript claiming it can generate a first paper draft from an idea document and help identify gaps.

The transcript also highlights agentic AI tools that can turn inputs like figures into structured academic writing. Manisim (spelled as “Manis” in the transcript) is described as generating a formatted paper draft that includes sections such as experimental methods and results/discussion, with references included but not necessarily the figures. GenSpark is described as using figures to produce a story structure and then a full paper draft that places figures in the correct sections and extracts acronyms from the images (including “silver nanowires” and “carbon nanotubes”).

Finally, the transcript points to tools like Elicit, which it says can automate multiple research tasks—report writing, paper search, deep review, literature review generation, and even PowerPoint creation. The fear is framed as an “unknown” risk: editors and institutions worry about misuse and cheating, while researchers on the ground are portrayed as excited by the time savings. The closing prediction is that some of these “not allowed” tools will gradually be permitted as academia adapts, moving toward a future where systematic reviews and other labor-intensive tasks can be generated through a few prompts and clicks.

Cornell Notes

The transcript draws a boundary between AI used as a writing assistant and AI used as a “done-for-you” auto writer. Tools that stream AI-written text or generate full literature reviews, paper drafts, and systematic reviews with minimal oversight are treated as a compliance and integrity risk. Named examples include Jenny AI and Yomu (auto-writing in-line), Thesis AI (dense, fully referenced literature reviews), Gatsby (paper drafting and meta-analysis), and agentic tools like Manisim and GenSpark that can structure papers from figures. Elicit is presented as an automation suite for research tasks, including literature reviews and even slide creation. The stakes are authorship control: institutions worry that automation reduces human responsibility for each sentence and citation.

What makes an AI writing tool “acceptable” versus “off-limits” in the transcript’s framing?

Acceptable tools are used for prompt-based augmentation—helping shape ideas, wording, or structure while the user remains in control. Off-limits tools are “done-for-you” systems that generate substantial academic text with little manual sentence-by-sentence control. The transcript emphasizes that in-line auto writing (gray text inserted as someone drafts) and one-click outputs (full literature reviews, drafts, systematic reviews) create a gray zone because the user may not be authoring each part of the work.

How do Jenny AI and Yomu illustrate the “auto writer” concern?

Jenny AI and Yomu are described as streaming AI-written gray text during drafting, with prompts like “accept that” to continue. The transcript notes that these tools can also insert references automatically, even when a reference may not be needed. The concern is that the writing process becomes less about the researcher controlling each sentence and more about approving AI-generated content.

What output capabilities are attributed to Thesis AI?

Thesis AI is described as producing fully referenced, detailed literature reviews on many topics. The transcript claims it can draw on about 30 papers' worth of material (give or take five), producing a dense, linked document reported as 44 pages after roughly 20 minutes. It also claims export options to Word, LaTeX, and Overleaf, and that references can be found via Semantic Scholar rather than being manually provided.

How are Gatsby, Manisim, and GenSpark positioned relative to “first drafts” and figures?

Gatsby is presented as a tool that can turn research ideas into a first paper draft and help interrogate gaps in the resulting narrative, with added capabilities like meta-analysis. Manisim is described as generating a formatted academic paper draft from inputs such as figures, including experimental methods and results/discussion, with references included but not necessarily the figures themselves. GenSpark is described as pulling figures into the correct sections and extracting acronyms from the figures, then producing a story structure and a full paper draft.

Why does the transcript say universities and journal editors are cautious about these tools?

Caution is attributed to fear of cheating and uncertainty about how powerful automation has become. Editors and institutions are portrayed as worried that researchers could outsource too much of the writing and review process, reducing human authorship and accountability. The transcript contrasts this with researchers’ excitement about time savings for tasks that are typically slow and labor-intensive.

What future outcome does the transcript predict for “red box” tools?

It predicts a gradual shift from prohibition to acceptance. As academia adapts and learns how these tools can be used responsibly, some systems currently treated as off-limits are expected to become allowed, ultimately enabling more automated systematic reviews and other research workflows through prompts and clicks.

Review Questions

  1. Which specific behaviors (e.g., in-line gray text, automatic reference insertion, one-click systematic reviews) most threaten the “human control” standard described in the transcript?
  2. Compare the roles of Thesis AI, Gatsby, and agentic figure-based tools (Manisim, GenSpark) in the drafting workflow—what each one is claimed to automate.
  3. What arguments does the transcript offer for why institutional policy lags behind research tool capabilities, and how does it connect that to future adoption?

Key Points

  1. The transcript distinguishes between prompt-based AI assistance and “done-for-you” auto writing that reduces sentence-level author control.

  2. In-line auto-writing tools like Jenny AI and Yomu are treated as a gray-zone risk because they stream AI text and can insert references automatically.

  3. Thesis AI is described as generating dense, fully referenced literature reviews quickly, with export options to Word, LaTeX, and Overleaf.

  4. Gatsby is positioned as a broader research and drafting assistant that can produce first paper drafts and support tasks like meta-analysis.

  5. Agentic tools such as Manisim and GenSpark are described as turning figure inputs into structured academic sections and, in some cases, extracting acronyms from images.

  6. Elicit is presented as an automation suite for research tasks (search, deep review, literature reviews) and even PowerPoint generation.

  7. Institutional caution is framed as a mix of integrity concerns and uncertainty, with a prediction that some tools will eventually be permitted as norms evolve.

Highlights

The transcript’s central line is about authorship control: tools that write for you (not just help you write) trigger institutional pushback.
Jenny AI and Yomu are portrayed as especially sensitive because they insert gray, AI-written text during drafting and can add references on the fly.
Thesis AI is described as capable of producing a fully referenced, 44-page literature review in about 20 minutes, with exports to Word, LaTeX, and Overleaf.
GenSpark is described as using figures to place them into the correct paper sections and extract acronyms directly from the images.
The closing prediction: today’s “not allowed” tools will likely become gradually permitted as academia catches up to their real capabilities.