
The Only AI Tools I’d Spend My Last $20 On (PhD Edition)

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Time AI subscriptions to the weeks when each tool is actually needed, and rely on free tiers when possible to stretch a tight PhD budget.

Briefing

For PhD students, the biggest AI money-saver isn't buying more subscriptions; it's timing them. Instead of paying for a full suite of tools all at once, subscriptions should be staggered to match the weeks when each tool delivers the most value, with an emphasis on free tiers when they're "good enough." That approach matters because $20 can cover groceries, household essentials, or a single month of software; the practical goal is to spend only when AI directly removes weeks of work.

The first paid pick is SciSpace (spelled "Sci-pace" in the transcript), positioned as a research assistant for the earliest, hardest phase of a PhD: figuring out what's already been done. SciSpace helps users locate relevant literature, then compress it into fast, readable "too long; didn't read" (TL;DR) summaries with clickable references that lead back to the actual papers. It also supports exploratory workflows: running a query like "the most efficient OPV devices," producing a structured set of key breakthroughs and references, and letting researchers save and organize papers in a library. A notable feature is the ability to upload papers and build quick comparison views (such as adding columns tied to what the researcher wants to extract from each study), turning the "bottomless pit" of literature into something navigable.

Once the literature is under control, the bottleneck shifts from discovery to writing. For drafting and editing, the transcript favors general large language models—specifically ChatGPT and Claude—because they adapt to a researcher’s tone and fit into iterative academic workflows. The recommended method uses a “canvas” workspace to avoid regenerating everything from scratch. The workflow is three steps: provide a scaffold (often by feeding an existing abstract as a model), ask the model to read and then generate new text in that style, and iterate by moving back and forth between the scaffold and the evolving draft. The key is continuous editing rather than one-shot generation, which helps keep outputs on track and less generic.

More aggressive “agent-like” tools are also mentioned, including GenSpark, which can take figures and generate a full paper draft with sections like abstract, introduction, materials and methods, captions, and references. But academic publishing rules limit how much “done-for-you” drafting can be used, because it removes decision-making that researchers must own.

For feedback—especially when supervisor comments can take weeks—the transcript highlights ThesisAI as a paid tool for near-instant critique. It produces structured feedback across elements such as title and abstract, introduction, thesis statement, and whether arguments pass the “so what” test. It also surfaces actionable next steps like suggested topics, opportunities for further work, and resources including journals, conferences, and grant matching.

Across all recommendations, the throughline is division of labor: AI should handle repeatable grunt work, while researchers keep responsibility for creativity, unexpected insights, and challenging assumptions—areas where predictive text generation can’t replace genuine scientific judgment. The practical advice ends where it began: identify the task that causes the most dread, pay for the tool that best accelerates it, and use AI to reduce stress while increasing output.

Cornell Notes

The transcript argues that PhD students get the most value from AI by paying only when a tool is actively needed, rather than subscribing to everything at once. SciSpace is recommended for early-stage research because it finds literature, summarizes papers into quick snapshots, and provides clickable references plus a library for saving and organizing PDFs. For writing, ChatGPT and Claude are favored, using a canvas-based workflow where researchers supply a scaffold (like an abstract), generate draft text, and iterate through back-and-forth edits to avoid generic output. ThesisAI is presented as a feedback accelerator that critiques sections like thesis statements and the “so what” importance test, and it can suggest next steps, venues, and even grant matches. The overall message: AI can automate much of the routine work, but researchers must still drive creativity and scientific breakthroughs.

Why does the transcript treat subscription timing as a “money-saving hack” for PhD students?

It frames $20 as a high-stakes budget for students, so paying for tools that aren’t used regularly wastes money. The recommended strategy is to stagger subscriptions so they align with the weeks when a tool is needed most (e.g., literature discovery early, drafting later, feedback during revision). When a tool isn’t used frequently, free versions or free tools can often deliver similar value, reducing spend while preserving productivity.

What specific problem does SciSpace target in the early PhD stage, and how does it help?

The transcript says the hardest early challenge isn’t running experiments—it’s identifying what research already exists. SciSpace helps by searching for relevant literature, generating fast summaries (including “Too long, didn’t read” style overviews), and providing clickable references to the underlying papers. It also supports exploratory queries (example given: “the most efficient OPV devices”), and it includes a library to save PDFs and an interface for organizing extracted information using custom columns.

How does the recommended writing workflow reduce generic AI output?

Instead of asking for a one-time rewrite, the workflow uses a scaffold. Researchers feed an example (such as an abstract from a paper) and instruct the model to read it and then generate new text in that style. Writing happens in a canvas workspace so changes can be made iteratively without regenerating everything. The transcript emphasizes back-and-forth editing—continually refining the draft—so the output stays aligned with the researcher’s tone and target journal style.

What’s the trade-off with figure-to-paper tools like GenSpark?

GenSpark is described as capable of turning figures into a full paper draft, including structured sections (abstract, introduction, materials and methods), captions, and references. The trade-off is academic publishing constraints: "done-for-you" drafting removes decision-making that researchers are expected to own, so most researchers cannot comfortably rely on it under current rules. The transcript calls it a glimpse of where AI is heading, but suggests sticking to tools that operate within publishing-appropriate boundaries.

What kinds of feedback does ThesisAI provide, and why is that useful during revision cycles?

ThesisAI is presented as a fast alternative to waiting weeks for supervisor comments. It generates feedback summaries across components like title and abstract, introduction, and thesis statement, including whether the thesis passes the “so what” test (i.e., whether the importance is clear). It also suggests topics and opportunities for further work, and it provides resources such as journals, conferences, and grant matching—turning revision into both improvement and planning for dissemination.

What division of labor does the transcript propose between AI and researchers?

AI is positioned as a tool for academic grunt work (summarizing, drafting assistance, and structured feedback), while researchers remain responsible for creativity and the intellectual leaps that drive breakthroughs. The transcript argues that because AI is fundamentally a predictive engine, unexpected insights and challenges to assumptions must still come from researchers if the science is to remain original.

Review Questions

  1. Which stage of a PhD does SciSpace focus on, and what features make it suitable for that stage?
  2. Describe the scaffold-and-canvas writing workflow using ChatGPT or Claude. Why does iteration matter?
  3. What does ThesisAI evaluate beyond surface-level grammar, and how does that change the revision process?

Key Points

  1. Time AI subscriptions to the weeks when each tool is actually needed, and rely on free tiers when possible to stretch a tight PhD budget.
  2. Use SciSpace early to turn the literature search into a structured workflow with summaries, clickable references, and a personal library.
  3. For academic writing, prefer ChatGPT or Claude with a canvas-based iterative process rather than one-shot generation.
  4. Reduce generic output by providing a scaffold (e.g., an abstract) and repeatedly editing the draft in response to the model’s output.
  5. Treat figure-to-paper agents like GenSpark as powerful but constrained by current academic publishing expectations around researcher ownership.
  6. Use ThesisAI during revision to get structured, actionable feedback quickly, including “so what” importance checks and suggestions for next steps.
  7. Keep researchers responsible for creativity and scientific breakthroughs; AI should handle repeatable tasks that can be outsourced.

Highlights

SciSpace is framed as the best first subscription for early PhD work because it rapidly summarizes literature and links back to real papers, helping researchers escape the “bottomless pit.”
A canvas-based workflow with ChatGPT or Claude—using an existing abstract as a scaffold—aims to keep AI writing on-brand and less generic through continuous back-and-forth edits.
ThesisAI’s feedback goes beyond surface critique by checking whether claims pass the “so what” test and by offering next-step opportunities, venues, and grant matching.
