The Only AI Tools I’d Spend My Last $20 On (PhD Edition)
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
For PhD students, the biggest AI money-saver isn’t buying more subscriptions; it’s timing them. Instead of paying for a full suite of tools all at once, subscriptions should be staggered to match the exact weeks when each tool delivers the most value, with an emphasis on free tiers when they’re “good enough.” That approach matters because a spare $20 could just as easily cover groceries or other essentials as a single month of software; the practical goal is to spend only when AI directly removes weeks of work.
The first paid pick is SciSpace (rendered as “Sci-pace” in the transcript), positioned as a research assistant for the earliest, hardest phase of a PhD: figuring out what’s already been done. SciSpace helps users locate relevant literature, then compress it into fast, readable “TL;DR” summaries with clickable references that lead back to the actual papers. It also supports exploratory workflows: running a query like “the most efficient OPV devices,” producing a structured set of key breakthroughs and references, and letting researchers save and organize papers in a library. A notable feature is the ability to upload papers and build quick comparison views (such as adding columns tied to what the researcher wants to extract from each study), turning the literature “bottomless pit” into something navigable.
Once the literature is under control, the bottleneck shifts from discovery to writing. For drafting and editing, the transcript favors general large language models—specifically ChatGPT and Claude—because they adapt to a researcher’s tone and fit into iterative academic workflows. The recommended method uses a “canvas” workspace to avoid regenerating everything from scratch. The workflow is three steps: provide a scaffold (often by feeding an existing abstract as a model), ask the model to read and then generate new text in that style, and iterate by moving back and forth between the scaffold and the evolving draft. The key is continuous editing rather than one-shot generation, which helps keep outputs on track and less generic.
More aggressive “agent-like” tools are also mentioned, including GenSpark, which can take figures and generate a full paper draft with sections like abstract, introduction, materials and methods, captions, and references. But academic publishing rules limit how much “done-for-you” drafting can be used, because it removes decision-making that researchers must own.
For feedback—especially when supervisor comments can take weeks—the transcript highlights ThesisAI as a paid tool for near-instant critique. It produces structured feedback across elements such as title and abstract, introduction, thesis statement, and whether arguments pass the “so what” test. It also surfaces actionable next steps like suggested topics, opportunities for further work, and resources including journals, conferences, and grant matching.
Across all recommendations, the throughline is division of labor: AI should handle repeatable grunt work, while researchers keep responsibility for creativity, unexpected insights, and challenging assumptions—areas where predictive text generation can’t replace genuine scientific judgment. The practical advice ends where it began: identify the task that causes the most dread, pay for the tool that best accelerates it, and use AI to reduce stress while increasing output.
Cornell Notes
The transcript argues that PhD students get the most value from AI by paying only when a tool is actively needed, rather than subscribing to everything at once. SciSpace is recommended for early-stage research because it finds literature, summarizes papers into quick snapshots, and provides clickable references plus a library for saving and organizing PDFs. For writing, ChatGPT and Claude are favored, using a canvas-based workflow where researchers supply a scaffold (like an abstract), generate draft text, and iterate through back-and-forth edits to avoid generic output. ThesisAI is presented as a feedback accelerator that critiques sections like the thesis statement, applies the “so what” importance test, and can suggest next steps, venues, and even grant matches. The overall message: AI can automate much of the routine work, but researchers must still drive creativity and scientific breakthroughs.
Why does the transcript treat subscription timing as a “money-saving hack” for PhD students?
What specific problem does SciSpace target in the early PhD stage, and how does it help?
How does the recommended writing workflow reduce generic AI output?
What’s the trade-off with figure-to-paper tools like GenSpark?
What kinds of feedback does ThesisAI provide, and why is that useful during revision cycles?
What division of labor does the transcript propose between AI and researchers?
Review Questions
- Which stage of a PhD does SciSpace focus on, and what features make it suitable for that stage?
- Describe the scaffold-and-canvas writing workflow using ChatGPT or Claude. Why does iteration matter?
- What does ThesisAI evaluate beyond surface-level grammar, and how does that change the revision process?
Key Points
1. Time AI subscriptions to the weeks when each tool is actually needed, and rely on free tiers when possible to stretch a tight PhD budget.
2. Use SciSpace early to turn the literature search into a structured workflow with summaries, clickable references, and a personal library.
3. For academic writing, prefer ChatGPT or Claude with a canvas-based iterative process rather than one-shot generation.
4. Reduce generic output by providing a scaffold (e.g., an abstract) and repeatedly editing the draft in response to the model’s output.
5. Treat figure-to-paper agents like GenSpark as powerful but constrained by current academic publishing expectations around researcher ownership.
6. Use ThesisAI during revision to get structured, actionable feedback quickly, including “so what” importance checks and suggestions for next steps.
7. Keep researchers responsible for creativity and scientific breakthroughs; AI should handle repeatable tasks that can be outsourced.