AI Tools Academics SWEAR By (and the Ones to AVOID!)
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A clear split emerges between AI tools that genuinely accelerate academic work and tools that either cost too much, feel unfinished, or don’t add enough beyond what cheaper alternatives already do. The strongest “must-use” cluster centers on literature review generation, paper verification, and claim-level feedback—areas where time savings are immediate and the output can be checked against sources.
At the top of the list sits Thesis AI, which turns a batch of PDFs/references into a literature review from a single prompt. The workflow is framed as nearly one-button: supply the references (up to several dozen), wait briefly, and come back to a structured review. That "magic" effect is why it lands in S tier, even with caveats about how the free tier has changed and how well the tool fits different research fields.
Two more tools target different parts of the academic writing pipeline. Thesi earns its S tier as potentially supervisor-grade: feed it a thesis chapter or peer-reviewed paper draft, and it flags unsupported claims and other issues with detailed feedback, positioned as a direct substitute for time-consuming rounds of evidence checking. Jenny AI lands in A tier for academic writing assistance, with the caveat that its workflow can overwhelm new users. The tradeoff is speed and completeness: it handles multiple writing steps at once, which younger academics may adapt to more easily than established researchers.
For quality control and “final check” tasks, Paperpal (by Editage) is treated as a practical staple. It supports chat-with-PDF and document workflows, plus plagiarism and submission checks, and it’s described as usable for a lot of routine academic writing maintenance—especially when writing regularly. Consensus is another early-career-friendly tool, delivering a “consensus meter” and structured snapshots of research questions (including study types like randomized controlled trials and observational or animal studies), aiming to reduce hours of manual literature scanning.
On the research discovery side, Elicit and Connected Papers are both praised, but for different strengths. Elicit is highlighted for semantic search and systematic-review-style outputs, including the ability to add per-paper columns and filter by relevance. Connected Papers is valued for visual exploration and for surfacing derivative and prior works, but its graphs can be hard to interpret, and it ranks lower because it is most useful for narrow "what came before/after this seed paper?" questions.
Several tools land in mid-to-low tiers due to pricing, limited scope, or usability problems. Schy is placed in C tier mainly because it is expensive for students despite doing summarization and organization well. Lateral sits in B tier for strong features but is considered pricey relative to alternatives. Insightful is C tier because it appears outdated apart from a distinctive "connect two papers" feature. A sponsored review of a tool called Stylist ends in an F because it was buggy and unusable at test time, with hopes for future improvement.
The overall message is pragmatic: the best tools are those that reduce the most painful academic bottlenecks—turning PDFs into reviews, verifying claims, and quickly mapping what the literature agrees on—while weaker options tend to be either too costly, too narrow, or not stable enough for research-grade work yet.
Cornell Notes
The strongest academic AI tools in this tier list are those that compress high-effort tasks—especially literature review creation, evidence checking, and research-question “consensus” scanning—into fast, source-grounded workflows. Thesis AI earns S tier for generating literature reviews from a single prompt using many PDFs/references, described as “magic” in speed and usability. Thesi is also S tier for claim-level feedback: it reviews thesis chapters or papers and flags unsupported or incorrect claims with evidence-driven guidance. Paperpal (Editage) is positioned as a must-use for final checks (plagiarism/submission and document workflows), while Consensus and Elicit focus on structured research snapshots and semantic search. Tools that are buggy, outdated, or expensive relative to their feature set fall into lower tiers.
Why does Thesis AI rank as the top literature-review tool here?
What makes Thesi feel like “supervisor-grade” feedback?
How do Paperpal and Consensus differ in their academic value?
What are Elicit’s strengths compared with Connected Papers?
Which tools get downgraded primarily due to cost or stability—and why?
Why does the list treat writing assistants differently for older vs younger academics?
Review Questions
- Which tool(s) in the list are best suited for generating a literature review from existing PDFs, and what specific workflow feature earns them top ranking?
- Compare how Elicit and Connected Papers support literature discovery: what does each do well, and what limitation keeps Connected Papers from matching Elicit's usefulness?
- What kinds of problems push tools into lower tiers here (e.g., pricing, outdated updates, bugs), and which named examples illustrate each category?
Key Points
1. Thesis AI earns S tier for turning many PDFs/references into a literature review quickly from a single prompt.
2. Thesi is rated S tier for claim-level feedback, flagging unsupported or incorrect claims in thesis/paper drafts with evidence-driven guidance.
3. Paperpal (Editage) is positioned as a must-use for academic writing quality control, including plagiarism and submission checks plus document workflows.
4. Consensus and Elicit target different discovery needs: Consensus provides a structured "consensus meter" snapshot of research questions, while Elicit emphasizes semantic search and systematic-review-style outputs.
5. Connected Papers is useful for visual exploration of paper relationships, especially derivative/prior works, but it is less favored due to graph interpretability and limited fit for broader discovery.
6. Several tools drop in rank due to student-unfriendly pricing (Schy), outdated updates (Insightful), or poor stability/usability at test time (Stylist).