
This AI Makes Winning Research Grants Stupidly Easy

Andy Stapleton · 4 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Thesify is presented as an AI assistant that supports both academic writing and grant proposal preparation using rubric-based, field-targeted feedback.

Briefing

Thesify is positioning itself as an end-to-end AI assistant for academics—one that doesn’t just polish writing, but also helps researchers decide what to do next and where to submit, with the explicit goal of improving odds of getting published and funded. After uploading a document (paper, thesis, essay, or grant proposal), the system guides users through a structured workflow, then delivers field-aware feedback using rubrics and targeted recommendations.

For manuscript drafts, Thesify’s core value is granular critique paired with actionable upgrades. Users get an overall assessment plus section-by-section feedback (including areas like introduction, purpose, and other rubric-aligned components). Newer features highlighted in the transcript include title and abstract improvement: the tool provides rubric-based evaluations of both elements and offers detailed instructions for strengthening them. A chat interface—described with a playful “robotic cat” persona (“Theo”)—lets users ask follow-up questions, with the assistant using the uploaded document, the rubric framework, and the system’s prior feedback summary to generate tailored advice. Users can also export the chat and feedback for offline work, including exporting feedback to a PDF to reduce screen dependence.

Beyond writing, Thesify adds a research-planning layer that turns a finished draft into a set of usable outputs. The interface surfaces keywords, main claims (take-home messages), and research questions—items that can directly support journal submissions and help shape future work. It also recommends publications, conferences, and journals, including journal matching tied to impact factors, so researchers can justify submission targets to supervisors. A collection feature lets users save items for later review, while the “grants” section lists potential funding opportunities aligned to the work presented.

The transcript also emphasizes a parallel workflow for grant proposals. After uploading a grant, Thesify provides targeted feedback designed to match what funding bodies look for. The system breaks down performance into a feedback summary and component-level assessment, flags superfluous components (areas where too much detail could bore reviewers), and offers title-specific feedback. It then prompts users to refine key sections such as background, goals, methods, impact, and partners—again with follow-up Q&A available through Theo.

Finally, the tool's breadth is presented as the justification for its pricing: Thesify is described as costing $499 per month. The pitch frames Thesify as a comprehensive academic workflow—writing improvement, submission targeting, and grant readiness—aimed at converting research effort into higher publication and funding success.

Cornell Notes

Thesify is presented as an AI workflow for academics that goes beyond editing. After uploading a paper or grant proposal, it uses rubric-based feedback to assess sections like purpose, introduction, and—newly highlighted—title and abstract, then generates targeted, actionable recommendations through a chat interface (“Theo”). The tool also extracts practical submission materials such as keywords and main claims, suggests research questions, and recommends journals, conferences, and grants matched to the work. For grant applications, it provides component-level critique, flags superfluous detail, and reviews background, goals, methods, impact, and partners to improve alignment with funding expectations. Offline export (including PDF) is offered to support focused revision.

How does Thesify help improve an academic paper draft beyond basic grammar or rewriting?

It provides rubric-based feedback tied to the user’s document and field. After uploading a PDF or other draft, the interface returns an overall assessment plus targeted section feedback (for example, areas like introduction and purpose). It also includes title and abstract evaluation using a rubric, along with detailed instructions for improving those elements. The chat (“Theo”) can then generate follow-up, document-specific guidance based on the rubric and the system’s prior feedback summary.

What new capabilities are highlighted for title and abstract work?

The transcript calls out “title and abstract” as a new feature. Users get a rubric-based assessment of both items and can hover over icons to see what each rubric component does. The tool then provides detailed instruction on how to improve the title and abstract, with feedback targeted to the paper and the user’s study area.

What outputs does Thesify generate that can directly support journal submission and future research planning?

It surfaces keywords for journal submission, main claims (take-home messages) to ensure the paper communicates what the author intends, and research questions for what to explore next. It also lists opportunities such as publications and conferences, and it supports saving items into collections for later review.

How does Thesify recommend where to publish or present work?

For journals, it provides matches that include impact factors, giving researchers a defensible shortlist to discuss with a supervisor. It also recommends conferences where the work could be presented and surfaces publications related to the submitted content.

What does Thesify do differently for grant proposals compared with papers?

After uploading a grant, it delivers targeted feedback aimed at impressing funding bodies. The tool provides a feedback summary and component-level assessment, identifies superfluous components (detail that could bore reviewers), and includes title feedback. It then focuses on key grant sections—background, goals, methods, impact, and partners—while allowing follow-up questions through Theo.

Review Questions

  1. What rubric-based feedback categories does Thesify provide for papers, and how does that feedback translate into specific revision actions?
  2. Which submission-related artifacts (e.g., keywords, main claims) does Thesify extract, and how could they be used during manuscript preparation?
  3. For grant proposals, how does Thesify handle both missing components and “superfluous” detail, and which grant sections receive targeted review?

Key Points

  1. Thesify is presented as an AI assistant that supports both academic writing and grant proposal preparation using rubric-based, field-targeted feedback.
  2. Uploading a draft triggers a guided workflow that produces an overall assessment plus section-level critique, including areas like purpose and introduction.
  3. Newly emphasized features include rubric-based evaluation and improvement instructions for title and abstract.
  4. The assistant can generate targeted follow-up advice through a chat interface (“Theo”) and supports exporting feedback (including PDF) for offline revision.
  5. Beyond editing, Thesify extracts submission-ready elements such as keywords and main claims and suggests research questions, publications, conferences, and journals matched by impact factors.
  6. For grant proposals, it provides component-level feedback, flags superfluous detail, and reviews background, goals, methods, impact, and partners to better align with funding expectations.
  7. Pricing is stated as $499 per month, positioned as justified by the tool’s breadth of academic and funding support.

Highlights

Thesify’s rubric-based feedback targets not just writing quality but the structure reviewers expect—especially for titles, abstracts, and grant components.
A “Theo” chat workflow turns uploaded documents into interactive, document-specific revision guidance, with export options for offline work.
The platform links research outputs to next steps: keywords, main claims, research questions, journal/conference recommendations, and grant opportunities matched to the work.
For grant proposals, the system flags “superfluous components,” aiming to reduce reviewer boredom while strengthening the proposal’s core sections.
