Grant Writing Is Broken - AI Just Exposed the Shortcut
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Use a three-step AI workflow: discover grants, generate a first draft, then verify alignment with the official assessor/peer-review criteria.
Briefing
Grant writing is framed as a high-stakes form of “academic panhandling,” but AI tools can compress the hardest parts—finding funding, drafting sections, and aligning the narrative with what reviewers look for. The core workflow has three steps: locate relevant grants, generate a first draft that includes the required components (especially background and literature), then run a structured review against the grant’s own peer-review or assessor criteria to maximize scoring.
For finding grants, two AI-assisted discovery routes are recommended. Sispace is positioned as a marketplace of researcher-built “agents,” including some built specifically for grant discovery; they are searchable by keywords such as “grants” or “grant” and filterable by grant type. Thesify takes a different approach: after you upload a paper, it returns a “grant” tab with matches tied to the paper’s themes, including opening dates and deadlines (with the caveat that older uploads may yield outdated listings). The practical point is to treat these tools as an additional layer on top of traditional grant databases, not a replacement.
Drafting is treated as the next bottleneck, with AI used to generate a template-like grant application in one pass. Sispace’s “grant writer” agent is demonstrated on an Australian Research Council (ARC) grant prompt; the agent pulls together credentials (via a Google Scholar search), project framing, references, background, significance, research objectives, innovation, research streams, risk management, and budget justification. The output is described as long and reference-rich—around 8,500 words—and useful as a starting point even though titles and budget details still require human editing. The emphasis is on speed and leverage: revising an AI-generated draft is easier than producing everything from scratch.
A second drafting tool, Grantable, is presented as more science- and research-oriented than many general-purpose AI grant writers. It includes quick actions for tasks like onboarding, research, refining, and adding sources, and for generating elements such as grant letters of intent and budget/narrative guidance. Grantable also has a “discover” feature (in beta) that surfaces grants by topic, though it’s noted as US-centric.
The final, and most strategic, move is “reviewing the grant” using the grant’s own assessor or peer-review guide. The transcript recommends taking the official reviewer criteria (tables and scoring definitions) and feeding them into a large language model such as ChatGPT, NotebookLM, Claude, or Gemini alongside the draft. The goal is to check whether each criterion is explicitly satisfied and made easy for reviewers to “tick off” quickly. A key warning: each criterion must be addressed visibly, up front in the application; if a reviewer can’t spot where a criterion is met, it won’t be credited. The overall message is that AI doesn’t just draft: it helps tailor the submission to the scoring rubric, turning grant writing from guesswork into a checklist-driven process.
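The rubric-check step can be partly mechanized before the LLM pass. Below is a minimal sketch: a naive keyword pre-check that flags criteria with no visible signpost in the draft, plus a helper that assembles the review prompt to paste into an LLM. The criterion names, keywords, and prompt wording are illustrative assumptions, not any funder's actual rubric or any tool's API.

```python
# Hypothetical sketch of the "review against the assessor guide" step.
# Criteria/keywords below are made-up examples, not ARC's real scoring table.

def precheck_criteria(draft: str, criteria: dict) -> dict:
    """For each criterion, report whether any of its signpost keywords
    appear verbatim in the draft (a rough visibility heuristic only)."""
    text = draft.lower()
    return {
        name: any(kw.lower() in text for kw in keywords)
        for name, keywords in criteria.items()
    }

def build_review_prompt(draft: str, assessor_guide: str) -> str:
    """Assemble a rubric-check prompt to paste into an LLM
    (ChatGPT, Claude, Gemini, etc.) alongside the draft."""
    return (
        "You are reviewing a grant draft against the funder's official "
        "assessor criteria.\n"
        "For EACH criterion below, quote the draft text that satisfies it, "
        "score it against the guide's scoring definitions, and flag any "
        "criterion a reviewer could not tick off quickly.\n\n"
        f"ASSESSOR CRITERIA:\n{assessor_guide}\n\nDRAFT:\n{draft}"
    )

# Example (hypothetical criteria):
criteria = {
    "significance": ["significance", "impact"],
    "innovation": ["innovation", "novel"],
    "feasibility": ["risk management", "timeline"],
}
draft = "The significance of this project... Our key innovation... Risk management plan: ..."
print(precheck_criteria(draft, criteria))
# prints {'significance': True, 'innovation': True, 'feasibility': True}
```

The pre-check only catches criteria that are entirely unsignposted; the LLM prompt does the deeper work of judging whether each criterion is actually satisfied, not merely mentioned.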
Cornell Notes
The transcript lays out an AI-assisted grant-writing workflow: find suitable grants, generate a first draft, then score-check it against the official peer-review/assessor criteria. For discovery, Sispace offers grant-finding agents, while Thesify matches grants to a paper’s themes after upload, including deadlines and opening dates. For drafting, Sispace’s grant writer agent can produce a long, structured ARC-style application with background, objectives, innovation, research streams, risk management, and references—still requiring human edits. The most important step is using the grant’s reviewer guide as a rubric: large language models can compare the draft to each scoring element so reviewers can quickly see the evidence. This matters because explicit alignment with criteria can improve acceptance odds.
- What’s the recommended end-to-end AI workflow for grant writing?
- How does Sispace help with grant discovery and drafting?
- How does Thesify connect a researcher’s work to specific grants?
- Why is reviewing against the peer-review guide treated as the key advantage?
- Which tools are suggested for the rubric-based review step?
Review Questions
- If a grant’s assessor guide includes a scoring table, what specific check should an applicant run using a large language model to avoid losing points?
- Compare Sispace and Thesify for grant discovery: what input does each tool use, and what output does each tool produce?
- What sections of a grant application does the transcript claim AI can generate quickly, and which parts still require human editing?
Key Points
1. Use a three-step AI workflow: discover grants, generate a first draft, then verify alignment with the official assessor/peer-review criteria.
2. Sispace can be used both to find grants via grant-focused agents and to draft grant applications through a grant writer agent.
3. Thesify matches grants to a researcher’s uploaded paper and returns likely-fit opportunities with match percentages and deadline information.
4. AI-generated drafts can produce structured, reference-rich grant content (including executive summaries, objectives, innovation, and risk management), but titles and budget details still need human revision.
5. Grantable is positioned as a research-oriented grant drafting tool with actions for refining narratives and adding sources, plus a “discover” feature (noted as US-centric).
6. The highest-leverage step is rubric checking: feed the assessor guide tables into a large language model alongside the draft to confirm each criterion is explicitly addressed and visible to reviewers.