Researchers Beware: Avoid These Costly Mistakes When Using AI
Based on Andy Stapleton's video on YouTube. If you find this content useful, support the original creator by watching, liking, and subscribing.
Briefing
The biggest avoidable mistake with AI in research is treating it like a mind-reader—using vague prompts and relying on the model’s default knowledge instead of giving it a clear role, a specific task, and an output format. That mismatch leads to generic, poorly structured results that still look polished, which can waste time and create downstream errors in writing, citations, and submissions.
Prompting works best when it follows a simple, repeatable structure: assign the AI a role, define the task, and specify the format for the deliverable. For example, instead of asking broadly for “a literature review,” a stronger prompt tells the AI to act as a scientist, outlines the exact task (e.g., “outline a good literature review” on a defined topic), and demands a structured output such as headings. The format matters because it tells the system what “done” looks like.
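As a concrete sketch of that role-task-format pattern, here is a minimal Python helper; the function name, field values, and topic are illustrative, not taken from the video:

```python
# Minimal sketch of a role-task-format prompt builder.
# All names and example values here are illustrative.

def build_prompt(role: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt: who the AI should act as,
    what it should do, and what 'done' looks like."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    role="a scientist with expertise in renewable energy materials",
    task="outline a good literature review on perovskite solar cell stability",
    output_format="a numbered list of section headings only",
)
print(prompt)
```

The `output_format` field is doing the real work here: it defines the deliverable before the model starts generating.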
A second high-impact technique is linked prompting—building the output step by step rather than requesting the entire final product at once. Instead of asking for a full literature review in one go, the process starts with headings, then requests subheadings, then asks for a defined word count under each subheading with citations. If the AI drifts or misunderstands at any stage, the workflow allows stopping and reissuing the request with a revised prompt. This approach mirrors how researchers actually work: outline first, then expand with evidence.
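The staged workflow can be sketched as a chain of calls. In the sketch below, `ask()` is a hypothetical stand-in for whichever chat-model API you use, and the topic is invented for illustration:

```python
# Sketch of linked prompting: build the review in stages and check each
# stage before moving on. ask() is a hypothetical placeholder; replace
# its body with a real call to your chat model of choice.

def ask(prompt: str) -> str:
    return f"<model response to: {prompt[:50]}...>"  # placeholder only

topic = "machine learning for battery materials"  # illustrative topic

# Stage 1: top-level headings only.
headings = ask(
    f"Act as a scientist. Outline a literature review on {topic}. "
    "Return top-level headings only."
)

# Stage 2: subheadings under the approved headings.
subheadings = ask(
    f"Add two or three subheadings under each of these headings:\n{headings}"
)

# Stage 3: drafted text with a word budget and citations per subheading.
draft = ask(
    f"Write roughly 150 words under each subheading, with citations:\n{subheadings}"
)

# If the model drifts at any stage, stop, revise that stage's prompt,
# and rerun it instead of regenerating the whole review.
```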
Beyond prompting, the transcript warns against leaning on the base model’s built-in knowledge for research tasks. A more reliable strategy is to feed the AI your own data—such as PDFs, websites, or other materials—so it can generate outputs grounded in your specific sources. That requires upfront effort to build an internal “database” inside the AI session, but the payoff is more unique, tailored results that function like a personal assistant for your own information.
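A minimal sketch of that grounding step, assuming the `pypdf` library for text extraction; the file names and instruction are illustrative:

```python
# Sketch of grounding: extract text from your own PDFs and pass it as
# context, so answers come from your sources rather than base knowledge.
# Requires `pip install pypdf`; file names are illustrative.

from pypdf import PdfReader

def load_pdf_text(path: str) -> str:
    """Concatenate the extracted text of every page in a PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

sources = [load_pdf_text(name) for name in ("paper1.pdf", "paper2.pdf")]
context = "\n\n---\n\n".join(sources)

prompt = (
    "Using ONLY the source material below, summarise the methods used "
    "and note any points where the papers disagree.\n\n" + context
)
# Send `prompt` to your chat model as in the earlier sketches.
```

Note that long documents can exceed a model's context window, so in practice you may need to chunk or summarise sources before pasting them in.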
Still, AI is not a universal research engine. The transcript draws a clear line between strengths and weaknesses: general chat-based models are strong at language tasks like drafting concise paragraphs and rewriting text, but they don’t perform literature searches well. For literature discovery and mapping, researchers should use purpose-built tools such as Elicit (elicit.org), Litmaps, Connected Papers, and ResearchRabbit, then combine tools based on each one’s strengths.
Finally, responsibility doesn’t transfer to AI. Any text produced—especially for peer review, courses, or coursework—must be read, edited, and checked. Citations and references must be verified for relevance and accuracy, because AI can produce plausible-looking but incorrect or mismatched references. The core takeaway is practical: use AI as a tool, not an authority; guide it with structured, stepwise prompts; ground it in your own sources; and verify everything before submission.
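Part of that verification can be mechanised. The sketch below checks that a cited DOI actually resolves and returns its registered title for comparison against your reference list; it uses the public Crossref REST API, and the example DOI is illustrative:

```python
# Sketch of a mechanical citation check: does each cited DOI resolve,
# and does its registered title match the reference list?
# Requires `pip install requests`; the example DOI is illustrative.

import requests

def crossref_title(doi: str) -> str | None:
    """Return the registered title for a DOI, or None if lookup fails."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None

title = crossref_title("10.1038/s41586-020-2649-2")  # illustrative DOI
print(title or "Lookup failed: check this reference by hand")
```

A check like this catches fabricated or mistyped DOIs, but relevance and accuracy still have to be judged by reading the cited work.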
Cornell Notes
AI use in research goes wrong most often when prompts are vague and when users expect the model to “read their mind.” Better results come from structured prompting: specify a role, a concrete task, and an output format (e.g., headings for a literature review). Linked prompting improves reliability by building the work in stages—headings first, then subheadings, then a set word count per section with citations—re-prompting when the model misses the mark. For stronger accuracy, ground outputs in the user’s own materials by feeding PDFs or websites into the system. Even then, the researcher remains responsible for editing and verifying every claim and citation before submission.
- Why do non-specific prompts lead to costly mistakes in research writing?
- What does a “role-task-format” prompt look like, and why does it matter?
- How does linked prompting reduce errors compared with asking for a full draft at once?
- Why is feeding the AI your own data more reliable than relying on the base model?
- What’s the division of labor between chat-based AI and research-specific discovery tools?
- What responsibility remains with the researcher even after AI generates text and citations?
Review Questions
- What three elements should a prompt include to produce more usable research outputs?
- Describe linked prompting and explain how it changes the workflow compared with requesting a full literature review in one step.
- What verification steps must be taken before submitting AI-generated writing for peer review or coursework?
Key Points
1. Use structured prompts that specify a role, a concrete task, and a required output format to avoid generic results.
2. Adopt linked prompting by building outputs in stages (headings → subheadings → section text with citations) and re-prompt when instructions are missed.
3. Stop expecting AI to “read your mind”; guide it with explicit instructions for what the final deliverable should look like.
4. Ground AI outputs in your own materials (e.g., PDFs or websites) instead of relying on the model’s base knowledge.
5. Choose tools by strength: use chat-based AI for language drafting and research-specific tools for literature search and mapping.
6. Treat AI as a tool, not an authority; always edit and verify claims and citations before submission.
7. Assume responsibility for every reference: confirm that citations match the content and are appropriate for the research context.