Researchers Beware: Avoid These Costly Mistakes When Using AI

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use structured prompts that specify a role, a concrete task, and a required output format to avoid generic results.

Briefing

The biggest avoidable mistake with AI in research is treating it like a mind-reader—using vague prompts and relying on the model’s default knowledge instead of giving it a clear role, a specific task, and an output format. That mismatch leads to generic, poorly structured results that still look polished, which can waste time and create downstream errors in writing, citations, and submissions.

Prompting works best when it follows a simple, repeatable structure: assign the AI a role, define the task, and specify the format for the deliverable. For example, instead of asking broadly for “a literature review,” a stronger prompt tells the AI to act as a scientist, outlines the exact task (e.g., “outline a good literature review” on a defined topic), and demands a structured output such as headings. The format matters because it tells the system what “done” looks like.
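The role-task-format pattern can be sketched as a simple template that assembles the three elements into one prompt. This is a minimal illustration; the topic and exact wording below are hypothetical examples, not quotes from the video:

```python
# Sketch of a role-task-format prompt template.
# The role, task, and format strings are illustrative placeholders.

def build_prompt(role: str, task: str, output_format: str) -> str:
    """Combine the three required elements into one structured prompt."""
    return f"Acting as {role}, {task}. {output_format}."

prompt = build_prompt(
    role="a scientist",
    task="outline a good literature review on organic photovoltaic devices",
    output_format="Give the output as a list of headings",
)
print(prompt)
```

The point of the template is that no element is optional: dropping the format string is exactly the "vague prompt" failure mode, because the model is left to guess what "done" looks like.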

A second high-impact technique is linked prompting—building the output step by step rather than requesting the entire final product at once. Instead of asking for a full literature review in one go, the process starts with headings, then requests subheadings, then asks for a defined word count under each subheading with citations. If the AI drifts or misunderstands at any stage, the workflow allows stopping and reissuing the request with a revised prompt. This approach mirrors how researchers actually work: outline first, then expand with evidence.
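The staged workflow above can be sketched as an ordered sequence of prompts, each issued only after the previous stage's output has been checked. The stage wording and the `ask` callback are hypothetical; they stand in for whatever chat model is actually used:

```python
# Sketch of a linked-prompting loop: each stage builds on prior outputs,
# and the loop stops early if a stage goes wrong so it can be re-issued.
# Stage wording is an illustrative example, not from the video.

stages = [
    "Outline a literature review on the topic as a list of headings.",
    "Add two or three subheadings under each heading.",
    "Write roughly 200 words under each subheading, with citations.",
]

def run_linked_prompts(stages, ask):
    """Issue prompts one stage at a time, accumulating context.

    'ask' is a placeholder for a call to the chat model; returning None
    models the AI drifting off-task, at which point the loop stops so
    the user can revise the prompt and retry that stage.
    """
    context = []
    for prompt in stages:
        response = ask(prompt, context)
        if response is None:  # model misunderstood: stop and re-prompt
            break
        context.append(response)
    return context

# Toy 'ask' that just echoes, to demonstrate the control flow:
results = run_linked_prompts(stages, lambda p, ctx: f"output for: {p}")
print(len(results))
```

The design choice worth noting is the early exit: because each deliverable is small, a misunderstanding surfaces after one cheap stage rather than inside a full draft.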

Beyond prompting, the transcript warns against leaning on the base model’s built-in knowledge for research tasks. A more reliable strategy is to feed the AI your own data—such as PDFs, websites, or other materials—so it can generate outputs grounded in your specific sources. That requires upfront effort to build an internal “database” inside the AI session, but the payoff is more unique, tailored results that function like a personal assistant for your own information.

Still, AI is not a universal research engine. The transcript draws a clear line between strengths and weaknesses: general chat-based models are strong at language tasks like drafting concise paragraphs and rewriting text, but they don’t perform literature searches well. For literature discovery and mapping, researchers should use purpose-built tools such as elicit.org, Lit Maps, Connected Papers, and Research Rabbit, then combine tools based on each one’s strengths.

Finally, responsibility doesn’t transfer to AI. Any text produced—especially for peer review, courses, or coursework—must be read, edited, and checked. Citations and references must be verified for relevance and accuracy, because AI can produce plausible-looking but incorrect or mismatched references. The core takeaway is practical: use AI as a tool, not an authority; guide it with structured, stepwise prompts; ground it in your own sources; and verify everything before submission.

Cornell Notes

AI use in research goes wrong most often when prompts are vague and when users expect the model to “read their mind.” Better results come from structured prompting: specify a role, a concrete task, and an output format (e.g., headings for a literature review). Linked prompting improves reliability by building the work in stages—headings first, then subheadings, then a set word count per section with citations—re-prompting when the model misses the mark. For stronger accuracy, ground outputs in the user’s own materials by feeding PDFs or websites into the system. Even then, the researcher remains responsible for editing and verifying every claim and citation before submission.

Why do non-specific prompts lead to costly mistakes in research writing?

Non-specific prompts encourage generic outputs that may look coherent but don’t match the exact structure, scope, or evidence needed for a real literature review. The transcript emphasizes that AI won’t automatically infer the intended layout or the specific deliverable. Without a clear role, task, and required format, the model tends to produce results that are harder to verify and more likely to contain citation mismatches.

What does a “role-task-format” prompt look like, and why does it matter?

The recommended structure is: (1) role (e.g., “as a scientist”), (2) task (e.g., “outline a good literature review” on a defined topic like organic photovoltaic devices), and (3) format (e.g., “give it to me in headings”). This combination gives the model a concrete job and a measurable output structure, making the results easier to use and revise.

How does linked prompting reduce errors compared with asking for a full draft at once?

Linked prompting breaks the deliverable into steps that mirror researcher workflow. First request headings, then subheadings, then ask for a fixed word count under each subheading with citations. If the AI fails to follow instructions at any step, the user can stop and re-prompt with a corrected structure. This staged approach limits drift and makes it easier to catch problems early.

Why is feeding the AI your own data more reliable than relying on the base model?

The transcript argues that the most powerful use comes from providing your own inputs—like PDFs or websites—so the AI generates outputs grounded in your materials rather than the model’s general training knowledge. That requires time to build an internal working set, but it produces more unique, specific results and supports a more assistant-like workflow for your particular research.

What’s the division of labor between chat-based AI and research-specific discovery tools?

Chat-based models are portrayed as strong at language tasks: drafting, rewriting, and producing concise paragraphs from provided information. They’re described as weaker at literature search and discovery. For literature mapping and search, the transcript points to tools designed for researchers—elicit.org, Lit Maps, Connected Papers, and Research Rabbit—then recommends combining tools based on each tool’s strengths.

What responsibility remains with the researcher even after AI generates text and citations?

The transcript is explicit: users are responsible for everything AI outputs, especially for peer review or coursework. That means reading and editing the draft, and verifying that every reference and citation is appropriate and accurate. It warns that AI can generate plausible citations that may be irrelevant or incorrect, so checks are mandatory.

Review Questions

  1. What three elements should a prompt include to produce more usable research outputs?
  2. Describe linked prompting and explain how it changes the workflow compared with requesting a full literature review in one step.
  3. What verification steps must be taken before submitting AI-generated writing for peer review or coursework?

Key Points

  1. Use structured prompts that specify a role, a concrete task, and a required output format to avoid generic results.
  2. Adopt linked prompting by building outputs in stages (headings → subheadings → section text with citations) and re-prompt when instructions are missed.
  3. Stop expecting AI to “read your mind”; guide it with explicit instructions for what the final deliverable should look like.
  4. Ground AI outputs in your own materials (e.g., PDFs or websites) instead of relying on the model’s base knowledge.
  5. Choose tools by strength: use chat-based AI for language drafting and research-specific tools for literature search and mapping.
  6. Treat AI as a tool, not an authority—always edit and verify claims and citations before submission.
  7. Assume responsibility for every reference: confirm that citations match the content and are appropriate for the research context.

Highlights

Vague prompts and mind-reading expectations are a fast route to generic, hard-to-verify research writing.
Linked prompting—headings, then subheadings, then fixed-length sections with citations—improves control and reduces drift.
AI can draft well, but literature discovery often requires purpose-built tools like elicit.org, Lit Maps, Connected Papers, and Research Rabbit.
Even with AI assistance, researchers must verify every citation and edit the output before peer review or coursework submission.