
How to develop an Interview Guide with ChatGPT (3 strategies)

5 min read

Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use ChatGPT to generate large pools of candidate interview questions, but start from your own written core research questions first.

Briefing

Qualitative interview guides can be upgraded quickly by using ChatGPT in two targeted ways: generating large pools of candidate questions and steering those questions with psychological theory—especially approaches tied to cognitive interviewing and memory retrieval. The payoff is practical: researchers can brainstorm faster, then refine prompts into questions that help participants stay cognitively engaged and recall concrete experiences they might otherwise struggle to remember.

A common starting point is to let ChatGPT draft an entire guide from a broad study description—such as a hypothetical project on nurses’ experiences during the pandemic, focusing on barriers and how they were overcome. That “open” method can produce a usable structure and introduction, but it often lands on generic or mismatched details, and the output may vary from day to day. In the presenter’s experience, the generated guide can include categories that look plausible yet are hard to justify for the specific study focus, meaning researchers still need to supply their own logic for what matters and why.

Instead of relying on a single all-at-once draft, the recommended workflow begins with the study’s key questions—here, “What barriers did nurses face during the pandemic?” and “How did they manage to overcome these barriers?” Those core questions are written down first, then researchers brainstorm their own additional question ideas. ChatGPT is then used to expand that brainstorming: one question at a time, with a detailed prompt asking for many options that account for memory limitations and encourage multiple perspectives. This approach helps overcome the real-world constraint that researchers can only generate so many ideas and theme groupings before they start repeating themselves.
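The transcript does not show the exact prompt wording, but the one-question-at-a-time workflow can be sketched as a small prompt builder. This is a minimal illustration, not the presenter's actual prompt: the function name, the default of 20 options, and the phrasing of the constraints are all assumptions.

```python
def expand_question_prompt(core_question: str, n: int = 20) -> str:
    """Compose a detailed prompt asking ChatGPT to expand ONE core
    research question into many candidate interview questions,
    with the memory and multiple-perspectives constraints attached."""
    return (
        "I am developing a qualitative interview guide. "
        f"My core research question is: '{core_question}'. "
        f"Suggest {n} candidate interview questions for this topic. "
        "Account for the fact that participants may struggle to remember "
        "specific events, and approach the topic from multiple perspectives."
    )

# Build the prompt for the first core question from the example study.
prompt = expand_question_prompt(
    "What barriers did nurses face during the pandemic?"
)
print(prompt)
```

Running this once per core question keeps the guide's structure anchored in the researcher's own logic, with ChatGPT supplying volume rather than direction.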

The second strategy uses theory to improve question quality. Rather than treating interview questions as purely creative prompts, ChatGPT can be asked to draw on psychological theories relevant to participant recall and engagement. The transcript highlights cognitive interviewing—an approach rooted in memory retrieval concepts and influenced by techniques used in police interrogations. The goal is to design questions that prompt detailed recollection, reducing the chance that participants omit key events simply because they cannot retrieve them.

Because theory suggestions can be inconsistent across runs, researchers are encouraged to treat ChatGPT as a direction-finder, not an authority. It may propose memory retrieval theory, resilience theory, or other cognitive frameworks; some will fit, others won’t. Researchers then select the most relevant theory and ask ChatGPT to generate question ideas grounded in that framework.
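The theory-selection step can be sketched the same way: once a researcher has picked a framework, a follow-up prompt asks ChatGPT to ground question ideas in it. Again, this is an illustrative template under assumptions; the wording is not taken from the transcript.

```python
def theory_grounded_prompt(theory: str, core_question: str) -> str:
    """Compose a follow-up prompt asking ChatGPT for interview
    questions grounded in a researcher-chosen theory, aimed at
    eliciting detailed, episode-specific recall."""
    return (
        f"Using {theory}, generate interview questions that help "
        "participants recall detailed, concrete experiences related to: "
        f"'{core_question}'. The questions should keep participants "
        "cognitively engaged and anchored to specific episodes."
    )

# Example: the transcript's chosen framework applied to the second core question.
followup = theory_grounded_prompt(
    "cognitive interviewing",
    "How did they manage to overcome these barriers?",
)
print(followup)
```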

The end result is an interview guide that digs deeper: questions become more specific and experience-focused, such as prompts that ask participants to walk through a particular time window of a memorable shift. The transcript repeatedly stresses a boundary: ChatGPT should not replace the researcher’s judgment, dissertation writing, or full analysis. Used carefully, it can still save substantial time and help make interview guides more professional by aligning them with established theory and better recall-oriented questioning.

Cornell Notes

ChatGPT can help build qualitative interview guides faster and more rigorously when used in two ways: (1) generate many candidate questions for the study’s core topics, and (2) shape those questions using psychological theories tied to recall and cognitive engagement. A broad “draft the whole guide” prompt can offer structure but often produces generic or hard-to-justify categories, and outputs can vary across runs. A better workflow starts with the researcher’s own key questions, then uses ChatGPT to expand brainstorming with prompts that account for participants’ memory limits and encourage multiple perspectives. For question quality, ChatGPT can suggest theory-informed approaches—particularly cognitive interviewing and memory retrieval concepts—to elicit detailed recollections (e.g., walking through a specific hour of a shift).

Why does a broad prompt that asks ChatGPT to “develop an interview guide” often fall short for qualitative research?

It tends to generate lots of plausible but not necessarily study-relevant content. In the example about nurses during the pandemic, the draft included categories like work-life balance, training, preparedness, and physical/mental health. Even when the structure looks good, the researcher still needs to justify why those specific groupings are the right fit for the study’s aims. The output can also change from day to day, meaning it may not consistently match the researcher’s intended focus.

What workflow improves interview-guide development beyond letting ChatGPT draft everything at once?

Start by writing the study’s key questions first (e.g., barriers nurses faced during the pandemic, and how they overcame them). Then brainstorm additional questions yourself, and use ChatGPT to expand that pool. The transcript emphasizes prompting ChatGPT one question at a time—asking for many ideas while accounting for participants’ memory challenges and encouraging different perspectives—so the final guide grows from the researcher’s logic rather than from an AI-generated structure.

How does theory use change the purpose of interview questions?

Theory turns questions into tools for cognitive engagement and recall, not just topic coverage. The transcript highlights cognitive interviewing, which is designed to help participants retrieve memories by keeping them engaged cognitively. ChatGPT can be asked to propose theories and then generate questions tied to those theories, aiming to reduce omissions caused by participants’ inability to remember.

What kinds of theories does ChatGPT suggest, and how should researchers handle them?

The transcript notes that ChatGPT may suggest memory retrieval theory, resilience theory, and other cognitive frameworks. Because the theory list can vary across runs, researchers should treat suggestions as starting points. They then select the theories that fit the study and ask ChatGPT to develop questions further based on the chosen framework.

What does a more recall-oriented, theory-informed question look like in practice?

Instead of staying general, questions become time-anchored and detail-seeking. The transcript gives an example: asking a participant to “walk me through the last hour of a memorable shift.” Such specificity supports memory retrieval by narrowing the recall target to a concrete episode.

What guardrails keep ChatGPT from replacing researcher judgment?

The transcript repeatedly warns against over-reliance: ChatGPT should not write dissertations, fully solve the research problem, or conduct the analysis. It’s best used to save time and provide direction—especially for brainstorming and theory-linked question development—while the researcher remains responsible for selecting, justifying, and refining what fits the study.

Review Questions

  1. When would it be better to prompt ChatGPT for many question ideas rather than asking it to draft the entire interview guide at once?
  2. How does cognitive interviewing relate to memory retrieval, and why does that matter for qualitative recall?
  3. What steps would you take to ensure ChatGPT’s suggested categories or theories are justifiable for your specific study focus?

Key Points

  1. Use ChatGPT to generate large pools of candidate interview questions, but start from your own written core research questions first.

  2. Avoid accepting an AI-generated full interview guide without checking whether each category is defensible for the study’s aims.

  3. Prompt ChatGPT with detailed constraints (e.g., participants may struggle to remember, and questions should draw on multiple perspectives).

  4. Use theory to improve recall and engagement—especially cognitive interviewing and memory retrieval concepts.

  5. Treat ChatGPT’s theory suggestions as variable leads; select the theories that fit and refine questions accordingly.

  6. Keep researcher judgment in charge: ChatGPT should assist with brainstorming and direction, not replace analysis or dissertation-level work.

  7. Iterate by asking follow-up prompts that deepen specific areas (e.g., expand questions grounded in a chosen theory).

Highlights

A broad “draft the whole guide” prompt can produce structure, but it often adds categories that are hard to justify and may vary across runs.
The most reliable workflow starts with the researcher’s key questions, then uses ChatGPT to expand brainstorming rather than replace it.
Theory-linked questioning—especially cognitive interviewing and memory retrieval—aims to make participants more cognitively engaged and better able to recall details.
Specific, experience-focused prompts (like walking through the last hour of a shift) can outperform generic questions for eliciting concrete recollections.
