
Research With ChatGPT - How to find the Questionnaire using #ChatGPT or Google #Bard

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use focused prompts to have AI suggest candidate questionnaires and provide citations, then verify every citation in Google Scholar.

Briefing

Finding validated questionnaires for research variables is where ChatGPT and Bard can help, but only if the results are checked against the original papers. In a study on servant leadership and project success, the workflow starts with prompting an AI tool to “find me a questionnaire measure or scale for servant leadership.” The AI returns a scoring scale and, crucially, a citation (transcribed in the video as “Len Skill”), so the next step is verifying the scale in the source document rather than trusting the AI output.

That verification step quickly reveals a common mismatch: the AI may summarize a servant leadership instrument as a single dimension, while the original measure actually uses multiple subdimensions. The transcript highlights this by comparing the AI’s simplified structure with a paper that contains 10 items and subdimensions. The practical takeaway is that researchers should not only confirm the number of items but also confirm the instrument’s dimensionality and how constructs are operationalized.
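The checks described above are manual, but the comparison logic can be sketched as a small, hypothetical Python helper. The `ScaleRecord` fields and the example numbers are illustrative assumptions, not values from the video; the point is simply that item count and dimensionality are both checked, not just the instrument name.

```python
# Hypothetical checklist: compare an AI-suggested scale's metadata
# against what the original paper actually reports.
from dataclasses import dataclass

@dataclass
class ScaleRecord:
    name: str
    items: int           # number of questionnaire items
    subdimensions: int   # 1 means a single-dimensional instrument

def verify(ai: ScaleRecord, source: ScaleRecord) -> list[str]:
    """Return the mismatches that must be resolved by reading the paper."""
    issues = []
    if ai.items != source.items:
        issues.append(f"item count: AI says {ai.items}, paper says {source.items}")
    if ai.subdimensions != source.subdimensions:
        issues.append(f"dimensionality: AI says {ai.subdimensions}, "
                      f"paper says {source.subdimensions}")
    return issues

# Illustrative mismatch: AI reports a single-dimensional 7-item scale,
# while the paper describes 10 items across several subdimensions.
for issue in verify(ScaleRecord("Servant Leadership Scale", 7, 1),
                    ScaleRecord("Servant Leadership Scale", 10, 4)):
    print(issue)
```

An empty result from `verify` does not mean the scale is safe to use; it only means the two metadata fields agree, and the researcher still reads the source paper before adopting the instrument.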

When the initial scale doesn’t align with the paper’s structure, the prompt gets adjusted to request additional instruments. New prompts ask for “additional questionnaires” and list alternative options such as the Servant Leadership Assessment Instrument, the Servant Leadership Scale, the Servant Leadership Questionnaire, and the Servant Leadership Behavior Scale. Even then, the same rule applies: copy the instrument name into Google Scholar, open the original paper, and confirm that the items and references match what the AI provided.

The process then expands from retrieving questionnaires to retrieving definitions. Prompts request the definitions used in each scale, and the AI returns construct definitions for servant leadership. The transcript stresses that these definitions must be traced back to the exact locations in the original papers—where the statements appear—so researchers can accurately report how the construct is defined and measured.

Bard is tested in parallel using similar prompts, with the transcript noting that prompt wording may need to change when results are incomplete or off-target. The comparison is less about which tool is “better” in the abstract and more about whether the output can be validated and used.

The same method is applied to the second variable: project success. An AI prompt requests a “scale/questionnaire to measure project success,” returning a “Project Success Assessment Scale” attributed to “B and guv” (spelling as transcribed in the video). Google Scholar verification shows the paper exists and includes items, but the AI’s item count may differ from the paper’s (the transcript notes nine items in the source versus seven captured in the AI output). The session closes with a methodological warning: AI tools can accelerate discovery of instruments, but they cannot replace the understanding gained from reading research papers. Conceptualization and operationalization come from the literature, and the researcher remains responsible for the final write-up and decisions.

Cornell Notes

The session demonstrates a repeatable method for using ChatGPT and Bard to locate research questionnaires for constructs like servant leadership and project success. AI can generate candidate scales, scoring formats, and citations, but the workflow requires validation in the original paper via Google Scholar. A key lesson is that AI outputs may simplify dimensionality or misreport item counts, so researchers must confirm the number of items and subdimensions directly in the source. The transcript also shows how to use AI to retrieve construct definitions, then verify where those definitions appear in the paper. Ultimately, AI acts as an assistant for discovery, while conceptualization and operationalization come from reading the literature.

How does the workflow start when searching for a servant leadership questionnaire?

It begins with a targeted prompt such as: “find me a questionnaire measure or scale for servant leadership.” The AI returns a scoring scale and a reference, after which the researcher checks the citation in Google Scholar and opens the original paper to confirm the instrument’s details.

What mismatch can occur between AI output and the original servant leadership instrument?

The transcript notes that AI may present the scale as single-dimensional, while the original instrument uses multiple subdimensions. Verification in the paper also checks item counts—for example, the original source is described as having 10 items.

Why are follow-up prompts used after the first servant leadership scale attempt?

Because initial results may be incomplete, structurally inconsistent, or insufficient for the researcher’s needs. The transcript shows the prompt being revised to request additional questionnaires/instruments (e.g., Servant Leadership Assessment Instrument, Servant Leadership Scale, Servant Leadership Questionnaire, Servant Leadership Behavior Scale), each of which is then validated against its original paper.

How are definitions for constructs handled in this process?

AI can provide definitions used in the scales, but the transcript emphasizes tracing those statements back to the original references. Researchers should locate where the definitions appear in the paper and read the source text before using the definitions in their write-up.

What happens when the same approach is applied to project success?

A prompt asks for a project success scale/questionnaire, and AI provides a candidate instrument and citation (labeled as “Project Success Assessment Scale” by B and guv). Google Scholar verification confirms the paper exists, but the transcript highlights that item counts may differ between AI output and the paper (nine items in the source versus seven captured in the AI output).

What is the role of reading research papers in this workflow?

Reading is presented as essential for understanding conceptualization and operationalization. AI can speed up instrument discovery, but the researcher must make the final decisions and produce accurate write-ups based on direct engagement with the literature.

Review Questions

  1. When AI returns a servant leadership scale with a citation, what specific checks should be performed in the original paper?
  2. Give two examples of ways AI output can differ from the source instrument (e.g., dimensionality or item count).
  3. Why does the transcript insist that conceptualization and operationalization require reading research papers rather than relying on AI output?

Key Points

  1. Use focused prompts to have AI suggest candidate questionnaires and provide citations, then verify every citation in Google Scholar.

  2. Confirm not just the instrument name but also the item count and dimensional structure (single dimension vs. subdimensions) in the original paper.

  3. If the first AI result doesn’t match the source, revise the prompt to request additional instruments and repeat the validation step.

  4. Treat AI-provided construct definitions as leads; locate the exact wording in the original references before using them in a study.

  5. When measuring multiple variables, apply the same discovery-and-validation workflow to each construct (e.g., servant leadership and project success).

  6. Expect discrepancies in AI outputs such as missing items or incorrect item counts, and resolve them by checking the source document.

  7. Use AI as an assistant for speed, but rely on the literature for conceptualization, operationalization, and final write-up decisions.

Highlights

AI can quickly surface candidate scales and citations, but original-paper verification is mandatory to confirm items and structure.
A common failure mode is dimensionality drift: AI may describe a single-dimension scale while the source uses multiple subdimensions.
Even when a paper is found, AI may misreport item counts, as with the project success instrument where the source is described as having nine items versus seven captured in the AI output.
