Research With ChatGPT - How to find the Questionnaire using #ChatGPT or Google #Bard
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Finding validated questionnaires for research variables is where ChatGPT and Bard can help, but only if the results are checked against the original papers. In a study on servant leadership and project success, the workflow starts with prompting an AI tool to “find me a questionnaire measure or scale for servant leadership.” The AI returns a scoring scale and, crucially, a citation (transcribed in the video as “Len Skill”), so the next step becomes verifying the scale in the source document rather than trusting the AI output.
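The video does all of this in the ChatGPT web interface, but the first step can also be scripted. Below is a minimal sketch, assuming the official openai Python SDK and a hypothetical model choice (neither is mentioned in the video); the prompt text is the one quoted above, and the citation in the reply is a lead to verify, not a fact.

```python
# Minimal sketch of the first prompt step, assuming the official
# `openai` Python SDK (the video itself uses the ChatGPT web UI).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{
        "role": "user",
        "content": (
            "Find me a questionnaire measure or scale for servant "
            "leadership. Include the number of items, the scoring scale, "
            "and a full citation for the original paper."
        ),
    }],
)

# The citation in the reply is a lead, not a fact: verify it in
# Google Scholar before using the instrument.
print(response.choices[0].message.content)
```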
That verification step quickly reveals a common mismatch: the AI may summarize a servant leadership instrument as a single dimension, while the original measure actually uses multiple subdimensions. The transcript highlights this by comparing the AI’s simplified structure with a paper that contains 10 items and subdimensions. The practical takeaway is that researchers should confirm not only the number of items but also the instrument’s dimensionality and how the constructs are operationalized.
When the initial scale doesn’t align with the paper’s structure, the prompt gets adjusted to request additional instruments. New prompts ask for “additional questionnaires” and list alternative options such as the Servant Leadership Assessment Instrument, the Servant Leadership Scale, the Servant Leadership Questionnaire, and the Servant Leadership Behavior Scale. Even then, the same rule applies: copy the instrument name into Google Scholar, open the original paper, and confirm that the items and references match what the AI provided.
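The Google Scholar check is done manually in the video: copy the instrument name into the search box and open the paper. For anyone scripting the triage step, a rough sketch is possible with the third-party scholarly package (an assumption not mentioned in the video); it only surfaces the top hit, so reading the actual paper remains necessary.

```python
# Sketch of the Google Scholar triage step, assuming the third-party
# `scholarly` package (pip install scholarly); the video does this
# manually in the browser.
from scholarly import scholarly

candidates = [
    "Servant Leadership Assessment Instrument",
    "Servant Leadership Scale",
    "Servant Leadership Questionnaire",
    "Servant Leadership Behavior Scale",
]

for name in candidates:
    hits = scholarly.search_pubs(name)
    top = next(hits, None)  # top hit only; a real check reads the paper
    if top is None:
        print(f"{name}: no hit -- treat the AI citation as suspect")
        continue
    bib = top["bib"]
    print(f"{name}: {bib.get('title')} ({bib.get('pub_year')})")
    # Opening the paper itself is still required: confirm the item
    # count, the subdimensions, and that the items match the AI output.
```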
The process then expands from retrieving questionnaires to retrieving definitions. Prompts request the definitions used in each scale, and the AI returns construct definitions for servant leadership. The transcript stresses that these definitions must be traced back to the exact locations in the original papers—where the statements appear—so researchers can accurately report how the construct is defined and measured.
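In a scripted session, the definition request would simply be a follow-up turn that keeps the earlier exchange in context. A sketch under the same assumptions as above (official openai SDK, hypothetical model name and prompt wording):

```python
# Follow-up turn requesting construct definitions, continuing the
# hypothetical scripted session from the earlier sketch.
from openai import OpenAI

client = OpenAI()
first_reply = "..."  # the assistant's reply to the first prompt step

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "user", "content": "Find me a questionnaire measure or "
                                    "scale for servant leadership."},
        {"role": "assistant", "content": first_reply},
        {"role": "user", "content": "For each scale, give the definition "
                                    "of servant leadership it uses and say "
                                    "where in the original paper it appears."},
    ],
)

# Trace each returned definition back to its exact location in the
# original paper before reporting it in the write-up.
print(response.choices[0].message.content)
```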
Bard is tested in parallel using similar prompts, with the transcript noting that prompt wording may need to change when results are incomplete or off-target. The comparison is less about which tool is “better” in the abstract and more about whether the output can be validated and used.
The same method is applied to the second variable: project success. An AI prompt requests a “scale/questionnaire to measure project success,” returning a “Project Success Assessment Scale” attributed to “B and guv” (spelling as transcribed in the video). Google Scholar verification shows the paper exists and includes items, but the AI’s item count may differ from the paper’s: the transcript notes nine items in the source versus seven captured in the AI output. The session closes with a methodological warning: AI tools can accelerate the discovery of instruments, but they cannot replace the understanding gained from reading research papers. Conceptualization and operationalization come from the literature, and the researcher remains responsible for the final write-up and decisions.
Cornell Notes
The session demonstrates a repeatable method for using ChatGPT and Bard to locate research questionnaires for constructs like servant leadership and project success. AI can generate candidate scales, scoring formats, and citations, but the workflow requires validation in the original paper via Google Scholar. A key lesson is that AI outputs may simplify dimensionality or misreport item counts, so researchers must confirm the number of items and subdimensions directly in the source. The transcript also shows how to use AI to retrieve construct definitions, then verify where those definitions appear in the paper. Ultimately, AI acts as an assistant for discovery, while conceptualization and operationalization come from reading the literature.
- How does the workflow start when searching for a servant leadership questionnaire?
- What mismatch can occur between AI output and the original servant leadership instrument?
- Why are follow-up prompts used after the first servant leadership scale attempt?
- How are definitions for constructs handled in this process?
- What happens when the same approach is applied to project success?
- What is the role of reading research papers in this workflow?
Review Questions
- When AI returns a servant leadership scale with a citation, what specific checks should be performed in the original paper?
- Give two examples of ways AI output can differ from the source instrument (e.g., dimensionality or item count).
- Why does the transcript insist that conceptualization and operationalization require reading research papers rather than relying on AI output?
Key Points
1. Use focused prompts to have AI suggest candidate questionnaires and provide citations, then verify every citation in Google Scholar.
2. Confirm not just the instrument name but also the item count and dimensional structure (single dimension vs. subdimensions) in the original paper.
3. If the first AI result doesn’t match the source, revise the prompt to request additional instruments and repeat the validation step.
4. Treat AI-provided construct definitions as leads; locate the exact wording in the original references before using them in a study.
5. When measuring multiple variables, apply the same discovery-and-validation workflow to each construct (e.g., servant leadership and project success).
6. Expect discrepancies in AI outputs such as missing items or incorrect item counts, and resolve them by checking the source document.
7. Use AI as an assistant for speed, but rely on the literature for conceptualization, operationalization, and final write-up decisions.