
10Min Research - 29 - Points to Consider when Designing a Research Questionnaire

Research With Fawad · 4 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Treat each variable as a latent construct that must be measured through an instrument made of questionnaire items.

Briefing

Designing a research questionnaire starts with treating each variable as a “latent construct” that must be measured through an instrument—typically a set of questionnaire items. Those items don’t just collect opinions; they operationalize abstract concepts so the hypothesis can be tested with structural equation modeling (SEM) tools such as SmartPLS or SEM packages in R (e.g., lavaan). Because items can be dropped during analysis, the questionnaire needs enough indicators per construct—at least three items each—to keep the measurement model usable even after deletions.

The next critical checkpoint is alignment between conceptualization and operationalization. A construct’s definition in the literature must match what the questionnaire actually measures. For example, if corporate social responsibility (CSR) is conceptualized as a multi-dimensional construct covering economic, legal, ethical, and philanthropic responsibilities, but the questionnaire includes items only for the philanthropic/discretionary dimension, reviewers can flag the mismatch. That inconsistency undermines the credibility of the measurement because the instrument captures a different construct than the one described in the theory.

Once alignment is secured, the instrument must be both reliable and valid. A practical way to improve these properties is to draw items and scales from reputable journals—specifically journals listed in the Master Journal List referenced in earlier discussion. Reliability and validity aren’t just academic labels; they determine whether the items consistently measure the intended construct and whether they measure what they claim to measure.
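For instance, a common first screen for internal-consistency reliability before fitting the full measurement model is Cronbach's alpha. The sketch below computes it in Python with NumPy; the response matrix is invented for illustration and is not from the source.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a three-item construct
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
])
print(round(cronbach_alpha(responses), 3))  # → 0.897
```

Values around 0.7 or higher are conventionally read as acceptable reliability, though established scales from reputable journals will usually report their own reliability evidence.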

Question clarity also matters. Items should be easy to understand for respondents; confusing wording can distort responses and damage data quality. When using existing questionnaires, researchers should distinguish between “adopt” and “adapt.” Adoption means using the questionnaire as-is with no changes. Adaptation means making small wording adjustments to improve comprehension or fit the respondent profile while keeping the core essence of the items intact.

Finally, the response scale must fit the analysis method. Dichotomous answers, such as a yes/no response to a trust question, are non-metric and can create problems in SEM because they don’t behave like metric data. Instead, the guidance favors Likert-type scales—such as strongly disagree to strongly agree, or strongly dissatisfied to strongly satisfied—so each construct is measured through multiple items on a scale that supports quantitative modeling. The overall message is straightforward: questionnaire design is not a formality. It is the measurement foundation that determines whether the hypothesis test rests on sound, analyzable constructs.

Cornell Notes

A research questionnaire is the instrument used to measure latent variables so a hypothesis can be tested. Because structural equation modeling can delete weak items, each construct should have enough indicators—at least three items per construct—to remain analyzable. Conceptualization and operationalization must align: if CSR is defined across economic, legal, ethical, and philanthropic dimensions, the questionnaire must measure those dimensions rather than only one. Instruments should use reliable and valid scales, ideally drawn from reputable journals, and items must be easy for respondents to understand. For SEM, response formats should avoid yes/no answers and instead use Likert-type scales (e.g., strongly disagree to strongly agree) with multiple items per construct.

Why does questionnaire design matter before collecting data for hypothesis testing?

Questionnaires operationalize abstract (latent) constructs into measurable items. Each variable needs its own set of items so the measurement model can represent the constructs during structural equation modeling (e.g., Smart PLS or SEM in R/lavaan). Without proper measurement, the hypothesis test rests on constructs that aren’t actually captured by the data.
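As a hedged sketch of what this operationalization looks like, lavaan-style model syntax ties each latent construct (left of `=~`) to its questionnaire items, with a `~` line stating the structural hypothesis. The construct and item names below are hypothetical; the Python snippet simply parses the spec to show the indicator structure.

```python
# lavaan-style measurement-model spec (construct/item names are invented)
model_spec = """
CSR_ethical  =~ eth1 + eth2 + eth3
Trust        =~ tr1 + tr2 + tr3 + tr4
# structural part: the hypothesis to test
Trust ~ CSR_ethical
"""

def indicator_counts(spec: str) -> dict:
    """Count questionnaire items per latent construct in a lavaan-style spec."""
    counts = {}
    for line in spec.strip().splitlines():
        if "=~" in line:
            latent, items = line.split("=~")
            counts[latent.strip()] = len(items.split("+"))
    return counts

print(indicator_counts(model_spec))  # → {'CSR_ethical': 3, 'Trust': 4}
```

In R, a spec like this would be passed to lavaan's fitting functions; the point here is only that every latent variable in the hypothesis needs its own block of items.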

How many items should a construct include, and why?

At least three items per construct. In SEM workflows, items can be deleted during analysis if they don’t perform well, so starting with too few indicators risks leaving the construct with insufficient measurement coverage.
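To make the deletion risk concrete, here is a small Python sketch (constructs and items are invented): a construct that starts with exactly three items drops below the recommended minimum as soon as one weak item is deleted, while a four-item construct survives.

```python
# Hypothetical constructs mapped to their questionnaire items
measurement_model = {
    "CSR_economic":      ["econ1", "econ2", "econ3", "econ4"],  # starts with 4
    "CSR_philanthropic": ["phil1", "phil2", "phil3"],           # starts with 3
}

def under_measured(model: dict, minimum: int = 3) -> list:
    """Constructs whose indicator count fell below the recommended minimum."""
    return [name for name, items in model.items() if len(items) < minimum]

# Simulate SEM dropping one poorly performing item from each construct
measurement_model["CSR_economic"].remove("econ4")
measurement_model["CSR_philanthropic"].remove("phil2")
print(under_measured(measurement_model))  # → ['CSR_philanthropic']
```

Starting with a cushion above three items per construct keeps the measurement model usable even after such deletions.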

What goes wrong when conceptualization and operationalization don’t match?

A mismatch creates a measurement validity problem. Example: CSR is conceptualized as multi-dimensional (economic, legal, ethical, philanthropic), but the questionnaire includes items only for philanthropic/discretionary behavior. Reviewers can then argue the instrument measures a different (narrower) construct than the one defined in the theory.

What’s the difference between adopting and adapting a questionnaire?

Adoption means using the questionnaire exactly as it is, with no changes. Adaptation means changing wording slightly to improve clarity or fit the respondent profile while keeping the essence of the items the same.

Why are yes/no response scales risky for SEM?

Yes/no answers are dichotomous and not metric, which can complicate SEM analysis. The guidance favors Likert-type scales (e.g., strongly disagree to strongly agree, or strongly dissatisfied to strongly satisfied) so responses behave like scaled data suitable for modeling.
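As a sketch of the coding difference (the anchor labels below are illustrative, not from the source): Likert responses map onto an ordered range of integers that behaves like metric data, while a dichotomous item can only ever take two values.

```python
# Illustrative coding schemes for the two response formats
likert_5pt = {
    "strongly disagree": 1, "disagree": 2, "neutral": 3,
    "agree": 4, "strongly agree": 5,
}
dichotomous = {"no": 0, "yes": 1}

responses = ["agree", "strongly agree", "neutral", "disagree"]
coded = [likert_5pt[r] for r in responses]
print(coded)                              # → [4, 5, 3, 2] (ordered, multi-valued)
print(sorted(set(dichotomous.values())))  # → [0, 1] (only two possible values)
```

The multi-point coding is what lets SEM treat the items as approximately continuous indicators of the latent construct.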

How can researchers improve reliability and validity when building a questionnaire?

Use established scales from reputable sources—specifically journals listed in the Master Journal List mentioned earlier. This increases the chance that the items have been tested for reliability and validity, rather than inventing measures without evidence.

Review Questions

  1. What minimum number of items per construct is recommended for SEM, and how does item deletion affect this choice?
  2. Give an example of a conceptualization–operationalization mismatch and explain why it would be criticized.
  3. Why does the choice of response scale (e.g., Likert vs yes/no) matter for structural equation modeling?

Key Points

  1. Treat each variable as a latent construct that must be measured through an instrument made of questionnaire items.
  2. Use at least three items per construct to reduce the risk that SEM item deletion leaves the construct under-measured.
  3. Ensure conceptualization and operationalization align; the questionnaire must measure the same dimensions defined in the literature.
  4. Improve reliability and validity by drawing from established scales in reputable journals, including those listed in the Master Journal List.
  5. Write items that respondents can easily understand to prevent data-quality problems.
  6. Adopt questionnaires as-is or adapt them with minimal wording changes while preserving the items’ core meaning.
  7. Prefer Likert-type (metric) response scales over dichotomous yes/no formats for SEM analysis.

Highlights

Questionnaires are the measurement instruments that convert latent constructs into analyzable items for SEM.
At least three items per construct are recommended because SEM can delete items during analysis.
A CSR example shows how conceptualization–operationalization mismatch can invalidate measurement.
Likert-type scales are preferred over yes/no answers because dichotomous responses create SEM analysis issues.

Topics

Mentioned

  • SEM
  • PLS
  • CSR
  • SC