
LESSON 39 - QUESTIONNAIRES: TYPES OF INFORMATION & DESIGNS OF CONSTRUCTING A QUESTIONNAIRE

5 min read

Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

A questionnaire’s effectiveness depends on response rate, completion rate, and response validity—returned, fully answered, and honest responses.

Briefing

A well-designed questionnaire depends less on clever wording and more on getting usable data: enough questionnaires come back, respondents finish them fully, and the answers are honest and accurate. That practical focus—response rate, completion rate, and response validity—drives every design choice, because weak returns or incomplete/invalid responses can’t produce credible findings for the research problem.

A questionnaire itself is defined as a set of standardized questions (items) meant to collect information from respondents. It can be self-administered, with respondents completing it independently, or administered by reading the questions aloud—when that happens, the instrument functions more like an interview schedule. The lesson also draws a clear line between “questions of research” (the questionnaire items and other instrument questions) and “research questions” (the investigative questions meant to solve the research problem). Confusing the two leads to misaligned instruments.

The questionnaire must target specific types of information tied to the study’s variables and conceptual framework. First are demographic information items—background details such as gender, marital status, education, and age—only when those details are relevant to the research problem. Second are knowledge questions that measure what respondents know about an issue; for example, assessing consumer awareness of counterfeit drugs. Third are attitude items, which capture liking or disliking toward a concept and are typically measured using a Likert scale. Fourth are self-perception questions that ask respondents to evaluate their own behavior or ability relative to others, such as rating their ability to identify counterfeit drugs; that’s not knowledge, but perceived competence.

Designing the questionnaire involves choosing between closed-ended and open-ended formats. Closed-ended questions pair concrete questions with a prepared list of response options, making analysis easier but risking bias because respondents must select from researcher-provided alternatives. Adding an “other” option can reduce forced-choice problems. Open-ended questions let respondents express thoughts freely, such as asking for recommendations to improve online learning.

To keep questionnaires effective, the lesson sets out do’s and don’ts for question construction. General rules include aligning every item with the research questions and conceptual framework, ensuring the question clearly states what information is needed, and testing clarity by asking what the respondent would understand before fielding it. Specific rules emphasize plain language, avoiding double-barreled items (questions that demand two answers at once), using neutral and non-offensive wording, and minimizing negative wording that can confuse interpretation. The lesson also warns against response-set pitfalls, where respondents detect patterns and answer consistently in the same direction; staggering items can help. Rating scales should be used consistently across the instrument, questionnaires should be brief and avoid duplication (often refined through piloting), and vague frequency terms like “often” or “regularly” should be replaced with actual frequency measures. Finally, questions should be specific rather than overly general, and items should not combine two assumptions or two questions in one prompt.

Overall, the lesson frames questionnaire construction as a chain: define the needed information types, choose question designs, and apply strict wording rules so the resulting instrument produces returned, completed, and valid data for answering the research problem. Next steps point toward demonstrating how to develop questionnaire items.

Cornell Notes

A questionnaire is a standardized set of items designed to collect information from respondents, either self-administered or read aloud as an interview schedule. Its success depends on response rate (returned questionnaires), completion rate (fully answered items), and response validity (honest, accurate answers), because weak data can’t support solutions to the research problem. The instrument should seek information aligned to the study’s variables and conceptual framework, typically including demographic information, knowledge, attitudes (often via Likert scales), and self-perception. Questionnaire design can use closed-ended questions (pre-set options) or open-ended questions (free responses), each with tradeoffs. Effective construction requires clear, simple, neutral, single-idea questions with consistent rating scales and unambiguous wording, avoiding double-barreled items, confusing negatives, and vague frequency terms.

What makes a questionnaire “successful” in practical research terms?

Success is measured through three linked outcomes: response rate (how many questionnaires are returned), completion rate (how many are fully completed with every item answered), and response validity (how honest and accurate the responses are). If questionnaires aren’t returned, aren’t completed, or contain invalid answers, the study can’t generate credible results for the research problem.
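In practice, the first two indicators are simple ratios (response validity has to be judged from the answers themselves). A minimal Python sketch, using illustrative tallies that are not from the lesson:

```python
# Hypothetical survey tallies (illustrative numbers, not from the lesson)
distributed = 200        # questionnaires sent out
returned = 150           # questionnaires that came back
fully_completed = 120    # returned questionnaires with every item answered

response_rate = returned / distributed        # share of questionnaires returned
completion_rate = fully_completed / returned  # share of returns fully answered

print(f"Response rate:   {response_rate:.0%}")    # 75%
print(f"Completion rate: {completion_rate:.0%}")  # 80%
```

Low values on either ratio shrink the usable sample before any analysis begins, which is why the lesson treats them as design concerns rather than afterthoughts.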

How do “questions of research” differ from “research questions,” and why does that matter for questionnaire design?

“Questions of research” are the questionnaire items—the actual prompts used in the instrument. “Research questions” are the investigative questions meant to address the research problem. Questionnaire items must be built to answer the research questions; mixing up the two leads to irrelevant items that don’t map onto the study’s conceptual framework.

What four types of information should questionnaires seek, and how can you tell them apart?

The lesson lists: (1) demographic information (background variables like gender, marital status, education, age—only if relevant), (2) knowledge questions (what respondents know, e.g., awareness of counterfeit drugs), (3) attitude items (liking/disliking toward a concept, commonly measured with a Likert scale), and (4) self-perception questions (respondents’ evaluation of their own behavior/ability, e.g., how well they think they can identify counterfeit drugs).

What are the main differences between closed-ended and open-ended questionnaire items?

Closed-ended questions provide prepared options and respondents select an answer, which can introduce bias because choices are constrained; adding an “other” option can reduce forced-choice effects. Open-ended questions allow respondents to express thoughts freely, such as asking for recommendations to improve online learning. Closed-ended items are easier to code; open-ended items can capture nuance but require more analysis.

Which wording problems most often distort questionnaire responses, and what fixes are recommended?

Key pitfalls include: double-barreled items (asking for two things at once), negative wording that confuses interpretation, response-set patterns where respondents notice a trend and answer the same way, and vague frequency terms like “often” or “regularly.” Fixes include using simple language, ensuring each item asks for one clear idea, using neutral wording, staggering items to break patterns, and replacing vague frequency terms with actual frequency categories (e.g., once, twice, thrice, more than three times).
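The frequency fix amounts to turning a vague adverb into a closed-ended item with concrete count categories. A small sketch, where the question wording and categories are illustrative assumptions in the spirit of the lesson's advice:

```python
# Illustrative closed-ended frequency item replacing "Do you often buy
# medicine online?" — categories are assumptions, not from the lesson
question = "How many times did you buy medicine online last month?"
options = ["Never", "Once", "Twice", "Three times", "More than three times"]

# Coding responses for analysis: map each category label to a numeric code
codes = {label: i for i, label in enumerate(options)}
print(codes["Twice"])  # 2
```

Because every respondent interprets “twice” the same way (unlike “often”), the coded values can be compared and tabulated without guessing what each respondent meant.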

Why should rating scales and questionnaire length be handled carefully?

Rating scales should be consistent across the instrument so respondents face a stable response pattern (avoid mixing “very high/high/low” with “very satisfied/satisfied/not satisfied” if possible). Length should be brief, clear, and concise—avoid duplicating questions and irrelevant items. Piloting helps identify questions that are duplicated, unclear, or tiring, improving completion and data quality.

Review Questions

  1. How would you decide whether a demographic variable belongs in a questionnaire? Give an example tied to a research problem.
  2. Rewrite a vague frequency question (using “often” or “regularly”) into a frequency-specific item consistent with the lesson’s guidance.
  3. What steps would you take to prevent response-set bias when using multiple Likert-style items?

Key Points

  1. A questionnaire’s effectiveness depends on response rate, completion rate, and response validity—returned, fully answered, and honest responses.
  2. A questionnaire is a standardized set of items; it can be self-administered or read aloud, in which case it functions like an interview schedule.
  3. Questionnaire items must align with the study’s research questions and conceptual framework to avoid irrelevant questions.
  4. Questionnaires should seek information types that match the variables: demographic data, knowledge, attitudes (often via Likert scales), and self-perception.
  5. Closed-ended questions offer fixed response options but can bias answers; open-ended questions allow free responses but require more analysis.
  6. Question wording should be simple, neutral, single-idea, and unambiguous—avoiding double-barreled items, confusing negatives, and vague terms like “often.”
  7. Use consistent rating scales and keep the questionnaire brief; piloting helps remove duplication and reduce respondent fatigue.

Highlights

Response rate, completion rate, and response validity are treated as the core indicators of whether a questionnaire can produce usable research results.
The lesson distinguishes “questions of research” (questionnaire items) from “research questions” (investigative prompts) and insists on alignment with the conceptual framework.
Attitudes are positioned as best captured with Likert scale items, while self-perception questions measure perceived ability rather than knowledge.
Closed-ended questions can force choices and introduce bias, while open-ended questions give respondents freedom to explain recommendations.
Avoiding response-set bias and vague frequency wording (“often,” “regularly”) is presented as essential for clearer, more interpretable data.

Topics

  • Questionnaires
  • Types of Information
  • Closed vs Open-Ended Questions
  • Questionnaire Design Rules
  • Likert Scale Items
