
LESSON 66- RESEARCH METHODOLOGY||SECTION 3.6 DATA COLLECTION INSTRUMENTS & 3.7 COLLECTION PROCEDURES

5 min read

Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Data collection quality depends on matching instruments to the chosen data collection methods and the study’s research paradigm and design.

Briefing

Data quality hinges on the tools and steps used to gather it: research proposal section 3.6 focuses on data collection instruments, while section 3.7 lays out the procedures for collecting data in the field. The core message is that instruments must match the chosen data collection method and the study’s research paradigm, and that instruments should be piloted to ensure they produce data that is both valid and reliable before the main study begins. Because final findings, conclusions, and recommendations depend on the accuracy of collected data, the proposal must clearly justify not only what tools will be used, but why they are suitable for answering the research problem, questions, and hypotheses.

The lesson starts by defining data collection as more than administering questionnaires or gathering responses—it also includes measuring and then analyzing accurate data from relevant sources to address the research problem. Four factors shape what data collection will look like: the kind of data needed (numerical, narrative, or both), the method of data collection, the type of data (categorical such as nominal/ordinal versus continuous such as interval/ratio), and how data will be scored to enable analysis. From these considerations, four main data collection methods are identified: administration of questionnaires, observation, interviews, and document analysis. Crucially, “method” refers to the way of doing something, while “instrument” is the specific tool used with that method.

A mapping is provided between methods and instruments. Questionnaires use a questionnaire instrument and are typically associated with quantitative research; probing during open-ended questions can yield both quantitative and qualitative-style responses. Observation uses either an observation guide (qualitative, unstructured) or an observation schedule (quantitative, structured). Interviews can be one-on-one or group formats such as focus group discussions; these use an interview guide (qualitative), a focus group discussion guide (qualitative), or an interview schedule (quantitative), described as a questionnaire administered orally without probing. Document analysis relies on a document analysis guide to elicit qualitative data.

Section 3.6 then requires detailed instrument description: how the tool is structured, who it is developed for, what information it elicits, and why the selected instrument(s) are appropriate compared with alternatives. The lesson also emphasizes triangulation—using more than one method or instrument to strengthen credibility—while noting that the expectation may vary by academic level. At graduate level, using multiple instruments is framed as more important to support triangulation.

Before going to the main study, instruments must be piloted. Piloting evaluates instrument efficacy, sampling strategies, and data collection methods using a sample with similar characteristics to the target population. It helps identify unclear questions, logistical problems, resource needs, and whether the sampling frame and techniques work. Piloting is described as enhancing validity and reliability rather than determining them. Validity is treated as the appropriateness or usefulness of inferences, while reliability is the consistency of the data produced by the instrument.

Section 3.7 shifts from tools to fieldwork steps: obtaining authorization and permits, visiting sites to build rapport and familiarize with the setting, training research assistants if used, and administering instruments according to the planned procedures. The lesson closes by positioning section 3.8 (analysis) as the next step after instruments are piloted and procedures are executed.

Cornell Notes

The lesson distinguishes data collection instruments (Section 3.6) from data collection procedures (Section 3.7) and ties both to data quality. Four factors shape data collection: the kind of data needed (numerical/narrative), the method, the data type (categorical vs continuous), and how data will be scored for analysis. Four core methods—questionnaires, observation, interviews, and document analysis—each pair with specific instruments (e.g., observation guide vs observation schedule; interview guide vs interview schedule). Instruments must be piloted on a similar-but-not-main sample to check clarity, logistics, sampling fit, and to enhance validity and reliability. After piloting, the proposal must specify how validity and reliability will be determined, then outline field procedures such as authorization, site visits, rapport-building, assistant training, and instrument administration.

Why does the proposal need to treat “instruments” differently from “methods” of data collection?

A method is the approach for collecting data (e.g., questionnaires, observation, interviews, document analysis). An instrument is the specific tool used within that method (e.g., a questionnaire; an observation guide or observation schedule; an interview guide, focus group discussion guide, or interview schedule; a document analysis guide). The proposal should show coherence: the chosen instrument must fit the method and the study’s paradigm/design so the collected data can answer the research questions and test hypotheses.

How do the four data collection methods map to the instruments mentioned?

Questionnaires use a questionnaire instrument, typically linked to quantitative research; probing during open-ended questions can generate both numerical summaries and richer responses. Observation uses two instruments: an observation guide (qualitative, unstructured) and an observation schedule (quantitative, structured). Interviews can be one-on-one or focus group discussions; qualitative formats use an interview guide or focus group discussion guide, while quantitative interviews use an interview schedule described as an orally administered questionnaire without probing. Document analysis uses a document analysis guide to collect qualitative data.

What four factors shape the design of data collection in this lesson?

First, the kind of data needed: numerical, narrative, or both. Second, the method(s) of data collection to be used. Third, the type of data: categorical (nominal/ordinal) versus continuous (interval/ratio). Fourth, scoring of the data to enable analysis—especially important for quantitative analysis because the planned analysis depends on how the data will be measured and scored.

What is the purpose of piloting instruments, and what sample is used?

Piloting tests whether the instrument elicits the required information and whether the research process works in practice. It uses a sample that is not part of the main study but has similar characteristics to the target population. Piloting helps identify failures such as unclear questions, inappropriate sampling frame/techniques, logistical problems, and resource needs, and it checks whether respondents interpret and answer questions correctly.

How are validity and reliability treated after piloting?

Piloting enhances validity and reliability but does not determine them. Validity is framed as the appropriateness and usefulness of the instrument for making correct inferences, while reliability is the consistency of the data the instrument produces. The researcher must explain how validity and reliability will be determined for both quantitative and qualitative instruments, addressing validity first and then reliability.

What steps belong in Section 3.7 data collection procedures?

Section 3.7 outlines the fieldwork steps: obtain authorization and permits, visit the site to build rapport and familiarize with the environment, train research assistants if they are involved, and administer the instruments according to the planned procedures. The goal is to ensure the instrument administration and data gathering happen in an organized, ethical, and replicable way.

Review Questions

  1. What instrument would fit a structured observation approach, and how does it differ from an unstructured observation guide?
  2. Why does the lesson insist that piloting uses a sample similar to the main study but not included in it?
  3. How should a researcher justify choosing one instrument over another when multiple instruments could collect similar information?

Key Points

  1. Data collection quality depends on matching instruments to the chosen data collection methods and the study’s research paradigm and design.

  2. Data collection should be planned around four factors: data kind (numerical/narrative), method, data type (categorical vs continuous), and scoring for analysis.

  3. Four primary data collection methods—questionnaires, observation, interviews, and document analysis—each require specific instruments (e.g., observation guide vs observation schedule).

  4. Section 3.6 must describe instrument structure, target users, elicited information, suitability, and justification compared with alternatives.

  5. Triangulation is encouraged through using more than one method/instrument, with expectations potentially increasing at graduate level.

  6. Piloting is required before the main study to test clarity, logistics, sampling fit, and instrument efficacy, using a similar-but-not-main sample.

  7. Section 3.7 must outline field procedures including authorization, site visits, rapport-building, assistant training, and instrument administration.

Highlights

In this framework, “method” is the approach and “instrument” is the tool—proposal writing should use “data collection instruments” in Section 3.6.
Observation splits into two instrument types: an unstructured observation guide for qualitative work and a structured observation schedule for quantitative work.
Interview schedules are described as orally administered questionnaires without probing, distinguishing them from qualitative interview guides.
Piloting is positioned as enhancing validity and reliability, not replacing the separate processes used to establish them.
Data collection procedures include authorization, site familiarization, rapport-building, assistant training, and controlled instrument administration.
