LESSON 66: RESEARCH METHODOLOGY | SECTIONS 3.6 DATA COLLECTION INSTRUMENTS & 3.7 DATA COLLECTION PROCEDURES
Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Data quality hinges on the tools and steps used to gather it: research proposal section 3.6 focuses on data collection instruments, while section 3.7 lays out the procedures for collecting data in the field. The core message is that instruments must match the chosen data collection method and the study’s research paradigm, and that instruments should be piloted to ensure they produce data that is both valid and reliable before the main study begins. Because final findings, conclusions, and recommendations depend on the accuracy of collected data, the proposal must clearly justify not only what tools will be used, but why they are suitable for answering the research problem, questions, and hypotheses.
The lesson starts by defining data collection as more than administering questionnaires or gathering responses: it involves measuring accurate data from relevant sources, which is then analyzed to address the research problem. Four factors shape what data collection will look like: the kind of data needed (numerical, narrative, or both), the method of data collection, the type of data (categorical, such as nominal/ordinal, versus continuous, such as interval/ratio), and how data will be scored to enable analysis. From these considerations, four main data collection methods are identified: administration of questionnaires, observation, interviews, and document analysis. Crucially, a “method” is the way of doing something, while an “instrument” is the specific tool used with that method.
A mapping is provided between methods and instruments. Questionnaires use a questionnaire instrument and are typically associated with quantitative research; probing during open-ended questions can yield both quantitative and qualitative-style responses. Observation uses either an observation guide (qualitative, unstructured) or an observation schedule (quantitative, structured). Interviews can be one-on-one or group formats such as focus group discussions; these use an interview guide (qualitative), a focus group discussion guide (qualitative), or an interview schedule (quantitative), described as a questionnaire administered orally without probing. Document analysis relies on a document analysis guide to elicit qualitative data.
Section 3.6 then requires detailed instrument description: how the tool is structured, who it is developed for, what information it elicits, and why the selected instrument(s) are appropriate compared with alternatives. The lesson also emphasizes triangulation—using more than one method or instrument to strengthen credibility—while noting that the expectation may vary by academic level. At graduate level, using multiple instruments is framed as more important to support triangulation.
Before the main study begins, instruments must be piloted. Piloting evaluates instrument efficacy, sampling strategies, and data collection methods using a sample whose characteristics are similar to those of the target population. It helps identify unclear questions, logistical problems, and resource needs, and shows whether the sampling frame and techniques work. Piloting is described as enhancing validity and reliability rather than determining them: validity is treated as the appropriateness or usefulness of the inferences drawn from the data, while reliability is the consistency of the data the instrument produces.
Section 3.7 shifts from tools to fieldwork steps: obtaining authorization and permits, visiting sites to build rapport and familiarize with the setting, training research assistants if used, and administering instruments according to the planned procedures. The lesson closes by positioning section 3.8 (analysis) as the next step after instruments are piloted and procedures are executed.
Cornell Notes
The lesson distinguishes data collection instruments (Section 3.6) from data collection procedures (Section 3.7) and ties both to data quality. Four factors shape data collection: the kind of data needed (numerical/narrative), the method, the data type (categorical vs continuous), and how data will be scored for analysis. Four core methods—questionnaires, observation, interviews, and document analysis—each pair with specific instruments (e.g., observation guide vs observation schedule; interview guide vs interview schedule). Instruments must be piloted on a sample that is similar to, but not part of, the main study sample to check clarity, logistics, and sampling fit, and to enhance validity and reliability. After piloting, the proposal must specify how validity and reliability will be determined, then outline field procedures such as authorization, site visits, rapport-building, assistant training, and instrument administration.
Why does the proposal need to treat “instruments” differently from “methods” of data collection?
How do the four data collection methods map to the instruments mentioned?
What four factors shape the design of data collection in this lesson?
What is the purpose of piloting instruments, and what sample is used?
How are validity and reliability treated after piloting?
What steps belong in Section 3.7 data collection procedures?
Review Questions
- What instrument would fit a structured observation approach, and how does it differ from an unstructured observation guide?
- Why does the lesson insist that piloting uses a sample similar to the main study but not included in it?
- How should a researcher justify choosing one instrument over another when multiple instruments could collect similar information?
Key Points
1. Data collection quality depends on matching instruments to the chosen data collection methods and the study’s research paradigm and design.
2. Data collection should be planned around four factors: data kind (numerical/narrative), method, data type (categorical vs continuous), and scoring for analysis.
3. Four primary data collection methods—questionnaires, observation, interviews, and document analysis—each require specific instruments (e.g., observation guide vs observation schedule).
4. Section 3.6 must describe instrument structure, target users, elicited information, suitability, and justification compared with alternatives.
5. Triangulation is encouraged through using more than one method/instrument, with expectations potentially increasing at graduate level.
6. Piloting is required before the main study to test clarity, logistics, sampling fit, and instrument efficacy, using a sample similar to but excluded from the main study.
7. Section 3.7 must outline field procedures including authorization, site visits, rapport-building, assistant training, and instrument administration.