
RESEARCH OBJECTIVES, RESEARCH QUESTIONS & HYPOTHESES

Based on RESEARCH METHODS CLASS WITH PROF. LYDIAH WAMBUGU's video on YouTube.

TL;DR

Research is problem-driven: a gap between literature and what researchers want to know determines the study’s variables and structure.

Briefing

A research study’s core structure hinges on a tight chain: a clearly defined research problem creates objectives, objectives shape investigative research questions, and those questions lead to testable hypotheses. The lesson matters because it turns what often feels like interchangeable wording—“objectives,” “questions,” and “hypotheses”—into a coherent system that determines what gets measured, what gets analyzed, and what conclusions can legitimately follow.

The session begins with poll results that frame the discussion. Most participants agree that a “good research study” must include objectives, research questions, and hypotheses. They also agree that objectives should be broader than research questions: the purpose (general objective) sets direction, while the questions narrow the focus to what will actually be answered. Another key poll outcome links all three concepts to determining the relationship between variables.

From there, the class grounds the framework in a problem-driven view of research. Research is likened to going to a hospital: people seek care because something is missing or wrong, and the doctor’s treatment depends on the problem description. Likewise, research begins with a gap between what literature says and what researchers want to know next. That gap—derived from the title and refined through preliminary literature review—drives the selection of variables and ultimately the study’s objectives, questions, and hypotheses.

A major portion of the lesson clarifies how titles translate into variables and then into objectives and questions. Using an example title about “influence of institutional capacity on the performance of parastatals,” the instructor shows how the independent variable (institutional capacity) and dependent variable (performance) become the backbone of the study. The independent variable is then broken into measurable components (e.g., capacity building, stakeholder participation, resource mobilization, employees’ attitude). Objectives are written to assess specific relationships between these components and performance, and research questions mirror the objectives but in question form.

The class also stresses coherence in wording. If the title uses “influence,” objectives and questions should use corresponding action terms (e.g., “determine the influence of…”). Objectives must be specific, measurable, and aligned with the title; they use active verbs such as “determine,” “assess,” “compare,” “evaluate,” and “establish.” The instructor links verb choice to methodological implications: “determine the relationship” points toward correlational approaches and correlation analysis; “establish the effect” implies before-and-after intervention logic (experimental or quasi-experimental designs); “determine the influence” suggests cross-sectional survey logic; and “to what extent” requires baseline data and scale-based measurement.

Finally, hypotheses are treated as predictions that must state a “significant” relationship or a “significant” difference. They are not phrased as “significant influence/effect” claims; instead, they are tested using inferential statistics to infer population parameters from sample statistics. The lesson explains confidence levels (commonly 95%), significance levels (alpha, often 0.05), and the decision rule using p-values: if p < alpha, reject the null hypothesis; if p > alpha, fail to reject it. It closes by distinguishing null from alternative hypotheses and directional from non-directional alternatives, and by noting the practical reality that conclusions in Chapter 4 follow the statistical decision rather than the wording of the hypothesis itself.

Cornell Notes

The lesson lays out a chain from research problem to measurable study components: a research problem (gap in literature) generates objectives, objectives are translated into investigative research questions, and those questions lead to hypotheses that can be statistically tested. Objectives should be broader than research questions, while research questions should be specific and mirror the objectives in order and wording. Hypotheses differ from questions because they must predict “significant” relationships or “significant” differences and are tested using inferential statistics. The choice of objective wording (e.g., relationship vs effect vs extent) signals the likely design and analysis approach, and hypothesis testing decisions follow p-values compared against alpha.

How does a research problem determine the rest of a study’s structure?

The research problem is defined as the gap between what literature says and what researchers still need to know. That gap is derived from the research title and refined through preliminary literature review. Once the gap is identified, it determines which variables matter, and those variables are then translated into objectives. Objectives then shape investigative research questions, and the questions inform what hypotheses must be tested.

Why should objectives be broader than research questions?

The general objective (purpose) sets direction and is broad—often written in statement form using “investigate” or “aims to.” Specific objectives narrow the focus and are written in a way that flows from the title and variables. Research questions then become the question form of those specific objectives, making them more precise about what will be answered.

How does the wording of objectives (relationship vs effect vs extent) affect research design and analysis?

Verb choice signals methodological logic. “Determine the relationship” implies correlational logic and typically correlation analysis (e.g., Pearson Product Moment Correlation for continuous variables). “Establish the effect” implies an intervention with before-and-after comparison, pointing to experimental or quasi-experimental designs. “Determine the influence” is linked to cross-sectional survey logic. “To what extent” requires baseline data and scale-based measurement (e.g., Likert-type scale questions) to judge change or magnitude.
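The Pearson Product Moment Correlation mentioned above can be computed directly from paired observations. A minimal sketch follows; the variable names and data values (a capacity index paired with performance scores, echoing the parastatals example) are hypothetical illustrations, not figures from the lesson.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sd_x = sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sd_y = sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores: institutional capacity index vs performance score
capacity = [3.1, 4.0, 2.5, 3.8, 4.5, 2.9]
performance = [62, 78, 51, 74, 85, 58]
print(round(pearson_r(capacity, performance), 3))
```

A value near +1 or -1 indicates a strong linear relationship; whether it is statistically significant is then judged against alpha, as described in the hypothesis-testing section.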

What makes a research question different from a “question of research”?

Investigative research questions guide what the study will answer through data collection and analysis; they are not yes/no items. The lesson reserves “questions of research” for instrument items—questions placed in questionnaires, interview guides, or observation guides. A yes/no item such as “Does training influence sustainability?” is therefore a question of research, not a research question; the research questions that drive the analysis must be investigative and aligned with the objectives.

What distinguishes hypotheses from objectives and research questions?

Hypotheses are predictions that must include “significant” relationship or “significant” difference. They are not written as “significant influence/effect” claims. Hypotheses are tested using inferential statistics to infer population parameters from sample statistics. The lesson also emphasizes that in Chapter 4, researchers reject or fail to reject the null hypothesis rather than “accept” hypotheses.

How is the decision made when testing hypotheses using p-values?

Researchers set alpha (significance level) based on the confidence level (commonly 95%, so alpha = 0.05). They compare the calculated p-value to alpha: if p < alpha, reject the null hypothesis and conclude there is a significant relationship/difference; if p > alpha, fail to reject the null and conclude there is no significant relationship/difference. The conclusion follows this rule, not the wording of the hypothesis.
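The decision rule described above reduces to a single comparison. Here is a minimal sketch assuming the common 95% confidence level (alpha = 0.05); the sample p-values are hypothetical, for illustration only.

```python
# Significance level = 1 - confidence level (0.95 -> alpha = 0.05)
ALPHA = 0.05

def decide(p_value, alpha=ALPHA):
    """Apply the lesson's rule: reject H0 when p < alpha."""
    if p_value < alpha:
        return "reject the null hypothesis: significant relationship/difference"
    return "fail to reject the null hypothesis: no significant relationship/difference"

print(decide(0.013))  # p < 0.05 -> reject H0
print(decide(0.210))  # p > 0.05 -> fail to reject H0
```

Note the wording: the function never returns “accept,” matching the lesson's point that researchers reject or fail to reject the null hypothesis.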

Review Questions

  1. In what ways do objectives, research questions, and hypotheses differ in purpose and wording, and how does that affect what gets measured and analyzed?
  2. Given an objective phrased as “establish the effect of X on Y,” what design logic and analysis approach does the lesson associate with it?
  3. Explain how alpha and p-values determine whether the null hypothesis is rejected or failed to be rejected, and what conclusion follows each case.

Key Points

  1. Research is problem-driven: a gap between literature and what researchers want to know determines the study’s variables and structure.
  2. General objectives (purpose) are broad; specific objectives narrow the focus, and research questions translate those objectives into investigative question form.
  3. Objective wording must align with the title (e.g., “influence/effect/effectiveness/extent”) to maintain coherence from Chapter 1 through analysis.
  4. Verb choice in objectives signals methodological implications: relationship often aligns with correlational logic, effect with intervention before/after logic, and extent with baseline and scale measurement.
  5. Research questions are investigative (not yes/no) and should be ordered to match their corresponding specific objectives.
  6. Hypotheses must predict “significant” relationships or “significant” differences and are tested using inferential statistics; conclusions follow p-value vs alpha decisions.
  7. In hypothesis testing, researchers reject or fail to reject the null hypothesis (not “accept” hypotheses), and the Chapter 4 conclusion must match the statistical decision rule.

Highlights

A title’s variables (IV and DV) should directly generate objectives, which then generate research questions in matching order and wording.
“To determine the relationship” vs “establish the effect” vs “to what extent” is more than style—it signals different design and analysis expectations.
Hypotheses must include “significant” relationship/difference; they should not be phrased as “significant influence/effect.”
The p-value decision rule is straightforward: p < alpha leads to rejecting the null; p > alpha leads to failing to reject it.
Research questions guide instrument construction and analysis choices, while “questions of research” live inside questionnaires/interview guides/observation tools.
