
I am Lost, Where shall I start - Baby Steps for those beginning their Research Journey

Research With Fawad
6 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Identify a broad area of interest first, then narrow to a specific topic so literature searching and variable selection stay manageable.

Briefing

Starting a master’s or PhD research journey often begins with a single problem: “I don’t know what to do.” The practical fix offered here is a step-by-step workflow that turns confusion into a buildable research plan—beginning with choosing the right interest, then narrowing to a topic, and finally grounding everything in what already exists in the literature.

The first move is to identify an area of interest—HR management, finance, marketing, supply chain management, project success, and other domains all count. From there, the process narrows to a topic of interest inside that area. In HRM, for example, topics can include leadership, corporate social responsibility, internal marketing, and knowledge management. Once the topic is set, the next task is to check what has already been done, what could be done next, and which theories have been used to explain similar relationships.

For “what’s available,” Google Scholar is presented as the easiest entry point. The key is not to read everything—hundreds of papers can appear for a single keyword—but to filter for recency (such as selecting papers from 2009 onward) to see current directions. From there, the strongest shortcut is to locate systematic reviews, which have become increasingly common in business and management research. A systematic review consolidates decades of work into one place, showing publication trends, the journals publishing most in the area, and the impact factors that help identify where to aim for publication.

Using servant leadership as an example, the systematic review approach yields several concrete research assets in one document: (1) how the concept has evolved through competing or complementary definitions, (2) the measures and scales used to operationalize servant leadership (including whether a scale is unidimensional or multidimensional), (3) where studies have been conducted and at what levels (including gaps such as limited multi-level studies and scarce qualitative research), and (4) a “nomological network” mapping antecedents, mediators, outcomes, and moderators. That network is treated as a blueprint for originality—if existing studies test certain mediators with certain outcomes, new research can shift to different mediators, outcomes, moderators, or even antecedents.

Systematic reviews also provide ready-made directions for future research, often in the form of proposed research questions. The guidance is to combine multiple questions where appropriate to build a coherent model, rather than copying a single question. They also reduce theory-hunting: instead of searching from scratch, researchers can identify which theories have already been used for similar variables.

If systematic reviews are unavailable, the alternative is to read the latest papers on the topic—ideally open access—then record findings in an organized way. The recommended minimum is to read and summarize roughly 15–20 papers, storing key details in an Excel sheet: why the study matters, what gaps it addresses, which theory it uses, contributions and limitations, variables, results, and even the scales or questionnaires. The final emphasis is on avoiding premature model-building. Early-career researchers are warned against proposing a framework after only a few papers, because they may not understand definitions, measurement, or theory well enough to operationalize the study correctly.

The workflow ends with writing and feedback: start writing only after literature review and concept clarity, get drafts reviewed by experts and peers, and treat critique as part of the process rather than a reason to quit. The overall message is that research direction comes from disciplined reading, structured note-taking, and expert consultation—not from rushing into a model before the concept is fully understood.

Cornell Notes

The core guidance is a practical roadmap for turning “I don’t know what to do” into a research plan. It starts by selecting an area of interest, narrowing to a specific topic, then using Google Scholar to find what exists—especially systematic reviews, which consolidate definitions, measures, study trends, and research gaps. Systematic reviews also provide a nomological network (antecedents, mediators, outcomes, moderators) and future research questions, helping researchers design original models without guessing. If systematic reviews aren’t available, the fallback is reading 15–20 recent papers and recording key details (theory, variables, measures, results, limitations) in an Excel sheet. The process discourages early model-building before concepts are understood and properly operationalized.

How does narrowing from “area of interest” to “topic of interest” prevent research from becoming unfocused?

The workflow begins with broad domains (e.g., HR management, finance, marketing, supply chain management). Inside each domain, there are many possible topics—hundreds in some fields. Choosing a topic like servant leadership (within leadership) or knowledge management (within HRM) limits the search space so literature searching, measurement selection, and theory choice can be done with precision rather than trying to cover everything at once.

Why are systematic reviews treated as a high-leverage starting point, and what concrete outputs do they provide?

Systematic reviews consolidate decades of work into one document, making it feasible to understand a topic without reading hundreds of papers. They typically show publication trends by year and type, identify the journals most active in the area (including impact factors), summarize competing definitions, and list the measures/scales used to operationalize the construct. They also map where studies were conducted and at what levels, and they compile antecedents, mediators, outcomes, and moderators into a “nomological network.”

How can a researcher use a systematic review’s nomological network to create originality instead of repeating prior models?

The nomological network reveals which antecedents, mediators, outcomes, and moderators have already been tested together. If prior studies repeatedly pair a mediator with the same outcome, originality can come from changing one element—testing a new mediator with an existing outcome, pairing an existing mediator with a new outcome, introducing a new moderator, or exploring under-studied antecedents. The guidance emphasizes proposing new combinations rather than reusing the same tested pathway.
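The "change one element" strategy above can be sketched as a simple enumeration: list the mediators and outcomes a review reports, mark the pairings already tested, and keep the rest as candidates. All construct names and the "already tested" set below are invented placeholders for illustration, not drawn from any actual systematic review.

```python
from itertools import product

# Hypothetical construct lists for a servant-leadership model; the names
# below are illustrative placeholders, not taken from any specific review.
mediators = ["trust in leader", "psychological safety", "work engagement"]
outcomes = ["job performance", "turnover intention"]

# Pairings a (hypothetical) systematic review reports as already tested.
already_tested = {
    ("trust in leader", "job performance"),
    ("work engagement", "job performance"),
}

# Enumerate every mediator-outcome pairing and keep the untested ones --
# these are candidate starting points for an original model.
candidates = [pair for pair in product(mediators, outcomes)
              if pair not in already_tested]

for mediator, outcome in candidates:
    print(f"servant leadership -> {mediator} -> {outcome}")
```

The same enumeration extends naturally to moderators and antecedents; the point is to make the untested combinations visible before committing to a model.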

What does “don’t propose in haste” mean in practical terms?

It means not building a model after reading only two or three papers. Researchers must understand definitions, how variables are measured (including the scale items and whether a scale is unidimensional or multidimensional), and which theory justifies the relationships. The transcript warns that insufficient reading and lack of expert consultation can lead to incorrect operationalization—such as using the wrong type of survey item format or applying the wrong statistical test for the study design.

If systematic reviews aren’t available, what alternative strategy is recommended?

The fallback is to search for the latest papers on the topic (the transcript mentions an open-access Academy of Management paper from 2019 as an example). Researchers should read through multiple recent studies and record information systematically. The recommended effort is to read and summarize about 15–20 papers, capturing key details in an Excel sheet so gaps, measures, and theoretical patterns become visible.

How should researchers store and use information while reading papers?

The guidance is to store structured information, not just keep reading. The Excel sheet should capture the study’s value (why it matters), existing research and gaps, the theory used, contributions and limitations, variables and results, and any measurement instruments (questionnaires/scales). Summaries for 15–20 papers help researchers quickly locate where gaps, theoretical implications, and measurement tools appear when writing a thesis or proposal.
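The note-taking sheet described above is essentially a fixed-column table, one row per paper. A minimal sketch using only Python's standard library writes such a table as a CSV file that opens directly in Excel; the column names mirror the fields listed above, and the single example row is invented purely for illustration.

```python
import csv

# Columns mirror the note-taking fields described in the guidance.
FIELDS = ["title", "why_it_matters", "gaps_addressed", "theory",
          "contributions", "limitations", "variables", "results", "scales"]

# One invented example entry -- not a real study.
papers = [{
    "title": "Example servant leadership study (illustrative)",
    "why_it_matters": "Links servant leadership to team outcomes",
    "gaps_addressed": "Few multi-level studies",
    "theory": "Social exchange theory",
    "contributions": "Tests a multi-level model",
    "limitations": "Cross-sectional design",
    "variables": "servant leadership; trust; performance",
    "results": "Positive indirect effect via trust",
    "scales": "7-item unidimensional servant leadership scale",
}]

# Write a CSV that Excel opens directly; one row per summarized paper.
with open("literature_log.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(papers)
```

Keeping the columns fixed from the first paper onward is what makes the sheet useful later: gaps, theories, and scales can be scanned down a single column instead of re-read paper by paper.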

Review Questions

  1. What steps would you follow to move from a broad area (e.g., HRM) to a researchable topic (e.g., servant leadership) and then to a publishable research question?
  2. Which elements of a systematic review are most useful for designing originality, and how would you change one element (mediator/outcome/moderator/antecedent) to avoid duplication?
  3. How would you verify that your measurement plan and statistical approach match your research design before writing the proposal introduction?

Key Points

  1. Identify a broad area of interest first, then narrow to a specific topic so literature searching and variable selection stay manageable.
  2. Use Google Scholar with filters (such as recency) to map what has been done without attempting to read every result.
  3. Prioritize systematic reviews because they consolidate definitions, measures, study trends, journals, and research gaps into one source.
  4. Build originality by using the nomological network from systematic reviews to test new combinations of antecedents, mediators, outcomes, and moderators.
  5. Operationalize constructs early by extracting available scales/measures from the literature rather than reinventing measurement.
  6. Avoid premature model-building; understand definitions, theory, and measurement before proposing hypotheses or a conceptual framework.
  7. Store reading outputs in an Excel sheet (theory, variables, results, limitations, scales, and gaps) and get drafts reviewed by experts and peers.

Highlights

Systematic reviews provide more than summaries: they deliver definitions, measurement scales, publication trends, and a nomological network that can directly guide model design.
The fastest path to originality is not “new variables everywhere,” but changing one or more tested elements—mediators, outcomes, moderators, or antecedents—based on what prior studies already paired together.
Reading without recording doesn’t translate into writing; structured Excel notes help locate gaps, theories, and measurement instruments when drafting a thesis or proposal.
Early-career mistakes often come from rushing into a model without understanding operationalization—leading to wrong survey item formats or incorrect statistical tests for the design.