
How to Develop a Simple Questionnaire/Scale to Measure a Concept that Doesn't have a Scale

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Start scale development by selecting the target concept and defining it operationally for the specific study context.

Briefing

When a concept lacks an existing questionnaire or scale, the practical path is to build one from scratch by (1) defining the concept for the specific study and (2) deriving measurable characteristics from both literature and expert input. The core idea is that scale development starts with conceptual work: first clarify what the concept means in this context, then translate that meaning into questionnaire items that can later be tested statistically.

The process begins by selecting the target concept to measure—community commitment is used as an example. Next comes operationalization: the concept must be defined in a way that fits the study’s goals. Even if related constructs like “commitment” have definitions, “community commitment” may not. In that case, researchers should review how commitment is defined in existing scholarship and use those descriptions to draft a study-specific definition. But that draft should not be created in isolation. Expert review is treated as essential: practitioners and community leaders who have worked on projects, alongside academic experts who have studied project-related phenomena, should weigh in on whether the proposed definition captures the concept as they understand it.

After the definition is shaped, the next task is to identify key characteristics—what “community commitment” consists of in observable terms. Two sources feed this step. Relevant research helps generate keywords and characteristic themes tied to commitment in community or engagement contexts. Then experts are asked directly what commitment means to them and how communities become committed or can increase their commitment. The output of these two streams is a set of key characteristics that can be turned into questionnaire statements.

Once key characteristics are identified, researchers draft items that directly reflect those characteristics. The transcript illustrates this with example statements such as believing in the project’s objectives, identifying with the project, wanting the project to succeed, and viewing project failure as the community’s failure. These items can form a unidimensional scale when the concept is treated as a single factor—no sub-dimensions are assumed, and all items measure the same underlying construct.
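As a minimal illustration of unidimensional scoring, the drafted items can be administered on a Likert response format and averaged into one composite score, since all items are assumed to measure the same construct. The item wordings and response values below are hypothetical, adapted from the examples in the text:

```python
# Hypothetical items drafted from the key characteristics in the example
items = [
    "I believe in the objectives of the project.",
    "I identify with the project.",
    "I want the project to succeed.",
    "If the project fails, I consider it our community's failure.",
]

# One respondent's answers on a 5-point Likert scale
# (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 4, 3]

# Unidimensional scoring: all items tap one underlying construct, so average them
composite = sum(responses) / len(responses)
print(composite)  # 4.0
```

The averaging step is only defensible once the one-factor assumption has been checked; that is exactly what the later factor-analysis step is for.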

The framework also allows for multidimensional structure. If theory or preliminary reasoning suggests that the concept splits into distinct components, items can be grouped into factors. The example imagines dividing commitment into "continuance commitment" and "affective commitment" (the standard terms from the organizational-commitment literature, which the transcript renders as "continuous" and "effective"), where each factor is measured by its own subset of items, and both factors together represent the overall construct. Importantly, the transcript emphasizes that factor structure must ultimately be validated with data; exploratory factor analysis is flagged as the later step to confirm whether the proposed grouping holds.
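The later EFA step can be previewed with a simplified sketch: simulate responses in which two clusters of items each track a separate latent factor, then inspect the eigenvalues of the item correlation matrix. Under the common Kaiser rule (retain factors with eigenvalue greater than 1), the two-factor structure should emerge. This is a numpy-only illustration of the idea, not a full EFA with extraction and rotation; a real analysis would use a dedicated routine (e.g. the `factor_analyzer` package in Python, or `fa()` in R's `psych`):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulate two latent factors (e.g. two commitment dimensions)
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
noise = rng.normal(scale=0.5, size=(n, 6))

# Items 1-3 load on factor 1, items 4-6 on factor 2
X = np.column_stack([
    f1 + noise[:, 0], f1 + noise[:, 1], f1 + noise[:, 2],
    f2 + noise[:, 3], f2 + noise[:, 4], f2 + noise[:, 5],
])

# Eigenvalues of the item correlation matrix, largest first
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]

# Kaiser criterion: eigenvalues > 1 suggest how many factors to retain
n_factors = int((eigvals > 1.0).sum())
print(n_factors)  # 2 with this simulated two-factor structure
```

With real questionnaire data the eigenvalue pattern is rarely this clean, which is why the transcript treats EFA as a confirmation step rather than a formality.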

In short, building a new questionnaire for an unscaled concept is less about inventing items and more about disciplined conceptualization: define the construct for the study, extract characteristics from literature, validate and refine them with practitioners and academics, draft items (and possible factors), then test the structure empirically. This sequence turns an abstract idea into a measurable scale that can be evaluated and improved.

Cornell Notes

A scale can be developed for a concept that lacks an existing measurement tool by starting with definition and operationalization, then converting conceptual characteristics into questionnaire items. First, researchers define the target concept in the context of their study (e.g., “community commitment”) by drawing on related definitions in the literature and refining that definition through input from both practitioners/community leaders and academic experts. Next, they identify key characteristics using keywords/themes from relevant research and expert interviews or feedback, then draft statements that reflect those characteristics. Items can be organized as a unidimensional scale (one factor) or grouped into multiple factors (dimensions) if the concept appears to split into components. Finally, exploratory factor analysis is used later to test whether the factor structure fits the collected data.

How does operationalizing a concept like “community commitment” differ from simply finding an existing definition?

Operationalization means defining what the concept will mean specifically in the study—what it includes and how it will be measured. Even if “commitment” has established definitions, “community commitment” may not. The approach is to review how commitment is defined in existing literature, draft a study-specific definition, and then validate or refine it by asking experts (practitioners/community leaders with project experience and academic researchers). This ensures the operational definition matches how the concept is understood in practice and research.

Where do the key characteristics used to write questionnaire items come from?

Key characteristics come from two sources. First, relevant research provides keywords and themes about commitment in community/engagement contexts. Second, experts are asked what commitment means to them and how communities become committed or increase commitment. Combining literature-derived themes with expert perspectives yields the characteristic set that items should reflect.

What is the relationship between key characteristics and questionnaire statements?

Key characteristics are translated into concrete, measurable statements. For example, if a characteristic is belief in the project’s purpose, an item might be “I believe in the objectives of the project.” If another characteristic is identification with the project, an item might be “I identify with the project.” The transcript’s examples show multiple items that directly map onto the identified characteristics.

When should a scale be treated as unidimensional versus multidimensional?

A unidimensional scale is used when the concept is treated as one underlying factor, meaning all items measure the same construct without assumed sub-dimensions. A multidimensional approach is considered when the concept can be reasonably split into distinct components (e.g., "continuance commitment" and "affective commitment"), with different item sets measuring different factors. Either way, the proposed structure must be tested with data later.
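If the multidimensional structure is adopted, each factor gets its own subscale score computed from its own item subset. A small sketch, using hypothetical item IDs and the two commitment dimensions from the example:

```python
# Hypothetical assignment of items to the two example factors
factors = {
    "continuance_commitment": ["item1", "item2", "item3"],
    "affective_commitment": ["item4", "item5", "item6"],
}

# One respondent's 5-point Likert answers (illustrative values)
responses = {"item1": 4, "item2": 5, "item3": 4,
             "item4": 2, "item5": 3, "item6": 2}

# Score each factor as the mean of its own items
subscale_scores = {
    factor: sum(responses[i] for i in item_ids) / len(item_ids)
    for factor, item_ids in factors.items()
}
print(subscale_scores)
```

Reporting per-factor scores like this only makes sense if the grouping survives the factor analysis; otherwise the items collapse back into a single composite.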

Why does exploratory factor analysis appear later in the process?

Item writing and factor grouping are initially based on conceptual work—literature themes and expert input. Exploratory factor analysis comes after data collection to check whether the items actually cluster into the proposed factors (or remain one factor). This step validates whether the conceptual factor structure matches empirical patterns.

Review Questions

  1. What steps are required to operationalize a concept that has no existing scale, and why is expert input included?
  2. How do literature-derived keywords and expert feedback jointly determine the questionnaire items?
  3. What evidence would justify moving from a unidimensional scale to a multidimensional one, and what statistical method is suggested to test it?

Key Points

  1. Start scale development by selecting the target concept and defining it operationally for the specific study context.

  2. Draft the concept definition using related literature, then refine it through input from both practitioners/community leaders and academic experts.

  3. Identify key characteristics by combining literature keywords/themes with expert judgments about what commitment means and how it grows.

  4. Convert each key characteristic into clear questionnaire statements that directly reflect the construct.

  5. Use a unidimensional structure when the concept is treated as one factor; group items into factors only when distinct dimensions are plausible.

  6. Validate any proposed factor structure after data collection using exploratory factor analysis.

Highlights

When no scale exists, the work begins with operationalizing the concept—turning an abstract idea into a study-specific definition.
Expert input is treated as a gatekeeper: both practitioners and academics help validate what the construct should include.
Questionnaire items are built from key characteristics extracted from literature themes and expert feedback, not from guesswork.
A concept can be measured as unidimensional or split into factors, but the factor structure must be confirmed with exploratory factor analysis later.

Topics

Mentioned

  • EFA (exploratory factor analysis)