How to Develop a Simple Questionnaire/Scale to Measure a Concept that Doesn't have a Scale
Based on the Research With Fawad video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
When a concept lacks an existing questionnaire or scale, the practical path is to build one from scratch by (1) defining the concept for the specific study and (2) deriving measurable characteristics from both literature and expert input. The core idea is that scale development starts with conceptual work: first clarify what the concept means in this context, then translate that meaning into questionnaire items that can later be tested statistically.
The process begins by selecting the target concept to measure—community commitment is used as an example. Next comes operationalization: the concept must be defined in a way that fits the study’s goals. Even if related constructs like “commitment” have definitions, “community commitment” may not. In that case, researchers should review how commitment is defined in existing scholarship and use those descriptions to draft a study-specific definition. But that draft should not be created in isolation. Expert review is treated as essential: practitioners and community leaders who have worked on projects, alongside academic experts who have studied project-related phenomena, should weigh in on whether the proposed definition captures the concept as they understand it.
After the definition is shaped, the next task is to identify key characteristics—what “community commitment” consists of in observable terms. Two sources feed this step. Relevant research helps generate keywords and characteristic themes tied to commitment in community or engagement contexts. Then experts are asked directly what commitment means to them and how communities become committed or can increase their commitment. The output of these two streams is a set of key characteristics that can be turned into questionnaire statements.
Once key characteristics are identified, researchers draft items that directly reflect those characteristics. The transcript illustrates this with example statements such as believing in the project’s objectives, identifying with the project, wanting the project to succeed, and viewing project failure as the community’s failure. These items can form a unidimensional scale when the concept is treated as a single factor—no sub-dimensions are assumed, and all items measure the same underlying construct.
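To make the item-to-scale step concrete, here is a minimal Python sketch of a unidimensional scale. The item wordings are paraphrased from the examples above; the Likert ratings and the mean-scoring rule are illustrative assumptions, not part of the video:

```python
# Hypothetical item wording for a unidimensional "community commitment" scale.
items = [
    "I believe in the project's objectives.",
    "I identify with the project.",
    "I want the project to succeed.",
    "If the project fails, I feel the community has failed.",
]

# One respondent's Likert ratings (1 = strongly disagree ... 5 = strongly agree).
responses = [4, 5, 5, 3]

# With a single factor, a common composite score is the mean of all items.
scale_score = sum(responses) / len(responses)
print(scale_score)  # 4.25
```

In practice the composite would be computed per respondent across the full sample, and every item wording would first pass through the expert-review step described earlier.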
The framework also allows for a multidimensional structure. If theory or preliminary reasoning suggests that the concept splits into distinct components, items can be grouped into factors. The example imagines dividing commitment into "continuance commitment" and "affective commitment," where each factor is measured by its own subset of items, and both factors together represent the overall construct. Importantly, the transcript emphasizes that factor structure must ultimately be validated with data; exploratory factor analysis is flagged as the later step to confirm whether the proposed grouping holds.
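The later validation step can be sketched with scikit-learn's `FactorAnalysis` on simulated responses. The two latent factors, the loading values, and the sample size below are made-up assumptions, used only to show how a two-factor grouping would surface in the estimated loadings:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(seed=42)
n_respondents = 200

# Two hypothetical latent factors driving the simulated answers.
continuance = rng.normal(size=(n_respondents, 1))
affective = rng.normal(size=(n_respondents, 1))

# Six simulated Likert-style items: items 0-2 load on the first factor,
# items 3-5 on the second, plus measurement noise.
loadings = np.array([1.0, 0.9, 0.8])
X = np.hstack([continuance * loadings, affective * loadings])
X += rng.normal(scale=0.4, size=X.shape)

# Exploratory factor analysis with varimax rotation (available in
# scikit-learn >= 0.24) to check whether the two-factor grouping holds.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)
print(np.round(fa.components_, 2))  # rows = factors, columns = items
```

Items that load strongly on the same factor would be grouped under one dimension; items that load weakly on every factor are candidates for revision or removal.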
In short, building a new questionnaire for an unscaled concept is less about inventing items and more about disciplined conceptualization: define the construct for the study, extract characteristics from literature, validate and refine them with practitioners and academics, draft items (and possible factors), then test the structure empirically. This sequence turns an abstract idea into a measurable scale that can be evaluated and improved.
Cornell Notes
A scale can be developed for a concept that lacks an existing measurement tool by starting with definition and operationalization, then converting conceptual characteristics into questionnaire items. First, researchers define the target concept in the context of their study (e.g., “community commitment”) by drawing on related definitions in the literature and refining that definition through input from both practitioners/community leaders and academic experts. Next, they identify key characteristics using keywords/themes from relevant research and expert interviews or feedback, then draft statements that reflect those characteristics. Items can be organized as a unidimensional scale (one factor) or grouped into multiple factors (dimensions) if the concept appears to split into components. Finally, exploratory factor analysis is used later to test whether the factor structure fits the collected data.
How does operationalizing a concept like “community commitment” differ from simply finding an existing definition?
Where do the key characteristics used to write questionnaire items come from?
What is the relationship between key characteristics and questionnaire statements?
When should a scale be treated as unidimensional versus multidimensional?
Why does exploratory factor analysis appear later in the process?
Review Questions
- What steps are required to operationalize a concept that has no existing scale, and why is expert input included?
- How do literature-derived keywords and expert feedback jointly determine the questionnaire items?
- What evidence would justify moving from a unidimensional scale to a multidimensional one, and what statistical method is suggested to test it?
Key Points
1. Start scale development by selecting the target concept and defining it operationally for the specific study context.
2. Draft the concept definition using related literature, then refine it through input from both practitioners/community leaders and academic experts.
3. Identify key characteristics by combining literature keywords/themes with expert judgments about what commitment means and how it grows.
4. Convert each key characteristic into clear questionnaire statements that directly reflect the construct.
5. Use a unidimensional structure when the concept is treated as one factor; group items into factors only when distinct dimensions are plausible.
6. Validate any proposed factor structure after data collection using exploratory factor analysis.