17. SEMinR Series | Evaluating Formative Model | Step 1: Redundancy Analysis & Step 2: Collinearity
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Evaluating a formative measurement model hinges on two checks that must be planned before data collection: redundancy analysis for convergent validity and variance inflation factors (VIF) for collinearity. Convergent validity in formative models is not assessed through the usual reflective logic. Instead, it measures how strongly a formatively specified construct correlates with an alternative, reflectively measured variable representing the same concept, typically a single "global item". Researchers must include this global item in the questionnaire at the design stage; if it is omitted, the convergent validity test cannot be performed later because the alternative measure is missing from the data.
The redundancy analysis procedure treats each formative construct separately. For a construct like "Vision" measured formatively by multiple indicators, the questionnaire also includes a single global item capturing the essence of Vision (e.g., "I believe in the vision of the organization"). After estimating a simple model linking the formative construct to that global item, the resulting path coefficient is used to judge convergent validity. The commonly cited benchmark is a path coefficient of at least 0.708 between the formative construct and the global single item; since 0.708² ≈ 0.50, this threshold means the construct explains at least 50% of the alternative measure's variance (in practice the value is often rounded down to 0.70 as the minimum). In the worked example, the path coefficient from Vision to Vision_Global is 0.818, comfortably clearing the recommended threshold. Passing this test supports the convergent validity of the formative construct.
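The underlying logic can be sketched outside SEMinR as well. The snippet below is a minimal Python illustration with synthetic data: an equal-weight composite stands in for the PLS-estimated construct score (in the real workflow the path coefficient comes from the estimated PLS model), and the indicator and item names are invented.

```python
import numpy as np

def redundancy_check(formative_items, global_item, threshold=0.708):
    """Correlate an equal-weight composite of formative indicators with a
    reflective global single item -- a rough stand-in for the path
    coefficient a PLS model would estimate in redundancy analysis."""
    composite = np.asarray(formative_items).mean(axis=1)  # equal-weight proxy score
    r = np.corrcoef(composite, np.asarray(global_item))[0, 1]
    return r, r >= threshold

# Synthetic data: three "Vision" indicators driven by one latent score,
# plus a global single item driven by the same score.
rng = np.random.default_rng(7)
latent = rng.normal(size=500)
items = np.column_stack(
    [latent + rng.normal(scale=0.5, size=500) for _ in range(3)]
)
vision_global = latent + rng.normal(scale=0.4, size=500)

r, passes = redundancy_check(items, vision_global)
print(f"correlation = {r:.3f}, convergent validity supported: {passes}")
```

Because both the composite and the global item are driven by the same latent score, the correlation here lands well above 0.708; with real data a failure at this step would call the formative operationalization into question.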
The second step, collinearity, addresses a risk specific to formative models: indicator weights become unstable when formative indicators are highly correlated. High correlation inflates the standard errors of indicator weights and increases the chance of Type II errors (false negatives, where a genuinely relevant indicator appears non-significant). The standard diagnostic is the variance inflation factor (VIF). VIF values of 5 or above signal collinearity problems, prompting corrective actions such as eliminating or merging indicators, or restructuring the measurement model (for example, using higher-order constructs).
In the practical workflow, results are stored in an R summary object (the transcript references inspecting a "summary_simple" object and then drilling into its validity section, which contains the vif_items table). While VIF is also computed for reflective indicators, those values aren't interpreted here because reflective indicators are expected to correlate; the focus stays on the VIF values of the formative indicators. In the example, all formative-indicator VIF values fall below 5, so collinearity is not a concern.
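This filtering step, keeping only the formative rows of a combined VIF table, can be sketched as follows; all indicator names, construct names, and VIF values below are invented stand-ins for what a PLS summary object might report.

```python
import pandas as pd

# Invented stand-in for a combined VIF table covering all indicators,
# reflective and formative alike.
vif_items = pd.DataFrame({
    "indicator": ["vis1", "vis2", "vis3", "sat1", "sat2"],
    "construct": ["Vision", "Vision", "Vision", "Satisfaction", "Satisfaction"],
    "mode":      ["formative", "formative", "formative", "reflective", "reflective"],
    "vif":       [1.8, 2.1, 1.6, 3.9, 3.9],
})

# Interpret only the formative rows: reflective indicators are expected
# to correlate, so their VIFs are not judged against the cutoff.
formative = vif_items[vif_items["mode"] == "formative"]
collinearity_ok = bool((formative["vif"] < 5).all())
print(formative[["indicator", "vif"]])
print(f"all formative VIFs < 5: {collinearity_ok}")
```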
With redundancy analysis confirming convergent validity and VIF diagnostics showing no collinearity issues, the process is ready to move to the next evaluation step: indicator weights.
Cornell Notes
Formative measurement model evaluation requires two early diagnostics: redundancy analysis for convergent validity and VIF checks for collinearity. Convergent validity is assessed by correlating each formative construct with an alternative, reflectively measured "global single item" that captures the construct's essence; this global item must be included in the questionnaire during study design. After estimating a simple model from the formative construct to the global item, a path coefficient of at least 0.708 (often rounded to 0.70) supports convergent validity (the example reports 0.818 for Vision). Collinearity is then checked using variance inflation factors for formative indicators; VIF values of 5 or above indicate problems. In the example, all formative-indicator VIF values are below 5, so collinearity is not an issue.
Why can’t convergent validity be assessed for formative constructs after data collection?
What exactly is a “global item,” and how is it used in redundancy analysis?
What threshold is used to judge convergent validity in formative models?
How does collinearity create problems in formative measurement models?
When inspecting VIF values, why focus on formative indicators rather than reflective ones?
Review Questions
- What information must be added to a questionnaire upfront to enable redundancy analysis for formative convergent validity?
- In a redundancy analysis, what does a path coefficient like 0.818 between a formative construct and its global item indicate?
- What VIF threshold signals collinearity concerns for formative indicators, and what corrective actions are suggested?
Key Points
1. Convergent validity for formative constructs is assessed via redundancy analysis against an alternative global single item, not through reflective-style validity metrics.
2. A global single item must be included in the questionnaire during research design; it can't be added after data collection if convergent validity is to be tested.
3. Redundancy analysis is run separately for each formative construct by estimating a model linking the formative construct to its corresponding global item.
4. A redundancy-analysis path coefficient of at least 0.708 (often rounded to 0.70) supports convergent validity, since it implies roughly 50% of the global item's variance is explained.
5. Collinearity in formative models is diagnosed using VIF; values of 5 or above indicate collinearity problems.
6. If formative-indicator VIF values are all below 5, collinearity is not a concern and the workflow can proceed to the next step (indicator weights).