19. SEMinR Lecture Series - When to Delete or Not to Delete a Formative Indicator
Based on the Research With Fawad video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Do not delete formative indicators based on non-significant weights alone; protect content validity because formative indicators are non-interchangeable.
Briefing
Formative measurement models require a careful, content-driven deletion decision: non-significant indicator weights alone are not enough to justify removing an indicator, because doing so can break the construct’s domain coverage. The core rule is to treat formative indicators as non-interchangeable “building blocks” defined during conceptualization—so deletion must be justified by a combination of statistical evidence and relevance checks, while protecting content validity.
After checking collinearity and examining indicator weights and loadings, the decision process starts with indicator weight significance. If an indicator's weight is statistically significant, it can usually be retained without further concern. If the weight is insignificant, the next gate is the indicator's loading: when the loading is above 0.5, the indicator is generally kept. When the loading falls below 0.5, the indicator may be considered for deletion, but only if removing it does not harm content validity.
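The layered decision sequence above can be sketched as a small function. This is an illustrative Python sketch of the lecture's decision logic, not part of seminr or any SEM library; the function name and return labels are my own.

```python
def retain_formative_indicator(weight_significant: bool,
                               loading: float,
                               loading_significant: bool,
                               harms_content_validity: bool) -> str:
    """Layered retention decision for a formative indicator.

    Thresholds follow the lecture's rules of thumb; this is an
    illustrative sketch, not a library implementation.
    """
    if weight_significant:
        return "retain"  # significant weight: keep without further checks
    if loading >= 0.5:
        return "retain"  # insignificant weight, but loading clears 0.5
    if harms_content_validity:
        # deletion would leave the construct's domain incomplete
        return "retain (protect content validity)"
    if not loading_significant:
        return "consider deletion"  # loading < 0.5 and not significant
    return "retain (loading significant)"
```

Note that content validity acts as a veto at the end of the chain: even a statistically weak indicator survives if removing it would leave the conceptual domain uncovered.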
This is where the lecture draws a sharp distinction between formative and reflective measurement. In formative models, indicators are not interchangeable; removing one can make the construct incomplete. The example uses a CSR construct defined as economic, legal, ethical, and philanthropic dimensions. Dropping the “economic” dimension would sacrifice content validity even if its statistical weight is not significant, because the conceptual domain would no longer be fully represented. In short: formative indicators with non-significant weights should not be automatically removed.
Beyond weight significance and loadings, the lecture adds a relevance check based on bootstrapped indicator weight values. Indicator weights are standardized between −1 and +1, and relevance is interpreted by magnitude: values closer to +1 or −1 indicate a stronger positive or negative relationship with the construct, while values closer to 0 indicate a weaker relationship. The closer the relevance is to one (in absolute terms), the more defensible the indicator’s retention.
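The magnitude-based reading of relevance can be expressed as a tiny classifier. The band boundaries (0.2 and 0.5) below are assumptions chosen for illustration; the lecture only states that values closer to ±1 are stronger and values closer to 0 are weaker.

```python
def weight_relevance(weight: float) -> str:
    """Classify a standardized indicator weight in [-1, 1] by magnitude.

    Band cut-offs (0.2, 0.5) are illustrative assumptions, not
    thresholds stated in the lecture.
    """
    magnitude = abs(weight)  # sign only gives direction, not strength
    if magnitude >= 0.5:
        return "strong"
    if magnitude >= 0.2:
        return "moderate"
    return "weak"
```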
A consolidated “rule of thumb” ties these diagnostics together. For redundancy/convergent validity, the path coefficient from the formative construct to an alternative measure of the same concept should exceed 0.708. Collinearity is evaluated with VIF thresholds: values above 5 indicate critical collinearity, values between 3 and 5 are usually tolerable, and values below 3 are unproblematic. For formative indicators, weights should ideally be significant, relevance should be closer to ±1 for a stronger contribution, and loadings should be at least 0.5 and statistically significant. If a loading is below 0.5 and not statistically significant, deletion becomes more defensible.
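The collinearity check can be made concrete by computing variance inflation factors directly and mapping them to the lecture's bands. A minimal sketch, assuming NumPy is available; the function names are my own and the VIF formula is the standard 1/(1 − R²) from regressing each indicator on the others.

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of X (cases x indicators)."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other indicators
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / (1.0 - r2)                   # VIF_j = 1 / (1 - R^2_j)
    return out

def vif_verdict(v: float) -> str:
    """Map a VIF value to the lecture's rule-of-thumb bands."""
    if v > 5:
        return "critical"
    if v >= 3:
        return "usually tolerable"
    return "unproblematic"
```

Running `vif` on a set of nearly redundant indicators quickly shows values well above 5, which is exactly the situation in which weight estimates become unstable.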
Finally, the lecture outlines an implementation workflow (with code available on the instructor’s website): load data, specify the measurement model (including formative weights for formative constructs), estimate the structural model, then bootstrap to obtain weights/loadings and inspect collinearity, significance, and relevance. The takeaway is procedural and practical: validate formative models using a layered set of thresholds, but let conceptual domain coverage—content validity—override simplistic “delete if non-significant” rules.
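The lecture's actual workflow runs in R with seminr (code on the instructor's website). As a language-neutral illustration of what the bootstrap step does, here is a minimal percentile-bootstrap sketch in Python for a single standardized weight, using the bivariate case where the standardized slope equals the correlation. The function name and the 2,000-resample default are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def bootstrap_weight_ci(x, y, n_boot=2000, seed=1):
    """Percentile bootstrap CI for a standardized bivariate weight.

    Toy stand-in for the seminr bootstrap step: resample cases with
    replacement, re-estimate the weight, and read significance off
    the 95% interval (significant if it excludes zero).
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    weights = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample cases with replacement
        xb, yb = x[idx], y[idx]
        # for a single predictor, the standardized slope is the correlation
        weights[b] = np.corrcoef(xb, yb)[0, 1]
    lo, hi = np.percentile(weights, [2.5, 97.5])
    return lo, hi, not (lo <= 0.0 <= hi)   # True => weight "significant"
```

The same percentile logic underlies the significance column reported after bootstrapping a full PLS model; here it is reduced to one weight for clarity.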
Cornell Notes
Formative indicator deletion decisions must balance statistics with content validity. Non-significant indicator weights should not automatically trigger removal, because formative indicators are non-interchangeable and collectively define the construct's domain. The decision sequence starts with indicator weight significance; if insignificant, check loadings (retain when the loading > 0.5; consider deletion when the loading < 0.5, and only if content validity is not harmed). Then use bootstrapped relevance (weights standardized between −1 and +1) to judge whether an indicator contributes meaningfully: values near ±1 indicate stronger relationships than values near 0. Collinearity and convergent/redundancy checks (e.g., redundancy path coefficient > 0.708; VIF above 5 signals critical collinearity) provide additional guardrails.
Why can’t formative indicators be deleted just because their weights are non-significant?
What is the recommended decision sequence when an indicator weight is insignificant in a formative model?
How is “relevance” interpreted for formative indicators?
What thresholds are used as rule-of-thumb criteria for formative model assessment?
How does bootstrapping fit into validating a formative measurement model?
Review Questions
- In a formative model, what specific content-validity risk arises when an indicator is removed solely due to non-significant weight results?
- If an indicator’s weight is insignificant but its loading is 0.6, what retention decision does the lecture recommend and why?
- Which collinearity threshold is treated as problematic, and how does that influence whether deletion decisions are considered reliable?
Key Points
1. Do not delete formative indicators based on non-significant weights alone; protect content validity because formative indicators are non-interchangeable.
2. Use a layered decision process: check indicator weight significance first, then indicator loadings if weights are insignificant.
3. Retain indicators when loadings exceed 0.5; consider deletion when loadings are below 0.5 only if the construct's conceptual domain remains fully covered.
4. Evaluate relevance from bootstrapped indicator weights standardized between −1 and +1; values near ±1 indicate stronger contribution than values near 0.
5. Apply collinearity diagnostics to avoid unstable estimates; treat VIF values above 5 as a serious issue.
6. Use redundancy/convergent validity guidance (path coefficient > 0.708) as an additional guardrail when assessing overall measurement quality.
7. Implement the workflow by specifying formative weights in the measurement model and using bootstrapping to inspect weights, loadings, and relevance.