
19. SEMinR Lecture Series - When to Delete or Not to Delete a Formative Indicator

Research With Fawad · 4 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Do not delete formative indicators based on non-significant weights alone; protect content validity because formative indicators are non-interchangeable.

Briefing

Formative measurement models require a careful, content-driven deletion decision: non-significant indicator weights alone are not enough to justify removing an indicator, because doing so can break the construct’s domain coverage. The core rule is to treat formative indicators as non-interchangeable “building blocks” defined during conceptualization—so deletion must be justified by a combination of statistical evidence and relevance checks, while protecting content validity.

After checking collinearity and examining indicator weights/loadings, the decision process starts with indicator weight significance. If an indicator’s weight is statistically significant, it can usually be retained without further concern. If the weight is insignificant, the next gate is the indicator’s loading: when loadings are above 0.5, the indicator is generally kept. When loadings fall below 0.5, the indicator may be considered for deletion—but only if that deletion does not harm content validity.
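The decision gate above can be sketched as a small helper function. This is a minimal illustration of the lecture's retain/delete logic, not code from the lecture; the function name, signature, and return strings are hypothetical:

```python
def deletion_decision(weight_significant: bool,
                      loading: float,
                      harms_content_validity: bool) -> str:
    """Sketch of the retain/delete gate for one formative indicator.

    Order of checks mirrors the lecture: weight significance first,
    then the loading threshold, with content validity as the override.
    """
    if weight_significant:
        return "retain"                 # significant weight: keep without further concern
    if loading > 0.5:
        return "retain"                 # non-significant weight, but loading above 0.5
    if harms_content_validity:
        return "retain"                 # deletion would break the construct's domain coverage
    return "consider deletion"          # low loading, non-significant, domain still covered

# Example: non-significant weight and a low loading, but removing the
# indicator would leave the construct incomplete -> keep it anyway.
print(deletion_decision(False, 0.3, True))
```

Note that content validity sits last in the chain deliberately: it is only consulted once the statistical gates fail, and it can still veto deletion.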

This is where the lecture draws a sharp distinction between formative and reflective measurement. In formative models, indicators are not interchangeable; removing one can make the construct incomplete. The example uses a CSR construct defined as economic, legal, ethical, and philanthropic dimensions. Dropping the “economic” dimension would sacrifice content validity even if its statistical weight is not significant, because the conceptual domain would no longer be fully represented. In short: formative indicators with non-significant weights should not be automatically removed.

Beyond weight significance and loadings, the lecture adds a relevance check based on bootstrapped indicator weight values. Indicator weights are standardized between −1 and +1, and relevance is interpreted by magnitude: values closer to +1 or −1 indicate a stronger positive or negative relationship with the construct, while values closer to 0 indicate a weaker relationship. The closer the relevance is to one (in absolute terms), the more defensible the indicator’s retention.

A consolidated “rule of thumb” ties these diagnostics together. For redundancy/convergent validity, the path coefficient should exceed 0.708. Collinearity is evaluated with VIF thresholds: values above 5 are problematic, 3 to 5 is usually uncritical, and below 3 is generally acceptable. For formative indicators, weights should be statistically significant, relevance should be closer to ±1 for a stronger contribution, and loadings should be at least 0.5 and statistically significant. If a loading is below 0.5 and not statistically significant, deletion becomes more defensible.
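These rule-of-thumb bands can be expressed as two tiny helpers; this is an illustrative sketch (function names are hypothetical), using the conventional VIF bands that accompany the lecture's "above 5 problematic, 3 to 5 uncritical" guidance:

```python
def collinearity_flag(vif: float) -> str:
    """Classify a VIF value: above 5 is problematic, 3 to 5 is usually
    uncritical, and below 3 is generally acceptable."""
    if vif > 5:
        return "problematic"
    if vif >= 3:
        return "usually uncritical"
    return "acceptable"


def relevance(standardized_weight: float) -> float:
    """Relevance is read from the magnitude of a bootstrapped standardized
    indicator weight (range -1 to +1); values nearer 1 in absolute terms
    mean a stronger positive or negative contribution."""
    return abs(standardized_weight)


print(collinearity_flag(6.1))   # a VIF above 5 flags a collinearity issue
print(relevance(-0.82))         # strong negative relationship, high relevance
```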

Finally, the lecture outlines an implementation workflow (with code available on the instructor’s website): load data, specify the measurement model (including formative weights for formative constructs), estimate the structural model, then bootstrap to obtain weights/loadings and inspect collinearity, significance, and relevance. The takeaway is procedural and practical: validate formative models using a layered set of thresholds, but let conceptual domain coverage—content validity—override simplistic “delete if non-significant” rules.

Cornell Notes

Formative indicator deletion decisions must balance statistics with content validity. Non-significant indicator weights should not automatically trigger removal because formative indicators are non-interchangeable and collectively define the construct’s domain. The decision sequence starts with indicator weight significance; if insignificant, check loadings (retain when loading > 0.5, consider deletion when loading < 0.5 only if content validity is not harmed). Then use bootstrapped relevance (weights standardized between −1 and +1) to judge whether an indicator contributes meaningfully—values near ±1 indicate stronger relationships than values near 0. Collinearity and convergent/redundancy checks (e.g., path coefficient > 0.708; collinearity issues above 5) provide additional guardrails.

Why can’t formative indicators be deleted just because their weights are non-significant?

Formative indicators are defined to fully capture the construct’s domain, and they are not interchangeable. Removing an indicator can make the construct incomplete. The lecture’s CSR example defines CSR as economic, legal, ethical, and philanthropic dimensions; deleting “economic” would sacrifice content validity even if its weight is statistically insignificant.

What is the recommended decision sequence when an indicator weight is insignificant in a formative model?

First test indicator weight significance. If insignificant, assess indicator loadings. Loadings greater than 0.5 are generally treated as no issue (indicator retained). If loadings are below 0.5, the indicator may be considered for deletion—but only after checking whether deletion would compromise content validity.

How is “relevance” interpreted for formative indicators?

Relevance comes from bootstrapped indicator weight values standardized between −1 and +1. Relevance closer to +1 or −1 signals a strong positive or negative relationship, while relevance closer to 0 signals a weaker relationship. This helps decide whether an indicator’s contribution is substantively meaningful even when statistical results are mixed.

What thresholds are used as rule-of-thumb criteria for formative model assessment?

The lecture lists: convergent validity via redundancy analysis where the path coefficient should be greater than 0.708; collinearity where values greater than 5 are an issue and 3 to 5 is usually uncritical; indicator weights should be significant; relevance should be closer to one (stronger) rather than near zero (weaker); and loadings should be > 0.5 and statistically significant. If loading is < 0.5 and not statistically significant, deletion is more defensible.

How does bootstrapping fit into validating a formative measurement model?

Bootstrapping is used to obtain the sampling distribution of the formative indicator weights; its results, stored in a summary object, support inspection of weights, loadings, collinearity diagnostics, and the relevance values used in the deletion/retention logic.

Review Questions

  1. In a formative model, what specific content-validity risk arises when an indicator is removed solely due to non-significant weight results?
  2. If an indicator’s weight is insignificant but its loading is 0.6, what retention decision does the lecture recommend and why?
  3. Which collinearity threshold is treated as problematic, and how does that influence whether deletion decisions are considered reliable?

Key Points

  1. Do not delete formative indicators based on non-significant weights alone; protect content validity because formative indicators are non-interchangeable.
  2. Use a layered decision process: check indicator weight significance first, then indicator loadings if weights are insignificant.
  3. Retain indicators when loadings exceed 0.5; consider deletion when loadings are below 0.5 only if the construct’s conceptual domain remains fully covered.
  4. Evaluate relevance from bootstrapped indicator weights standardized between −1 and +1; values near ±1 indicate stronger contribution than values near 0.
  5. Apply collinearity diagnostics to avoid unstable estimates; treat collinearity above 5 as a serious issue.
  6. Use redundancy/convergent validity guidance (path coefficient > 0.708) as an additional guardrail when assessing overall measurement quality.
  7. Implement the workflow by specifying formative weights in the measurement model and using bootstrapping to inspect weights, loadings, and relevance.

Highlights

Formative indicators are non-interchangeable; deleting one can make a construct incomplete even when its statistical weight is not significant.
A practical deletion gate is: insignificant weight → check loading; keep when loading > 0.5, consider deletion when loading < 0.5 only if content validity survives.
Relevance is read from bootstrapped standardized weights (−1 to +1): closer to ±1 means a stronger relationship than values near 0.
The lecture pairs statistical thresholds (e.g., path coefficient > 0.708; collinearity > 5 problematic; loading > 0.5) with a content-validity override for formative models.

Topics

  • Formative Indicators
  • Indicator Deletion
  • Content Validity
  • Bootstrapping Relevance
  • Collinearity Thresholds