
18. SEMinR Lecture Series - Evaluating Formative Model | Step 3 | Indicator Weights

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Formative indicator weights are evaluated as the final step after convergent validity and collinearity checks.

Briefing

Formative measurement model evaluation reaches its final checkpoint by testing whether each indicator’s weight is statistically meaningful and whether the indicator truly contributes to forming the construct. In practice, that means running a bootstrap procedure to test the significance and relevance (size) of indicator weights, then using confidence intervals and indicator loadings to decide whether weak or insignificant weights should trigger indicator removal. The stakes are straightforward: formative models can’t rely on internal consistency alone, so indicator-level evidence has to guide retention or deletion.

The process starts after the collinearity and convergent validity checks are complete. For the indicator-weights step, significance testing depends on bootstrapping. The workflow calls SEMinR's bootstrap function on the estimated SEMinR model. A large number of bootstrap samples is recommended (10,000 for final reporting), though a smaller run (e.g., 1,000) can be used for preliminary estimation to save computation time. To speed things up, the procedure can detect and use the maximum number of cores available on the machine via a parallel-cores option. Setting a seed makes results reproducible, though the transcript suggests keeping the defaults unless strict repeatability is required.
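Assuming the standard SEMinR API, the bootstrap step might look like the following sketch; `pls_model` is a hypothetical name standing in for an already-estimated SEMinR model:

```r
library(seminr)

# `pls_model` is assumed to be an already-estimated SEMinR model,
# i.e., the output of estimate_pls().
boot_model <- bootstrap_model(
  seminr_model = pls_model,
  nboot = 10000,                    # 10,000 resamples for final reporting
  cores = parallel::detectCores(),  # use every available CPU core
  seed  = 123                       # optional: fixes results for reproducibility
)
```

For a quick preliminary run, dropping `nboot` to 1,000 trades precision for speed.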

After bootstrapping, the results are passed to a summary function along with an alpha level, and the output is stored in a summary object. The transcript uses the default alpha = 0.05 for two-tailed testing, the usual convention for indicator-weight significance. The resulting output reports indicator weights together with T values (and related statistics). A common rule of thumb is that, for a two-tailed 5% test, absolute T values above 1.96 indicate statistical significance. The transcript also notes the critical thresholds for other alpha levels (e.g., 1% and 10%), but the decision framework in this run centers on 1.96.
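Assuming the bootstrap output from SEMinR's bootstrap function is stored in an object called `boot_model` (a hypothetical name), the summary step might be sketched as:

```r
# Summarize the bootstrap output at alpha = 0.05 (two-tailed).
sum_boot <- summary(boot_model, alpha = 0.05)

# Bootstrapped indicator weights: original estimate, bootstrap mean and SD,
# T statistic, and the lower/upper confidence-interval bounds.
sum_boot$bootstrapped_weights

# Bootstrapped indicator loadings, used later to judge absolute contribution.
sum_boot$bootstrapped_loadings
```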

In the example output, some formative indicators show significant weights while others fall short of the T-value threshold. That outcome alone doesn’t automatically justify deleting indicators. Instead, confidence intervals offer a second lens: if a weight’s confidence interval does not include zero, the weight is treated as significant and the indicator can be retained; if the interval includes zero, the weight is not statistically significant at the chosen alpha and the indicator becomes a candidate for removal.

When weights are insignificant, the modeler must still check absolute contribution through indicator loadings. The transcript treats loadings of 0.5 or higher as evidence of meaningful absolute contribution. Importantly, even an indicator with a lower weight can still be retained if its loading is strong and its confidence interval excludes zero. In the worked example, indicator weights are mixed—some are insignificant—but the indicator loadings are reported as consistently above 0.5, and the confidence-interval logic supports keeping the indicators. The session ends with the takeaway that indicator retention in formative models depends on both statistical significance (via T values and confidence intervals) and practical contribution (via loadings), not on weight significance alone.
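The combined retention rule described above can be sketched as a small base-R helper; `retain_indicator()` is a hypothetical name for illustration, not a SEMinR function:

```r
# Hypothetical helper illustrating the combined decision rule:
# retain an indicator if its weight is significant (confidence interval
# excludes zero), or, failing that, if its loading shows meaningful
# absolute contribution (>= 0.5).
retain_indicator <- function(ci_lower, ci_upper, loading) {
  weight_significant <- ci_lower > 0 | ci_upper < 0  # CI excludes zero
  weight_significant | loading >= 0.5
}

retain_indicator(ci_lower =  0.05, ci_upper = 0.40, loading = 0.30)  # TRUE: weight CI excludes zero
retain_indicator(ci_lower = -0.10, ci_upper = 0.20, loading = 0.65)  # TRUE: loading >= 0.5
retain_indicator(ci_lower = -0.10, ci_upper = 0.20, loading = 0.30)  # FALSE: candidate for removal
```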

Cornell Notes

Formative model evaluation’s final step tests whether each indicator’s weight is statistically significant and whether the indicator meaningfully contributes to forming the construct. Significance is assessed through bootstrapping (recommended 10,000 samples for final results) and then summarizing bootstrap outputs at a chosen alpha level—here, alpha = 0.05 with two-tailed testing. Indicator weights are judged significant when T values exceed 1.96 and when bias-corrected confidence intervals exclude zero. If weights are insignificant, deletion isn’t automatic; indicator loadings are checked for absolute contribution, with loadings ≥ 0.5 treated as strong evidence to retain indicators. In the example, mixed weight significance still results in retaining all formative indicators because loadings and confidence-interval behavior support their contribution.

Why does formative indicator weight significance require bootstrapping rather than relying on standard parametric tests?

Because PLS-SEM makes no distributional assumptions about the data, analytic standard errors are unavailable, so indicator weight significance in formative models is assessed using a bootstrap procedure. The workflow runs SEMinR's bootstrap on the estimated model and then summarizes the bootstrap results at a chosen alpha level. The bootstrap output provides T values and confidence intervals for indicator weights, enabling two-tailed significance testing at the chosen alpha (e.g., 0.05).

How do T values translate into “significant” indicator weights at alpha = 0.05?

With two-tailed testing at alpha = 0.05, the critical threshold is |T| > 1.96. Weights with absolute T values above 1.96 are treated as statistically significant; those below are treated as insignificant at that significance level. The transcript also lists thresholds for other alphas (1% and 10%), but the example focuses on 5%.

What does it mean if an indicator weight’s confidence interval includes zero?

A confidence interval that includes zero indicates the indicator weight is not statistically significant at the given alpha (here, 5%). Under that criterion, the indicator becomes a candidate for removal. Conversely, if the confidence interval excludes zero, the weight is considered significant and the indicator can be retained.

If an indicator weight is insignificant, why might the indicator still be retained?

Insignificant weights don’t automatically imply poor measurement quality. The transcript emphasizes checking absolute contribution via indicator loadings. Loadings ≥ 0.5 suggest the indicator contributes meaningfully to forming the construct. An indicator can have a low or insignificant weight yet still be retained if its loading is strong and its confidence interval behavior supports contribution (e.g., no zero in the interval).

What practical decision rule emerges from combining weight significance and loadings?

Retention depends on both statistical evidence and contribution. The example retains indicators even when some weights are insignificant because the indicators’ loadings are reported as above 0.5 and the confidence-interval logic supports meaningful contribution. The session’s end message is that the decision to delete indicators should be based on this combined evidence, not on weight significance alone.

Review Questions

  1. What bootstrap settings (sample size, cores, alpha) are used to test formative indicator weights, and why does alpha matter?
  2. How do you decide significance using both T values and confidence intervals for indicator weights?
  3. When would you check indicator loadings even if an indicator weight is not significant, and what loading threshold is used as a benchmark?

Key Points

  1. Formative indicator weights are evaluated as the final step after convergent validity and collinearity checks.
  2. Significance testing for indicator weights relies on bootstrapping, with 10,000 bootstrap samples recommended for final reporting.
  3. Use alpha = 0.05 for two-tailed testing; at this level, T values above 1.96 indicate significant indicator weights.
  4. Confidence intervals guide significance decisions: intervals that exclude zero support retaining an indicator; intervals that include zero suggest non-significance.
  5. Insignificant weights do not automatically justify deletion; indicator loadings must be checked for absolute contribution.
  6. Indicator loadings of 0.5 or higher are treated as evidence of meaningful absolute contribution, supporting retention even when some weights are weak.
  7. A combined decision framework (T values and confidence intervals plus loadings) determines whether formative indicators stay in the measurement model.

Highlights

Indicator weight significance in formative models is determined through bootstrapping and two-tailed testing at alpha = 0.05, using T > 1.96 as the quick significance threshold.
Confidence intervals provide a direct rule: if the interval includes zero, the indicator weight is not significant at the chosen alpha.
Even when some indicator weights are insignificant, indicators can still be retained if indicator loadings show strong absolute contribution (≥ 0.5).
The session’s worked example retains all formative indicators despite mixed weight significance because loadings and confidence-interval behavior support their contribution.
