
16. SEMinR Lecture Series | Evaluating Formative Measurement Model | Introduction

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Formative constructs in SEMinR must be declared explicitly using “mode B” (via the weights argument in the composite specification) to ensure indicator weights are estimated correctly.

Briefing

Formative measurement models in SEMinR (PLS-SEM) require a different evaluation workflow than reflective ones, and the practical difference starts at model specification. When constructs are formative, they must be declared as such—typically by setting the measurement mode to “mode B” in the composite specification—so the PLS path model estimates formative indicator weights rather than treating indicators as interchangeable reflections of an underlying latent variable. In contrast, reflective constructs use “mode A” by default and are evaluated with reliability and validity checks.

The lecture frames the overall goal: evaluating formative measurement models using criteria such as convergent validity (for the relevant parts of the model), indicator collinearity, and the statistical significance and relevance of indicator weights. It then walks through an applied example with three formative constructs (among them Vision development and Reward), each measured by multiple items, which jointly predict a reflective construct called Collaborative culture. This mixed measurement setup matters because only the reflective construct receives the standard reflective measurement assessment (reliability and validity), while formative constructs are assessed through weight-related diagnostics.

On the implementation side, the workflow begins with loading the dataset in R and defining the measurement model. For formative constructs, the composite function is updated with a “weights” argument that specifies “mode B” (or explicitly sets “mode A” for reflective constructs and “mode B” for formative ones). The example emphasizes that omitting this step leaves constructs treated as reflective by default, which would be incorrect for formative constructs.
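As a sketch, a mixed measurement model of this kind might be declared as follows. The construct names, item prefixes, and item counts here are illustrative placeholders, not taken from the lecture's dataset:

```r
library(seminr)

# Formative constructs use weights = mode_B; reflective ones use mode_A,
# which is also the default when the weights argument is omitted.
measurement_model <- constructs(
  composite("VisionDev",     multi_items("VD", 1:4), weights = mode_B),
  composite("Reward",        multi_items("RW", 1:3), weights = mode_B),
  composite("CollabCulture", multi_items("CC", 1:5), weights = mode_A)
)
```

Leaving out `weights = mode_B` would silently fall back to mode A, which is exactly the mistake the lecture warns against.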

Next comes model estimation. The PLS path model is estimated by calling the appropriate SEMinR estimation function (using the specified measurement model and structural model). The structural model is defined through the relationships between constructs—here, the three formative predictors affecting the reflective outcome. After estimation, results are retrieved via a summary object (stored in a user-named variable), and the output can be explored with targeted calls (e.g., using dollar-sign accessors for specific result sections).
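A minimal estimation sketch, assuming the construct names from the measurement-model example above and a hypothetical data frame `survey_data` whose columns match the item names (the lecture's example has three formative predictors; only two are shown here):

```r
# Structural model: the formative predictors point to the reflective outcome.
structural_model <- relationships(
  paths(from = c("VisionDev", "Reward"), to = "CollabCulture")
)

# Estimate the PLS path model and store the summary in a named variable.
pls_model <- estimate_pls(
  data              = survey_data,       # hypothetical dataset
  measurement_model = measurement_model,
  structural_model  = structural_model
)
model_summary <- summary(pls_model)

# Individual result sections are then reachable with $ accessors, e.g.:
model_summary$paths
```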

A key quality-control step is checking missing data and algorithm behavior. The summary output includes descriptive statistics reporting the number of missing observations, as well as iteration diagnostics (e.g., a maximum iteration cap of 300). In the example, the run reports all observations as valid, shows no missing values, and converges well before the maximum iteration count, which signals that the estimation process is stable.
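In code, these checks amount to inspecting two sections of the summary object. The slot names below follow seminr's summary structure; verify them against your installed version:

```r
# Per-item descriptive statistics, including counts of missing observations
model_summary$descriptives$statistics

# Iteration report: how many iterations the PLS algorithm actually used,
# relative to the cap (300 by default)
model_summary$iterations
```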

Finally, the lecture distinguishes what to evaluate for reflective versus formative constructs. The reflective construct (Collaborative culture) is assessed using reliability and validity outputs, such as HTMT (the heterotrait–monotrait ratio of correlations) for discriminant validity and reliability metrics for internal consistency. The formative constructs (e.g., Vision development and Reward) are not subjected to reflective reliability/validity metrics; instead, their evaluation focuses on formative-specific criteria such as indicator collinearity and the significance and relevance of their estimated weights. The session closes by previewing the next step in the formative evaluation sequence: establishing convergent validity.
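This split maps onto different sections of the summary object. A hedged sketch, reusing the illustrative construct names from earlier (slot names as in recent seminr versions; assessing weight significance additionally requires a bootstrap run, not shown here):

```r
# Reflective construct (CollabCulture): reliability and discriminant validity
model_summary$reliability        # reliability metrics per construct
model_summary$validity$htmt      # HTMT ratios for discriminant validity

# Formative constructs (VisionDev, Reward): collinearity and weights
model_summary$validity$vif_items # indicator-level VIFs (collinearity check)
model_summary$weights            # estimated indicator weights
```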

Cornell Notes

The session lays out how to evaluate a mixed measurement model in SEMinR/PLS-SEM, where some constructs are formative and one is reflective. Formative constructs must be declared explicitly using “mode B” (via the weights argument in the composite specification), otherwise they default to reflective estimation. After estimating the PLS path model, the workflow checks missing data and iteration behavior, then retrieves results from the summary object for diagnostics. Reflective constructs are assessed with reliability and validity outputs such as HTMT and reliability metrics, while formative constructs are assessed using formative criteria—especially indicator collinearity and the significance/relevance of indicator weights. This distinction drives both the coding choices and the evaluation steps.

Why does SEMinR require an explicit “mode B” declaration for formative constructs?

Formative constructs are not treated as latent variables that “cause” their indicators. Instead, their indicators form the construct, so SEMinR must estimate indicator weights. In the composite specification, the weights argument controls the measurement mode: “mode B” signals formative measurement. If weights/mode are omitted, SEMinR defaults to reflective behavior (mode A), which would incorrectly apply reflective reliability/validity logic to a formative construct.

What is the practical workflow after specifying the measurement and structural models?

Once the measurement model (including which constructs are formative vs reflective) and the structural model (the paths among constructs) are defined, the model is estimated using the SEMinR PLS estimation function. Results are stored in a summary object (e.g., a variable created by the summary call). From there, specific result sections can be accessed (using dollar-sign accessors), including validity diagnostics and reliability outputs.

How should missing data and estimation stability be checked before interpreting measurement results?

The summary object nests descriptive statistics that report the number of missing observations, along with iteration diagnostics that show how many iterations were used against the maximum allowed (a typical cap is 300). If all observations are valid, no values are missing, and the algorithm converges well before the maximum iterations, the run is generally stable and ready for interpretation.

Which diagnostics apply to reflective constructs in a mixed model?

Reflective constructs are evaluated using reflective measurement criteria: reliability and validity. The session highlights HTMT (the heterotrait–monotrait ratio of correlations) for discriminant validity checks and reliability metrics for internal consistency. In the example, Collaborative culture is reflective, so these reflective diagnostics are used for it.

Which diagnostics apply to formative constructs instead of reflective reliability/validity?

Formative constructs are evaluated through formative-specific criteria. The lecture lists indicator collinearity, statistical significance, and relevance of indicator weights as key evaluation steps. As a result, reliability and HTMT-style discriminant validity checks are not applied to formative constructs in the same way they are for reflective ones.

Review Questions

  1. In SEMinR, what coding change distinguishes a formative construct from a reflective one in the composite specification?
  2. After estimating a PLS path model, which two diagnostic checks should be performed before interpreting measurement results?
  3. In a model with both formative and reflective constructs, which evaluation criteria apply to each type, and why does that split matter?

Key Points

  1. Formative constructs in SEMinR must be declared explicitly using “mode B” (via the weights argument in the composite specification) to ensure indicator weights are estimated correctly.
  2. Reflective constructs can rely on the default reflective mode (“mode A”) and are evaluated with reliability and validity diagnostics.
  3. A mixed measurement model requires different evaluation logic: reflective constructs use HTMT/reliability checks, while formative constructs use weight-focused criteria.
  4. Model estimation is followed by retrieving results from a summary object, where specific diagnostics can be accessed directly.
  5. Before interpreting results, check missing observations and iteration behavior (e.g., whether the algorithm converges well below the maximum iterations).
  6. Indicator collinearity and the statistical significance/relevance of formative indicator weights are central to formative measurement evaluation.
  7. When formative and reflective constructs coexist, reflective validity/reliability outputs should not be applied to formative constructs.

Highlights

The key implementation step for formative measurement is setting the composite’s weights/mode to “mode B”; otherwise SEMinR treats the construct as reflective by default.
In mixed models, reflective diagnostics (like HTMT and reliability) apply only to reflective constructs, while formative constructs are assessed via indicator weight relevance and collinearity.
Convergence and data quality checks—missing values and iteration counts—should be reviewed from the summary output before drawing conclusions.
The structural model is defined through relationships among constructs, with the formative predictors pointing to the reflective outcome in the example.

Topics

  • Formative Measurement
  • Reflective Measurement
  • SEMinR
  • PLS-SEM
  • Indicator Weights

Mentioned

  • PLS
  • SEM
  • HTMT