16. SEMinR Lecture Series | Evaluating Formative Measurement Model | Introduction
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Formative measurement models in SEMinR (PLS-SEM) require a different evaluation workflow than reflective ones, and the practical difference starts at model specification. When constructs are formative, they must be declared as such—typically by setting the measurement mode to “mode B” in the composite specification—so the PLS path model estimates formative indicator weights rather than treating indicators as interchangeable reflections of an underlying latent variable. In contrast, reflective constructs use “mode A” by default and are evaluated with reliability and validity checks.
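To make the distinction concrete, here is a minimal sketch of the two specifications in seminr. The construct names match the lecture's example, but the indicator prefixes and item counts are placeholders, not the actual dataset:

```r
library(seminr)

# Reflective construct: mode_A is seminr's default weighting scheme
reflective_spec <- composite("Collaborative culture", multi_items("cc_", 1:4))

# Formative construct: mode_B must be declared via the weights argument
formative_spec <- composite("Reward", multi_items("rew_", 1:3), weights = mode_B)
```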
The lecture frames the overall goal: evaluating formative measurement models against criteria such as convergent validity (assessed for the formative constructs), indicator collinearity, and the statistical significance and relevance of indicator weights. It then walks through an applied example with three formative constructs (among them Vision development and Reward), each measured by multiple items, which jointly predict a reflective construct called Collaborative culture. This mixed measurement setup matters because only the reflective construct receives the standard reflective measurement assessment (reliability and validity), while the formative constructs are assessed through weight-related diagnostics.
On the implementation side, the workflow begins with loading the dataset in R and defining the measurement model. For formative constructs, the composite specification gains a “weights” argument set to “mode B”; “mode A” can also be stated explicitly for reflective constructs, although it is already the default. The example emphasizes that omitting the weights argument leaves a construct treated as reflective by default, which would be incorrect for formative constructs.
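A sketch of what the full measurement model might look like for the lecture's constructs. The indicator prefixes (vis_, rew_, cc_) and item counts are assumptions, and only the two formative constructs named above are shown:

```r
library(seminr)

# Measurement model: formative constructs declared with weights = mode_B,
# the reflective outcome left at the default mode_A.
# Indicator names (vis_1 ..., rew_1 ..., cc_1 ...) are hypothetical.
measurement_model <- constructs(
  composite("Vision development",    multi_items("vis_", 1:4), weights = mode_B),
  composite("Reward",                multi_items("rew_", 1:3), weights = mode_B),
  composite("Collaborative culture", multi_items("cc_", 1:4))  # reflective by default
)
```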
Next comes model estimation. The PLS path model is estimated by calling the appropriate SEMinR estimation function (using the specified measurement model and structural model). The structural model is defined through the relationships between constructs—here, the three formative predictors affecting the reflective outcome. After estimation, results are retrieved via a summary object (stored in a user-named variable), and the output can be explored with targeted calls (e.g., using dollar-sign accessors for specific result sections).
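Under the same assumptions (the placeholder constructs above and a hypothetical dataset named culture_data), the estimation and summary steps might look like this:

```r
# Structural model: the formative predictors point to the reflective outcome
structural_model <- relationships(
  paths(from = c("Vision development", "Reward"),
        to   = "Collaborative culture")
)

# Estimate the PLS path model; culture_data is a placeholder dataset name
pls_model <- estimate_pls(
  data              = culture_data,
  measurement_model = measurement_model,
  structural_model  = structural_model
)

# Store results in a user-named summary object and inspect sections via $
model_summary <- summary(pls_model)
names(model_summary)   # lists the available result sections
model_summary$paths    # structural estimates, as one example of a targeted call
```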
A key quality-control step is checking missing data and algorithm behavior. The summary output includes descriptive statistics with the number of missing observations, as well as iteration diagnostics (the maximum number of iterations defaults to 300). In the example, the run reports all observations as valid, shows no missing values, and stops well before the maximum iteration count, all of which indicate a stable estimation process.
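In seminr's summary object these checks can be pulled out directly; the element names below follow recent package versions, so verify with names(model_summary) if yours differs:

```r
# Descriptive statistics per indicator, including missing-value counts
model_summary$descriptives$statistics

# Iterations used by the PLS algorithm (the default cap is 300);
# converging well below the cap is a sign of a stable estimation
model_summary$iterations
```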
Finally, the lecture distinguishes what to evaluate for reflective versus formative constructs. The reflective construct (Collaborative culture) is assessed using reliability and validity outputs such as HTMT (the heterotrait–monotrait ratio of correlations) for discriminant validity and reliability metrics for internal consistency. The formative constructs (e.g., Vision development and Reward) are not subjected to reflective reliability/validity metrics; instead, their evaluation focuses on formative-specific criteria such as indicator collinearity and the significance and relevance of the estimated indicator weights. The session closes by previewing the next step in the formative evaluation sequence: establishing convergent validity.
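Sticking with the same placeholder model, the split in diagnostics can be inspected like this (again, element names as in recent seminr versions):

```r
# Reflective construct (Collaborative culture): reliability and validity
model_summary$reliability         # alpha, rhoC, rhoA, AVE
model_summary$validity$htmt       # HTMT ratios for discriminant validity

# Formative constructs (e.g., Vision development, Reward): indicator collinearity
model_summary$validity$vif_items  # VIF per indicator; values near or above 5 flag problems

# Estimated indicator weights; their significance is tested by bootstrapping
model_summary$weights
```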
Cornell Notes
The session lays out how to evaluate a mixed measurement model in SEMinR/PLS-SEM, where some constructs are formative and one is reflective. Formative constructs must be declared explicitly using “mode B” (via the weights argument in the composite specification), otherwise they default to reflective estimation. After estimating the PLS path model, the workflow checks missing data and iteration behavior, then retrieves results from the summary object for diagnostics. Reflective constructs are assessed with reliability and validity outputs such as HTMT and reliability metrics, while formative constructs are assessed using formative criteria—especially indicator collinearity and the significance/relevance of indicator weights. This distinction drives both the coding choices and the evaluation steps.
Why does SEMinR require an explicit “mode B” declaration for formative constructs?
What is the practical workflow after specifying the measurement and structural models?
How should missing data and estimation stability be checked before interpreting measurement results?
Which diagnostics apply to reflective constructs in a mixed model?
Which diagnostics apply to formative constructs instead of reflective reliability/validity?
Review Questions
- In SEMinR, what coding change distinguishes a formative construct from a reflective one in the composite specification?
- After estimating a PLS path model, which two diagnostic checks should be performed before interpreting measurement results?
- In a model with both formative and reflective constructs, which evaluation criteria apply to each type, and why does that split matter?
Key Points
1. Formative constructs in SEMinR must be declared explicitly using “mode B” (via the weights argument in the composite specification) to ensure indicator weights are estimated correctly.
2. Reflective constructs can rely on the default reflective mode (“mode A”), and they are evaluated with reliability and validity diagnostics.
3. A mixed measurement model requires different evaluation logic: reflective constructs use HTMT/reliability checks, while formative constructs use weight-focused criteria.
4. Model estimation is followed by retrieving results from a summary object, where specific diagnostics can be accessed directly.
5. Before interpreting results, check missing observations and iteration behavior (e.g., whether the algorithm converges well below the maximum iterations).
6. Indicator collinearity and the statistical significance/relevance of formative indicator weights are central to formative measurement evaluation (see the bootstrap sketch after this list).
7. When formative and reflective constructs coexist, reflective validity/reliability outputs should not be applied to formative constructs.
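As a preview of the significance/relevance check mentioned in point 6, here is a bootstrap sketch under the same placeholder model; the resample count is illustrative, and larger counts are typical for reporting:

```r
# Bootstrap the estimated model to test the formative indicator weights
boot_model <- bootstrap_model(
  seminr_model = pls_model,
  nboot        = 1000,  # illustrative; more resamples are common in practice
  seed         = 123
)

boot_summary <- summary(boot_model)

# Bootstrapped weights: estimates, t-statistics, and confidence intervals;
# intervals that exclude zero indicate significant (and thus relevant) indicators
boot_summary$bootstrapped_weights
```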