
15. SEMinR Series. Reporting Measurement Model Results

Research With Fawad · 5 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Estimate the PLS-SEM model in SEMinR, then store results in the summary object to serve as the source for reporting.

Briefing

Once a PLS-SEM model is estimated in R with the measurement and structural models set up, results can be reported in a thesis or paper by pulling key measurement-model outputs from the SEMinR “summary” object and exporting them into clean tables. The core workflow is: estimate the model, store outputs in the summary object, then extract factor loadings, reliability, convergent validity, and discriminant validity results—typically by writing them to CSV files—so they can be pasted into Word with consistent formatting (e.g., rounding to three decimals).

Reporting starts with factor loadings for each indicator. The data are loaded and checked, the measurement model is specified with its latent variables, the structural model is specified with its structural relationships, the PLS algorithm is run, and the results are saved in a summary object. Calling the loadings component of that summary object displays the factor loadings directly in R. Instead of manually copying from the console, the workflow recommends exporting the loadings to a CSV file in the same directory as the R script, then opening that file and copying its contents into a paper table. For presentation, the transcript suggests formatting numeric cells to three decimal places and optionally removing leading zeros. In a thesis-style table, loadings can be organized under headings for each construct (e.g., “Vision,” “Development”), with reliability and validity columns placed alongside the loadings.
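The estimation steps above can be sketched in R with the seminr package. The construct names, indicator prefixes, and data file below are illustrative placeholders, not the video's actual dataset:

```r
# Sketch of the estimation workflow: load data, specify measurement
# and structural models, run the PLS algorithm, store the summary.
library(seminr)

survey_data <- read.csv("survey_data.csv")  # hypothetical data file

# Measurement model: latent variables with their indicators
measurements <- constructs(
  composite("Vision", multi_items("VIS", 1:4)),
  composite("Development", multi_items("DEV", 1:3))
)

# Structural model: paths between constructs
structure <- relationships(
  paths(from = "Vision", to = "Development")
)

# Estimate the model and store results in a summary object,
# which serves as the single source for all reporting tables
pls_model <- estimate_pls(data = survey_data,
                          measurement_model = measurements,
                          structural_model = structure)
model_summary <- summary(pls_model)
```

Everything reported later (loadings, reliability, validity) is then pulled from `model_summary` rather than re-run.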

Next comes reliability analysis. The same general approach applies: extract reliability results from the SEMinR outputs, export them to a CSV file, and format them to three decimals before pasting into the document. The reliability table includes the values needed for reliability reporting; the transcript specifically mentions AVE (average variance extracted) values as part of the reliability table layout. After reliability, convergent validity is reported using a convergent validity check, which indicates whether convergent validity was established. Those outputs are then copied into a dedicated table.

Finally, discriminant validity is handled through a discriminant validity function that generates multiple assessment methods. The transcript lists the specific discriminant validity checks to report: the Fornell–Larcker criterion, cross loadings, and HTMT. Each of these results can be exported to a CSV file in the same directory as the R script, then pasted into the thesis as part of a discriminant validity table. Taken together, the process turns SEMinR’s measurement-model outputs into publication-ready tables: factor loadings first, then reliability and convergent validity, and concluding with discriminant validity using multiple criteria.

Cornell Notes

After estimating a PLS-SEM model in R with SEMinR, measurement-model results are reported by extracting outputs from the stored summary object and exporting them to CSV for easy pasting into a thesis. The first table typically reports factor loadings for each indicator, formatted to three decimals and organized by construct headings. Next comes reliability reporting, again exported and formatted, followed by convergent validity results produced by a convergent validity function (including whether validity is established). The final measurement-model section reports discriminant validity using multiple checks—Fornell–Larcker Criterion, cross loadings, and HTMT—generated by a discriminant validity function and saved to CSV in the same directory as the R script. This workflow keeps reporting consistent and saves time versus manual copying from the console.

How do factor loadings get from SEMinR output to a publication-ready table?

Factor loadings are pulled from the SEMinR summary object after the model is estimated. The workflow is: estimate the PLS-SEM model, store results in the summary object, then call the loadings subobject to display the loadings. Instead of copying from the console, export the loadings to a CSV file in the same directory as the R script (using a write.csv-style step). Open the CSV, format numeric cells to three decimal places, copy the table into Word, and optionally remove leading zeros. Organize the loadings under construct headings (e.g., “Vision,” “Development”) and place reliability/validity columns alongside if desired.
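A minimal sketch of the export step, assuming the summary object from `summary(estimate_pls(...))` is stored as `model_summary`:

```r
# View factor loadings in the console, then write them to a CSV
# in the working directory (set it to the script's folder first,
# e.g. with setwd() or an RStudio project).
model_summary$loadings

write.csv(round(model_summary$loadings, 3),  # three-decimal rounding
          file = "factor_loadings.csv")
```

Rounding before export means the spreadsheet step only needs cosmetic formatting (removing leading zeros, adding construct headings).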

What is the recommended approach for reliability reporting once factor loadings are done?

Reliability results are extracted from the SEMinR outputs (from the summary object’s reliability-related results), then written to a CSV file in the same directory as the R script. The CSV is opened and numeric formatting is applied—commonly rounding to three decimals—before copying into the thesis. The reliability table layout includes the reliability-related values (the transcript notes AVE values as part of the reliability table structure) and is formatted to match the factor-loading table style.
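In seminr, the reliability results live in the summary object's reliability component. A sketch, assuming `model_summary` holds the summary object:

```r
# Reliability table: one row per construct, with Cronbach's alpha,
# composite reliability (rhoC), AVE, and rhoA columns.
model_summary$reliability

write.csv(round(model_summary$reliability, 3), file = "reliability.csv")
```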

How is convergent validity reported, and what should be captured in the table?

Convergent validity is generated using a convergent validity function. The output indicates whether convergent validity was established. That established/not-established result (along with any associated values produced by the function) is then copied into a dedicated convergent validity table in the thesis, using the same CSV-to-Word workflow and consistent numeric formatting.
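seminr reports AVE as a column of the reliability table rather than through a separately named convergent-validity function, so one way to capture the established/not-established result is to test the AVE column against the conventional 0.5 threshold. A sketch, assuming `model_summary` as above:

```r
# Convergent validity check: AVE above 0.5 for every construct
# is the common rule of thumb for "established".
ave <- model_summary$reliability[, "AVE"]
data.frame(AVE = round(ave, 3), established = ave > 0.5)
```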

Which discriminant validity tests should be reported for a reflective measurement model?

Discriminant validity is produced by a discriminant validity function and should include multiple assessment methods: the Fornell–Larcker Criterion, cross loadings, and HTMT. Each method’s results are exported to CSV and then placed into the discriminant validity section of the thesis, typically as a combined set of tables or a single table with clearly labeled components.
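In seminr, all three checks are available under the summary object's validity component. A sketch, assuming `model_summary` as above:

```r
model_summary$validity$fl_criteria     # Fornell-Larcker criterion
model_summary$validity$cross_loadings  # indicator cross loadings
model_summary$validity$htmt            # HTMT ratios

# Export each result for its thesis table, e.g. the HTMT matrix:
write.csv(round(model_summary$validity$htmt, 3), file = "htmt.csv")
```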

Why does exporting to CSV matter in the reporting workflow?

Exporting to CSV avoids slow, error-prone manual copying from the R console. CSV files can be opened in a spreadsheet, where numeric formatting (like three-decimal rounding) can be applied quickly and consistently. That makes it easier to paste clean, publication-ready tables into Word and to keep the formatting aligned across factor loadings, reliability, convergent validity, and discriminant validity.

Review Questions

  1. What sequence of measurement-model outputs should be reported first, second, and last (factor loadings, reliability/convergent validity, discriminant validity)?
  2. Which discriminant validity methods are explicitly named as reportable checks, and how are they obtained in SEMinR?
  3. How does the CSV export-and-format step improve the quality and speed of thesis tables compared with copying directly from R output?

Key Points

  1. Estimate the PLS-SEM model in SEMinR, then store results in the summary object to serve as the source for reporting.

  2. Extract factor loadings from the summary object, export them to CSV, and format to three decimals before pasting into Word tables.

  3. Report reliability by exporting reliability outputs to CSV and applying consistent numeric formatting (e.g., three-decimal rounding).

  4. Use the convergent validity function to generate convergent validity results and capture whether convergent validity was established.

  5. Report discriminant validity using multiple criteria—Fornell–Larcker criterion, cross loadings, and HTMT—generated by the discriminant validity function.

  6. Keep all exported CSV files in the same directory as the R script to streamline the thesis-writing workflow.

  7. Organize tables by construct headings and align loadings with reliability/validity columns for a clean, publication-ready layout.

Highlights

Factor loadings are best exported from SEMinR to CSV for clean formatting (three decimals) rather than copied directly from console output.
Reliability and convergent validity follow the same extract → export → format → paste pattern, keeping tables consistent across sections.
Discriminant validity reporting should include Fornell–Larcker Criterion, cross loadings, and HTMT, not just a single metric.

Mentioned

  • PLS-SEM
  • HTMT