What is R Square, F Square, and Q Square in PLS-SEM (SmartPLS)
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
R square, F square, and Q square are three PLS-SEM metrics that answer different questions about a structural model: how much variance the model explains, how much each predictor matters, and whether the model has predictive relevance. R square quantifies explained variance in an endogenous (dependent) construct. In plain terms, it tells you how much of the variation in the dependent variable can be accounted for by one or more independent variables.
In the example used to interpret R square, a dependent variable Y influenced by X1, X2, and X3 has an R square of 0.623. That translates to 62.3% of the variance in Y being explained by those predictors. In SmartPLS, interpretation is guided by the arrows pointing toward the endogenous construct(s): each endogenous construct gets its own R square value.
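To make the arithmetic concrete, here is a minimal sketch of computing R² for a Y predicted by X1, X2, and X3. Note the assumptions: SmartPLS estimates R² on latent variable scores within a PLS path model, whereas this stand-in uses plain least squares on simulated observed data, and all coefficients and noise levels are made up for illustration.

```python
import numpy as np

# Simulated data: three predictors and an outcome (coefficients are illustrative).
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))  # columns play the role of X1, X2, X3
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(scale=0.6, size=n)

def r_square(X, y):
    """R² = 1 - SS_residual / SS_total for a least-squares fit of y on X."""
    Xb = np.column_stack([np.ones(len(X)), X])       # add an intercept column
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    ss_res = np.sum((y - Xb @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2 = r_square(X, y)
print(f"R² = {r2:.3f}  ({r2 * 100:.1f}% of variance in y explained)")
```

Multiplying by 100 is all the "percentage" interpretation amounts to: an R² of 0.623 reads as 62.3% explained variance.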
Thresholds for judging whether an R square is “adequate” vary across researchers. Chin (1998) proposed benchmarks of 0.26 for substantial, 0.13 for moderate, and 0.02 for weak explanatory power. Later, Hair and colleagues suggested higher cutoffs for endogenous latent variables: 0.75 (substantial), 0.50 (moderate), and 0.25 (weak), framing 0.25 as a minimum for a meaningful level of explained variance. The key takeaway is that R square is not just a number; it is evaluated against accepted effect-size guidelines.
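Because the cutoffs differ by author, it can help to make the chosen convention explicit in analysis code. The helper below is purely illustrative (the function name and default thresholds are assumptions, here set to Hair-style cutoffs of 0.75/0.50/0.25), not SmartPLS output.

```python
def classify_r2(r2, thresholds=((0.75, "substantial"),
                                (0.50, "moderate"),
                                (0.25, "weak"))):
    """Label an R² value against explicit cutoffs (defaults are Hair-style)."""
    for cutoff, label in thresholds:
        if r2 >= cutoff:
            return label
    return "below minimum"

print(classify_r2(0.623))  # → moderate under these cutoffs
```

Swapping in Chin-style thresholds is a one-line change to the `thresholds` argument, which keeps the benchmark you are applying visible in the code rather than implicit.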
F square shifts the focus from overall explained variance to the contribution of specific exogenous variables. F square is computed from the change in R square when an exogenous construct is removed from the model: f² = (R² with the predictor − R² without it) / (1 − R² with the predictor). If removing a predictor barely changes R square, that predictor has little practical impact. Common effect-size guidance (Cohen, 1988) treats f² values above 0.02 as small, above 0.15 as medium, and above 0.35 as large. In practice, this helps identify which predictors are worth keeping and which may be redundant.
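The omit-and-refit logic behind f² can be sketched directly. As before, this is a simplified stand-in using ordinary least squares on simulated data (SmartPLS does this on latent variable scores); the coefficients are chosen so that the first predictor matters a lot and the third barely at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))  # three predictors, standardized by construction
# Illustrative effects: strong, small, and negligible contributions.
y = 0.6 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(scale=0.7, size=n)

def r_square(X, y):
    """R² of a least-squares fit of y on X (with intercept)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    ss_res = np.sum((y - Xb @ beta) ** 2)
    return 1.0 - ss_res / np.sum((y - y.mean()) ** 2)

r2_full = r_square(X, y)
f2 = []
for j in range(X.shape[1]):
    r2_reduced = r_square(np.delete(X, j, axis=1), y)    # refit without predictor j
    f2.append((r2_full - r2_reduced) / (1.0 - r2_full))  # Cohen's f²
    print(f"predictor {j + 1}: f² = {f2[j]:.3f}")
```

Running this shows a large f² for the strong predictor and a near-zero f² for the negligible one, mirroring the "barely changes R square" diagnosis in the text.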
Q square addresses a different concern: prediction. Q square (predictive relevance) is assessed via SmartPLS’s blindfolding procedure, which systematically omits data points and checks how well the model reconstructs them. A Q square greater than zero indicates predictive relevance for the endogenous construct. The transcript also notes that blindfolding requires selecting an omission distance D, typically between 5 and 12, with 7 used as a default in the example.
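Blindfolding can be illustrated in miniature: omit every D-th observation, predict the omitted values from the rest, and compare the prediction errors (SSE) against those of a trivial mean prediction (SSO), giving Q² = 1 − SSE/SSO. This sketch is a conceptual simplification of SmartPLS's construct-level procedure; the single-predictor setup and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 210
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.5, size=n)  # illustrative relationship

def q_square(x, y, D=7):
    """Conceptual blindfolding: omit every D-th point, predict it from the rest."""
    sse, sso = 0.0, 0.0
    for offset in range(D):
        held = np.arange(offset, len(y), D)              # omitted indices
        keep = np.setdiff1d(np.arange(len(y)), held)     # retained indices
        b, a = np.polyfit(x[keep], y[keep], 1)           # slope, intercept from rest
        pred = a + b * x[held]
        sse += np.sum((y[held] - pred) ** 2)             # model prediction error
        sso += np.sum((y[held] - y[keep].mean()) ** 2)   # trivial mean-prediction error
    return 1.0 - sse / sso

q2 = q_square(x, y, D=7)  # D = 7 mirrors the default used in the example
print(f"Q² = {q2:.3f}")
```

A Q² above zero means the model predicts omitted data better than simply guessing the mean, which is exactly the "predictive relevance" criterion described above.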
A worked SmartPLS example ties everything together using a loyalty outcome predicted by satisfaction, service quality, corporate social responsibility (CSR), and image. The model reports an R square of 0.719, meaning 71.9% of the variance in loyalty is explained by those predictors. F square results indicate that removing image has a substantial impact on R square, removing CSR or satisfaction has a noticeable impact, and removing service quality does not meaningfully change R square, suggesting service quality could be dropped to improve parsimony. Finally, blindfolding yields a Q square of 0.410, well above zero, indicating substantial predictive relevance. Together, these metrics provide a structured way to judge explanation strength, predictor importance, and predictive usefulness in PLS-SEM.
Cornell Notes
R square (R²) measures how much variance in an endogenous construct is explained by its exogenous predictors in PLS-SEM. F square (f²) measures the effect size of each exogenous construct by calculating how much R² changes when that predictor is removed. Q square (Q²) tests predictive relevance using SmartPLS’s blindfolding procedure; Q² greater than zero indicates the model has predictive relevance. In the SmartPLS example, loyalty has R² = 0.719 (71.9% explained variance), image shows a substantial f² impact, service quality shows no meaningful f² impact, and loyalty has Q² = 0.410 (substantial predictive relevance). These metrics help evaluate explanation, importance of predictors, and prediction quality.
What does R square (R²) mean in PLS-SEM, and how is it interpreted as a percentage?
How do researchers suggest judging whether an R² value is substantial, moderate, or weak?
What is F square (f²), and how is it computed conceptually in a structural model?
How does SmartPLS determine whether the effects behind F² are statistically significant?
What is Q square (Q²) and how does blindfolding establish predictive relevance?
Review Questions
- If an endogenous construct has R² = 0.719, what does that imply about the proportion of variance explained, and what information do you need in SmartPLS to identify which construct’s R² you’re reading?
- A predictor has f² = 0.12. Based on the thresholds mentioned, how would you classify its effect size, and what would you expect to happen to R² if it were removed?
- What conditions on Q² indicate predictive relevance, and what role does the blindfolding omission distance D play in computing Q²?
Key Points
1. R square (R²) quantifies explained variance in an endogenous construct by its exogenous predictors; convert to a percentage by multiplying by 100.
2. R² interpretation uses benchmark thresholds that vary by author, such as Chin’s 0.26 (substantial), 0.13 (moderate), and 0.02 (weak).
3. F square (f²) measures each predictor’s contribution by calculating the change in R² when that exogenous variable is removed.
4. Cohen’s effect-size guidance for f² is commonly used: >0.02 small, >0.15 medium, and >0.35 large.
5. f² significance is typically checked with bootstrapping in SmartPLS (e.g., complete bootstrapping with a chosen number of subsamples).
6. Q square (Q²) tests predictive relevance through blindfolding; Q² > 0 indicates predictive relevance.
7. Blindfolding requires selecting an omission distance D (often between 5 and 12); the example uses D = 7 and reports Q² = 0.410 for loyalty.