Does General Intelligence Deflate Standardized Effect Sizes of Cognitive Sex Differences?
The effect size d tries to quantify the difference between two distributions by reporting the difference between the distributions' means in standardized units—units that have been scaled to take into account how "spread out" the data is. This gives us a common reference scale for how big a given statistical difference is. Height is measured in meters, and "Agreeableness" in the Big Five personality model is an abstract construct that doesn't even have natural units, and yet there's still a meaningful sense in which we can say that the sex difference in height (d≈1.7) is "about three times larger" than the sex difference in Agreeableness (d≈0.5).3
Cohen's d is computed as the difference in group means, divided by the square root of the pooled variance. Thus, holding actual sex differences constant, more measurement error means more variance, which means smaller values of d. Here's some toy Python code illustrating this effect:4
```python
from math import sqrt
from statistics import mean, variance

from numpy.random import normal, seed

# seed the random number generator for reproducibility of figures in later
# comments; comment this out to run a new experiment
seed(1)  # https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number


def cohens_d(X, Y):
    return (
        (mean(X) - mean(Y)) /
        sqrt(
            (len(X)*variance(X) + len(Y)*variance(Y)) /
            (len(X) + len(Y))
        )
    )


def population_with_error(μ, ε, n):
    def trait():
        return normal(μ, 1)

    def measurement_error():
        return normal(0, ε)

    return [trait() + measurement_error() for _ in range(n)]


# trait differs by 1 standard deviation
true_f = population_with_error(1, 0, 10000)
true_m = population_with_error(0, 0, 10000)

# as above, but with 0.5 standard units measurement error
measured_f = population_with_error(1, 0.5, 10000)
measured_m = population_with_error(0, 0.5, 10000)

true_d = cohens_d(true_f, true_m)
print(true_d)  # 1.0069180384313943 — d≈1.0, as expected!

naïve_d = cohens_d(measured_f, measured_m)
print(naïve_d)  # 0.9012430127962895 — deflated!
```
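The size of the deflation here is predictable analytically, not just by simulation: if independent measurement error with variance ε² is added to a trait with unit variance, the reliability of the measurement is 1/(1+ε²), and the observed d shrinks by a factor of the square root of the reliability. A minimal sketch of that arithmetic (the classical correction for attenuation; numbers chosen to match the simulation above):

```python
from math import sqrt

# Trait variance is 1; measurement error adds variance ε² = 0.25,
# so observed-score variance is 1.25 and reliability is 1/1.25 = 0.8.
epsilon = 0.5
reliability = 1 / (1 + epsilon**2)

true_d = 1.0
observed_d = true_d * sqrt(reliability)  # expected attenuated d
print(observed_d)  # ≈ 0.894 — close to the simulated naïve_d

# Given an observed d and a known reliability, dividing by the square
# root of the reliability recovers the true d:
corrected_d = observed_d / sqrt(reliability)
print(corrected_d)  # 1.0
```

With ε = 0.5, the predicted attenuated value √0.8 ≈ 0.894 agrees with the simulated 0.901 up to sampling noise.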
But doesn't a similar argument hold for non-error sources of variance that are "orthogonal" to the group difference? Suppose performance on some particular cognitive task can be modeled as the sum of the general intelligence factor (zero or negligible sex difference), and a special ability factor that does show sex differences.5 Then, even with zero measurement error, d would underestimate the difference between women and men of the same general intelligence—
```python
def performance(μ_g, σ_g, s, n):
    def general_ability():
        return normal(μ_g, σ_g)

    def special_ability():
        return normal(s, 1)

    return [general_ability() + special_ability() for _ in range(n)]


# ♀ one standard deviation better than ♂ at the special factor
population_f = performance(0, 1, 1, 10000)
population_m = performance(0, 1, 0, 10000)

# ... but suppose we control/match for general intelligence
matched_f = performance(0, 0, 1, 10000)
matched_m = performance(0, 0, 0, 10000)

population_d = cohens_d(population_f, population_m)
print(population_d)  # 0.7413662423265308 — deflated!

matched_d = cohens_d(matched_f, matched_m)
print(matched_d)  # 1.0346898918452228 — as you would expect
```
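As with measurement error, the deflation here follows directly from the variance arithmetic: since the general and special factors are independent in this model, their variances add, so mixing in a unit-variance g factor doubles the composite's variance and shrinks d by a factor of √2. A quick self-contained check (using the same parameter values as the simulation above):

```python
from math import sqrt

special_gap = 1.0   # ♀−♂ difference on the special factor, in its SD units
var_special = 1.0
var_general = 1.0   # variance of g (no mean sex difference)

# Independent factors: variances add, so the composite SD is √2
composite_sd = sqrt(var_special + var_general)
population_d = special_gap / composite_sd
print(population_d)  # ≈ 0.707 — the deflated value

# Matching on g removes its variance from the denominator
matched_d = special_gap / sqrt(var_special)
print(matched_d)  # 1.0
```

The predicted 1/√2 ≈ 0.707 agrees with the simulated 0.741 up to sampling noise.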
- I was telling friend of the blog Tailcalled the other week that we really need to start a Marco Del Giudice Fan Club! ↩
- Marco Del Giudice, "Measuring Sex Differences and Similarities", §2.3.3, "Measurement Error and Other Artifacts" ↩
- Yanna J. Weisberg, Colin G. DeYoung, and Jacob B. Hirsh, "Gender Differences in Personality across the Ten Aspects of the Big Five", Table 2 ↩
- Special thanks to Tailcalled for catching a bug in the initially published version of this code. ↩
- Arthur Jensen, The g Factor, Chapter 13: "Although no evidence was found for sex differences in the mean level of g or in the variability of g, there is clear evidence of marked sex differences in group factors and in test specificity. Males, on average, excel on some factors; females on others. [...] But the best available evidence fails to show a sex difference in g." ↩