+We do not have a discipline of Actual Social Science. Possibly because we're not smart enough to do it, but perhaps more so because we're not smart enough to _want_ to do it. No one has an incentive to lie about the homotopy groups of an _n_-sphere. If you're asking questions about homotopy groups _at all_, you almost certainly care about getting _the right answer for the right reasons_. At most, you might be biased towards believing your own conjectures in the optimistic hope of achieving eternal algebraic-topology fame and glory, like Ruth Lawrence. But nothing about algebraic topology is going to be [_morally threatening_](/2019/Jan/interlude-xvi/) in a way that will leave you fearing that your ideological enemies have seized control of the publishing-houses to plant lies in the textbooks to fuck with your head, or sobbing that a malicious God created the universe as a place of evil.
+
+Okay, maybe that was a bad example; topology in general really is the kind of mindfuck that might be the design of an adversarial agency. (Remind me to tell you about the long line, which is like the line of real numbers, except much longer.)
+
+In any case, as soon as we start to ask questions _about humans_—and far more so _identifiable groups_ of humans—we end up entering the domain of _politics_.
+
+We really _shouldn't_. Everyone _should_ perceive a common interest in true beliefs—maps that reflect the territory, [simple theories](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor) that [predict our observations](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences)—because beliefs that make accurate predictions are _useful_ for making good decisions. That's what "beliefs" are _for_, evolutionarily speaking: my analogues in humanity's environment of evolutionary adaptedness were better off believing that (say) the berries from some bush were good to eat if and only if the berries were _actually_ good to eat. If my analogues unduly-optimistically thought the berries were good when they actually weren't, they'd get sick (and lose fitness), but if they unduly-pessimistically thought the berries were not good when they actually were, they'd miss out on valuable calories (and fitness).
+
+(Okay, this story is actually somewhat complicated by the fact that [evolution didn't "figure out" how to build brains](https://www.lesswrong.com/posts/gTNB9CQd5hnbkMxAG/protein-reinforcement-and-dna-consequentialism) that [keep track of probability and utility separately](https://plato.stanford.edu/entries/decision-theory/): my analogues in the environment of evolutionary adaptedness might also have been better off assuming that a rustling in the bush was a tiger, even if it usually wasn't a tiger, because failing to detect actual tigers was so much more costly than erroneously "detecting" an imaginary tiger. But let this pass.)
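The asymmetric-payoff logic can be made concrete with a toy expected-cost calculation (all the numbers here are invented for illustration):

```python
# Toy signal-detection sketch: with asymmetric costs, the policy that
# "erroneously" detects tigers beats the apparently-calibrated alternative.
# All numbers are made up for illustration.
p_tiger = 0.05        # fraction of rustles that are actually tigers
cost_eaten = 1000.0   # fitness cost of ignoring a real tiger
cost_flee = 1.0       # fitness cost of fleeing from mere wind

# Expected cost per rustle under each policy:
always_flee = cost_flee                # pay the small cost every time
never_flee = p_tiger * cost_eaten      # occasionally pay the huge cost

assert always_flee < never_flee        # jumpy ancestors outcompete calm ones
```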
+
+The problem is that, while any individual should always want true beliefs for _themselves_ in order to navigate the world, you might want _others_ to have false beliefs in order to trick them into _mis_-navigating the world in a way that benefits you. If I'm trying to sell you a used car, then—counterintuitively—I might not _want_ you to have accurate beliefs about the car, if that would reduce the sale price or result in no deal. If our analogues in the environment of evolutionary adaptedness regularly faced structurally similar situations, and if it's expensive to maintain two sets of beliefs (the real map for ourselves, and a fake map for our victims), we might end up with a tendency not just to be lying motherfuckers who deceive others, but also to _self_-deceive in situations where the fitness payoffs of tricking others outweighed those of being clear-sighted ourselves.
+
+That's why we're not smart enough to want a discipline of Actual Social Science. The benefits of having a collective understanding of human behavior—a _shared_ map that reflects the territory that we are—could be enormous, but beliefs about our own qualities, and those of socially-salient groups to which we belong (_e.g._, sex, race, and class) are _exactly_ those for which we face the largest incentive to deceive and self-deceive. Counterintuitively, I might not _want_ you to have accurate beliefs about the value of my friendship (or the disutility of my animosity), for the same reason that I might not want you to have accurate beliefs about the value of my used car. That makes it a lot harder not just to _get the right answer for the right reasons_, but also to _trust_ that your fellow so-called "scholars" are trying to get the right answer, rather than trying to sneak self-aggrandizing lies into the shared map in order to fuck you over. You can't _just_ write a friendly science book for oblivious science nerds about "things we know about some ways in which people are different from each other", because almost no one is that oblivious. To write and be understood, you have to do some sort of _positioning_ of how your work fits in to [the war](/2020/Feb/if-in-some-smothering-dreams-you-too-could-pace/) over the shared map.
+
+Murray positions _Human Diversity_ as a corrective to a "blank slate" orthodoxy that refuses to entertain any possibility of biological influences on psychological group differences. The three parts of the book are pitched not simply as "stuff we know about biologically-mediated group differences" (the oblivious-science-nerd approach that I would prefer), but as a rebuttal to "Gender Is a Social Construct", "Race Is a Social Construct", and "Class Is a Function of Privilege." At the same time, however, Murray is careful to position his work as _nonthreatening_: "there are no monsters in the closet," he writes, "no dread doors that we must fear opening." He likewise "state[s] explicitly that [he] reject[s] claims that groups of people, be they sexes or races or classes, can be ranked from superior to inferior [or] that differences among groups have any relevance to human worth or dignity."
+
+I think this strategy is sympathetic but [ultimately ineffective](http://zackmdavis.net/blog/2016/08/ineffective-deconversion-pitch/). Murray is trying to have it both ways: challenging the orthodoxy, while denying the possibility of any [unfortunate implications](https://tvtropes.org/pmwiki/pmwiki.php/Main/UnfortunateImplications) of the orthodoxy being false. It's like ... [theistic evolution](https://en.wikipedia.org/wiki/Theistic_evolution): satisfactory as long as you _don't think about it too hard_, but among those with a high [need for cognition](https://en.wikipedia.org/wiki/Need_for_cognition), who know what it's like to truly believe (as I once believed), it's not going to convince anyone who hasn't _already_ broken from the orthodoxy.
+
+Murray concludes, "Above all, nothing we learn will threaten human equality properly understood." I _strongly_ agree with the _moral sentiment_, the underlying [axiology](https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/) that makes this seem like a good and wise thing to say.
+
+And yet I have been ... [trained](https://www.lesswrong.com/posts/teaxCFgtmCQ3E9fy8/the-martial-art-of-rationality). Trained to instinctively apply my full powers of analytical rigor and skepticism [to even that which is most sacred](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points). Because my true loyalty is to the axiology—[to the _process_ underlying my _current best guess_](http://zackmdavis.net/blog/2017/03/dreaming-of-political-bayescraft/) as to that which is most sacred. If that which was believed to be most sacred turns out to not be entirely coherent ... then we might have some philosophical work to do, to [_reformulate_ the sacred moral ideal in a way that's actually coherent](https://arbital.greaterwrong.com/p/rescue_utility).
+
+"Nothing we learn will threaten _X_ _properly understood_." When you elide the specific assignment _X_ := "human equality", the _form_ of this statement is kind of suspicious, right? Why "properly understood"? It would be weird to say, "Nothing we learn will threaten the homotopy groups of an _n_-sphere _properly understood_."
+
+This kind of [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) seems like the kind of thing you would only invent if you _were_ secretly worried about _X_ being threatened by new discoveries, and wanted to protect your ability to backtrack and [re-gerrymander your definition of _X_ to protect what you](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) ([think that you](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief)) currently believe.
+
+If being an oblivious science nerd isn't an option, half-measures won't suffice. I think we can do better by going meta and analyzing the _functions_ being served by the constraints on our discourse and seeking out clever self-aware strategies for satisfying those functions _without_ [lying about everything](/2017/Jan/im-sick-of-being-lied-to/). We mustn't fear opening the dread meta-door in front of whether there actually _are_ dread doors that we must fear opening.
+
+Why _is_ the blank slate doctrine so compelling, that so many feel the need to protect it at all costs? (As I once felt the need.) It's not ... if you've read this far, I assume you _will_ forgive me—it's not _scientifically_ compelling. If you were studying humans the way an alien superintelligence would, trying to _get the right answer for the right reasons_ (which can include _conditional_ answers: if what humans are like depends on _choices_ about what we teach our children, then there will still be a fact of the matter as to what choices lead to what outcomes), you wouldn't put a whole lot of prior probability on the hypothesis "Both sexes and all ancestry-groupings of humans have the same distribution of psychological predispositions; any observed differences in behavior are solely attributable to differences in their environments." _Why_ would that be true? We _know_ that sexual dimorphism exists. We _know_ that reproductively isolated populations evolve different traits to adapt to their environments, like [those birds with differently-shaped beaks that Darwin saw on his boat trip](https://en.wikipedia.org/wiki/Darwin%27s_finches). We could certainly _imagine_ that none of the relevant selection pressures on humans happened to touch the brain—but why? Wouldn't that be kind of a weird coincidence?
+
+If the blank slate doctrine isn't _scientifically_ compelling—it's not something you would invent while trying to build shared maps that reflect the territory—then its appeal must have something to do with some function it plays in _conflicts_ over the shared map, where no one trusts each other to be doing Actual Social Science rather than lying to fuck everyone else over.
+
+And that's where the blank slate doctrine absolutely _shines_—it's the [Schelling point](/2019/Oct/self-identity-is-a-schelling-point/#schelling-point) for preventing group conflicts! (A [_Schelling point_](https://www.lesswrong.com/posts/yJfBzcDL9fBHJfZ6P/nash-equilibria-and-schelling-points) is a choice that's salient as [a focus for mutual expectations](/2019/Dec/more-schelling/): what I think that you think that I think ... _&c._ we'll choose.) If you admit that there could be differences between groups, you open up the questions of in what exact traits and of what exact magnitude, which people have an incentive to lie about to divert resources and power to their group by [establishing unfair conventions and then misrepresenting those contingent bargaining equilibria](/2020/Jan/book-review-the-origins-of-unfairness/) as some "inevitable" natural order.
+
+If you're afraid of purported answers being used as a pretext for oppression, you might hope to _make the question un-askable_. Can't oppress people on the basis of race if race _doesn't exist_! Denying the existence of sex is harder—which doesn't stop people from occasionally trying. But the taboo mostly only applies to _psychological_ trait differences, because those are a [sensitive subject](http://benjaminrosshoffman.com/judgment-punishment-and-the-information-suppression-field/)—and easier to motivatedly _see what you want to see_: whereas things like height or skin tone can be directly seen and uncontroversially measured with well-understood physical instruments (like a meterstick or digital photo pixel values), psychological assessments are _much_ more complicated and therefore hard to detach from the eye of the beholder. (If I describe Mary as "warm, compassionate, and agreeable", the words mean _something_ in the sense that they change what experiences you anticipate—if you believed my report, you would be _surprised_ if Mary were to kick your dog and make fun of your nose job—but the things that they mean are a high-level statistical signal in behavior for which we [don't have a simple measurement device](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) like a meterstick to appeal to if you and I don't trust each other's character assessments of Mary.)
+
+Notice how the "not allowing sex and race differences in psychological traits to appear on shared maps is the Schelling point for resistance to sex- and race-based oppression" framing actually gives us an _explanation_ for _why_ one might reasonably have a sense that there are dread doors that we must not open. Undermining the "everyone is Actually Equal" Schelling point could [catalyze a preference cascade](https://www.reddit.com/r/slatestarcodex/comments/8q8p6n/culture_war_roundup_for_june_11/e0mxwe9/)—a [slide down the slippery slope to the next Schelling point](https://www.lesswrong.com/posts/Kbm6QnJv9dgWsPHQP/schelling-fences-on-slippery-slopes), which might be a lot worse than the _status quo_ on the "amount of rape and genocide" metric, even if it does slightly better on "estimating heritability coefficients." The orthodoxy isn't just being dumb for no reason. In analogy, Galileo and Darwin weren't _trying_ to undermine Christianity—they had much more interesting things to think about—but religious authorities were _right_ to fear heliocentrism and evolution: if the prevailing coordination equilibrium depends on lies, then telling the truth _is_ a threat and it _is_ disloyal. And if the prevailing coordination equilibrium is basically _good_, then you can see why purported truth-tellers striking at the heart of the faith might be believed to be evil.
+
+Murray opens the parts of the book about sex and race with acknowledgements of the injustice of historical patriarchy ("When the first wave of feminism in the United States got its start [...] women were rebelling not against mere inequality, but against near-total legal subservience to men") and racial oppression ("slavery experienced by Africans in the New World went far beyond legal constraints [...] The freedom granted by emancipation in America was only marginally better in practice and the situation improved only slowly through the first half of the twentieth century"). It feels ... defensive? Coerced? It probably _is_ coerced. (To his credit, Murray is generally pretty forthcoming about how the need to write "defensively" shaped the book, as in a sidebar in the introduction that says that he'd prefer to say a lot more about evopsych, but he chose to just focus on empirical findings in order to avoid the charge of telling [just-so stories](https://en.wikipedia.org/wiki/Just-so_story).)
+
+But this kind of defensive half-measure satisfies no one. From the oblivious-science-nerd perspective—the view that agrees with Murray that "everyone should calm down"—you shouldn't _need_ to genuflect to the memory of some historical injustice before you're allowed to talk about Science. But from the perspective that cares about Justice and not just Truth, an _insincere_ gesture or a strategic concession is all the more dangerous insofar as it could function as camouflage for a nefarious hidden agenda. If your work is explicitly aimed at _destroying the anti-oppression Schelling-point belief_, a few hand-wringing historical interludes and bromides about human equality having no testable implications (!!) aren't going to clear you of the suspicion that you're _doing it on purpose_—trying to destroy the anti-oppression Schelling point in order to oppress, and not because anything that can be destroyed by the truth, should be.
+
+And sufficient suspicion makes communication nearly impossible. (If you _know_ someone is lying, their words mean nothing, [not even as the opposite of the truth](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence).) As far as many of Murray's detractors are concerned, it almost doesn't matter what the text of _Human Diversity_ says, how meticulously researched of a psychology/neuroscience/genetics lit review it is. From their perspective, Murray is "hiding the ball": they're not mad about _this_ book; they're mad about specifically chapters 13 and 14 of a book Murray coauthored twenty-five years ago. (I don't think I'm claiming to be a mind-reader here; the first 20% of [_The New York Times_'s review of _Human Diversity_](https://archive.is/b4xKB) is pretty explicit and representative.)
+
+In 1994's _The Bell Curve: Intelligence and Class Structure in American Life_, Murray and coauthor Richard J. Herrnstein argued that a lot of variation in life outcomes is explained by variation in intelligence. Some people think that folk concepts of "intelligence" or being "smart" are ill-defined and therefore not a proper object of scientific study. But that hasn't stopped some psychologists from trying to construct tests purporting to measure an "intelligence quotient" (or _IQ_ for short). It turns out that if you give people a bunch of different mental tests, the results all positively correlate with each other: people who are good at one mental task, like listening to a list of numbers and repeating them backwards ("reverse digit span"), are also good at others, like knowing what words mean ("vocabulary"). There's a lot of fancy linear algebra involved, but basically, you can visualize people's test results as a hyper[ellipsoid](https://en.wikipedia.org/wiki/Ellipsoid) in some high-dimensional space where the dimensions are the different tests. (I rely on this ["configuration space"](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace) visual metaphor _so much_ for _so many_ things that when I started [my secret ("secret") gender blog](/), it felt right to put it under a `.space` [TLD](https://en.wikipedia.org/wiki/Top-level_domain).) The longest axis of the hyperellipsoid corresponds to the "_g_ factor" of "general" intelligence—the choice of axis that cuts through the most variance in mental abilities.
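A minimal simulation of this picture, with invented factor loadings, and numpy's plain eigendecomposition standing in for the fancier factor-analysis machinery psychometricians actually use:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)  # latent "general" ability for each simulated person

# Each test score = g times a loading, plus independent noise (loadings invented):
loadings = np.array([0.8, 0.7, 0.6, 0.75])
noise_sd = np.sqrt(1 - loadings**2)  # scale noise so each test has unit variance
tests = g[:, None] * loadings + rng.normal(size=(n, 4)) * noise_sd

# All the pairwise correlations come out positive (the "positive manifold"):
corr = np.corrcoef(tests, rowvar=False)

# The longest axis of the hyperellipsoid is the top eigenvector of the
# correlation matrix—every test loads positively on it:
eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
first_axis = eigvecs[:, -1]
first_axis = first_axis * np.sign(first_axis.sum())  # fix arbitrary sign
assert (first_axis > 0).all()
```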
+
+It's important not to overinterpret the _g_ factor as some unitary essence of intelligence rather than the length of a hyperellipsoid. It seems likely that [if you gave people a bunch of _physical_ tests, they would positively correlate with each other](https://www.talyarkoni.org/blog/2010/03/07/what-the-general-factor-of-intelligence-is-and-isnt-or-why-intuitive-unitarianism-is-a-lousy-guide-to-the-neurobiology-of-higher-cognitive-ability/), such that you could extract a ["general factor of athleticism"](https://isteve.blogspot.com/2007/09/g-factor-of-sports.html). (It would be really interesting if anyone's actually done this using the same methodology used to construct IQ tests!) But _athleticism_ is going to be a _very_ "coarse" construct for which [the tails come apart](https://www.lesswrong.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart): for example, world champion 100-meter sprinter Usain Bolt's best time in the _800_ meters is [reportedly only around 2:10](https://www.newyorker.com/sports/sporting-scene/how-fast-would-usain-bolt-run-the-mile) [or 2:07](https://archive.is/T988h)! (For comparison, _I_ ran a 2:08.3 in high school once.)
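The tails-come-apart phenomenon is easy to simulate; the correlation of 0.5 between the two abilities here is an assumption invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
sprint = rng.normal(size=n)
# Assume (for illustration) endurance correlates with sprinting at r = 0.5:
endurance = 0.5 * sprint + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

r = np.corrcoef(sprint, endurance)[0, 1]  # sample correlation ≈ 0.5

# The single best sprinter is typically well above average at endurance,
# but almost never the single best—the extremes are different people:
best_sprinter = sprint.argmax()
endurance_percentile = (endurance < endurance[best_sprinter]).mean()
```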
+
+Anyway, so Murray and Herrnstein talk about this "intelligence" construct, and how it's heritable, and how it predicts income, school success, not being a criminal, _&c._, and how this has all sorts of implications for Society and inequality and class structure and stuff. [TODO: mention "Coming Apart" thesis?]
+
+This _should_ just be more social-science nerd stuff, the sort of thing that would only draw your attention if, like me, you feel bad about not being smart enough to do algebraic topology and want to console yourself by at least knowing about the Science of not being smart enough to do algebraic topology. The reason everyone _and her dog_ is still mad at Charles Murray a quarter of a century later is Chapter 13, "Ethnic Differences in Cognitive Ability", and Chapter 14, "Ethnic Inequalities in Relation to IQ". So, _apparently_, different ethnic/"racial" groups have different average scores on IQ tests. [Ashkenazi Jews do the best](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/), which is why I sometimes privately joke that the fact that I'm [only 85% Ashkenazi (according to 23andMe)](/images/ancestry_report.png) explains my low IQ. ([I got a 131](/images/wisc-iii_result.jpg) on the [WISC-III](https://en.wikipedia.org/wiki/Wechsler_Intelligence_Scale_for_Children) at age 10, but that's pretty dumb compared to some of my [robot-cult](/tag/my-robot-cult/) friends.) East Asians do a little better than Europeans/"whites". And—this is the part that no one is happy about—the difference between U.S. whites and U.S. blacks is about Cohen's _d_ ≈ 1. (If two groups differ by _d_ = 1 on some measurement that's normally distributed within each group, that means that the mean of the group with the lower average measurement is at the 16th percentile of the group with the higher average measurement, or that a uniformly-randomly selected member of the group with the higher average measurement has a probability of about 0.76 of having a higher measurement than a uniformly-randomly selected member of the group with the lower average measurement.)
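Those two numbers in the parenthetical fall straight out of the standard normal CDF:

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal cumulative distribution function
d = 1.0                 # standardized mean difference between the two groups

# The lower-scoring group's mean, located within the higher-scoring group's
# distribution:
percentile = Phi(-d)              # ≈ 0.16, i.e., the 16th percentile

# P(random member of higher group > random member of lower group): the
# difference of two unit-variance normals has variance 2, hence the sqrt(2):
p_superiority = Phi(d / 2**0.5)   # ≈ 0.76
```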
+
+It's important not to overinterpret the IQ-scores-by-race results; there are a bunch of standard caveats that go here that everyone's treatment of the topic needs to include. Again, just because variance in a trait is statistically associated with variance in genes _within_ a population, does _not_ mean that differences in that trait _between_ populations are _caused_ by genes: [remember the illustrations about](#heritability-caveats) sun-deprived plants and internet-deprived red-haired children. Group differences in observed tested IQs are entirely compatible with a world in which those differences are entirely due to the environment imposed by an overtly or structurally racist society. Maybe the tests are culturally biased. Maybe people with higher socioeconomic status get more opportunities to develop their intellect, and racism impedes socio-economic mobility. And so on.
+
+The problem is, a lot of the blank-slatey environmentally-caused-differences-only hypotheses for group IQ differences start to look less compelling when you look into the details. "Maybe the tests are biased", for example, isn't an insurmountable defeater to the entire endeavor of IQ testing—it is _itself_ a falsifiable hypothesis, or can become one if you specify what you mean by "bias" in detail. One idea of what it would mean for a test to be _biased_ is if it's partially measuring something other than what it purports to be measuring: if your test measures a _combination_ of "intelligence" and "submission to the hegemonic cultural dictates of the test-maker", then individuals and groups that submit less to your cultural hegemony are going to score worse, and if you _market_ your test as unbiasedly measuring intelligence, then people who believe your marketing copy will be misled into thinking that those who don't submit are dumber than they really are. But if so, and if not all of your individual test questions are _equally_ loaded on intelligence and cultural-hegemony, then the cultural bias should _show up in the statistics_. If some questions are more "fair" and others are relatively more culture-biased, then you would expect the _order of item difficulties_ to differ by culture: the ["item characteristic curve"](/papers/baker-kim-the_item_characteristic_curve.pdf) plotting the probability of getting a biased question "right" as a function of _overall_ test score should differ by culture, with the hegemonic group finding it "easier" and others finding it "harder". Conversely, if the questions that discriminate most between differently-scoring cultural/ethnic/"racial" groups were the same as the questions that discriminate between (say) younger and older children _within_ each group, that would be the kind of statistical clue you would expect to see if the test was unbiased and the group difference was real.
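A minimal sketch of the item-characteristic-curve logic, using a simple one-parameter logistic (Rasch-style) model; the `bias` shift is my illustrative stand-in for an item's cultural loading:

```python
import math

def icc(ability, difficulty, bias=0.0):
    """P(answering correctly) as a logistic function of overall ability.
    `bias` shifts the item's effective difficulty for one group only."""
    return 1 / (1 + math.exp(-(ability - difficulty - bias)))

# On an unbiased item, equally-able members of both groups do equally well:
assert icc(1.0, 0.5) == icc(1.0, 0.5, bias=0.0)

# On a culturally loaded item, the disfavored group finds it "harder" at every
# level of overall score; this is the statistical signature that differential
# item functioning analyses are designed to detect:
for ability in (-1.0, 0.0, 1.0, 2.0):
    assert icc(ability, 0.5, bias=1.0) < icc(ability, 0.5)
```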
+
+Hypotheses that accept IQ test results as unbiased, but attribute group differences in IQ to the environment, also make statistical predictions. Controlling for parental socioeconomic status only cuts the black–white gap by a third.
+
+[TODO: sentence about sources of variation within/between groups based on Jensen]
+
+[TODO: sentence about colorism based on https://www.mdpi.com/2624-8611/1/1/17/htm "Skin color is actually only controlled by a small number of alleles, so if you think societal discrimination on skin color causes IQ differences"]
+
+And so on.
+
+In mentioning these arguments in passing, I'm _not_ trying to provide a comprehensive lit review on the causality of group IQ differences. (That's [someone else's blog](https://humanvarieties.org/2019/12/22/the-persistence-of-cognitive-inequality-reflections-on-arthur-jensens-not-unreasonable-hypothesis-after-fifty-years/).) I'm not (that) interested in this particular topic, and [without having mastered the technical literature, my assessment would be of little value](https://www.gwern.net/Mistakes#mu). Rather, I am ... doing some context-setting for the problem I _am_ interested in, of fixing public discourse. The reason we can't have an intellectually-honest public discussion about human biodiversity is because good people want to respect the anti-oppression Schelling point and are afraid of giving ammunition to racists and sexists in the war over the shared map. "Black people are, on average, genetically less intelligent than white people" is the kind of sentence that pretty much only racists would feel _good_ about saying out loud, independently of its actual truth value. In a world where most speech is about manipulating shared maps for political advantage rather than _getting the right answer for the right reasons_, it is _rational_ to infer that anyone who entertains such hypotheses is either motivated by racial malice, or is at least complicit with it—and that rational expectation isn't easily cancelled with a _pro forma_ "But, but, civil discourse" or "But, but, the true meaning of Equality is unfalsifiable" [disclaimer](http://www.overcomingbias.com/2008/06/against-disclai.html).
+
+To speak to those who aren't _already_ oblivious science nerds—or are committed to emulating such, as it is scientifically dubious whether anyone is really that oblivious—you need to put _more effort_ into your excuse for why you're interested in these topics. Here's mine, and it's from the heart, though it's up to the reader to judge for herself how credible I am when I say this—
+
+I don't want to be complicit with hatred or oppression. I want to stay loyal to the underlying egalitarian–individualist axiology that makes the blank slate doctrine _sound like a good idea_. But I also want to understand reality, to make sense of things. I want a world that's not lying to me. Having to believe false things—or even just not being able _say_ certain true things when they would otherwise be relevant—exacts a _dire_ cost on our ability to make sense of the world, because you can't just censor a few forbidden hypotheses—[you have to censor everything that _implies_ them](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies), and everything that implies _them_: the more adept you are at making logical connections, [the more of your mind you need to excise to stay in compliance](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology).
+
+We can't talk about group differences, for fear that anyone arguing that differences exist is just trying to shore up oppression. But ... structural oppression and actual group differences can _both exist at the same time_. They're not contradicting each other! Like, the fact that men are physically stronger than women (on average, but the effect size is enormous, like _d_ ≈ 2.6 for total muscle mass) is _not unrelated_ to the persistence of patriarchy! (The ability to _credibly threaten_ to physically overpower someone, [gives the more powerful party a bargaining advantage](/2020/Jan/book-review-the-origins-of-unfairness/#threatpoints-and-bargaining), even if the threat is typically unrealized.) That doesn't mean patriarchy is good; to think so would be to commit the [naturalistic fallacy](https://plato.stanford.edu/entries/moral-non-naturalism/#NatFal) of [attempting to derive an _ought_ from an _is_](https://plato.stanford.edu/entries/hume-moral/#io). No one would say that famine and plague are good just because they, too, are subject to scientific explanation. This is pretty obvious, really? But similarly, genetically-mediated differences in cognitive repertoires between ancestral populations are probably going to be _part_ of the explanation for _why_ we see the particular forms of inequality and oppression that we do, just as a brute fact of history devoid of any particular moral significance, like how part of the explanation for why European conquest of the Americas happened earlier and went smoother for the invaders than the colonization of Africa had to do with the disease burden going the other way (Native Americans were particularly vulnerable to smallpox, but Europeans were particularly vulnerable to malaria).
+
+Again—obviously—_is_ does not imply _ought_. [TODO: explain that you should imagine yourself in the inferior group]
+
+I don't know how to build a better world, but it seems like there are quite _general_ grounds on which we should expect that it would be helpful to be able to _talk_ about social problems in the language of cause and effect, with the austere objectivity of an engineering discipline. If you want to build a bridge (that will actually stay up), you need to study the ["the careful textbooks \[that\] measure \[...\] the load, the shock, the pressure \[that\] material can bear."](http://www.kiplingsociety.co.uk/poems_strain.htm) If you want to build a just Society (that will actually stay up), you need a discipline of Actual Social Science that can publish textbooks, and to get _that_, you need the ability to _talk_ about basic facts about human existence and make simple logical and statistical inferences between them.
+
+And no one can do it! [("Well for us, if even we, even for a moment, can get free our heart, and have our lips unchained—for that which seals them hath been deep-ordained!")](https://www.poetryfoundation.org/poems/43585/the-buried-life) Individual scientists can get results in their respective narrow disciplines; Charles Murray can just _barely_ summarize the science to a semi-popular audience without coming off as _too_ overtly evil to modern egalitarian moral sensibilities. (At least, the smarter egalitarians? Or, maybe I'm just old.) But at least a couple aspects of reality are even _worse_ (with respect to naïve, non-renormalized egalitarian moral sensibilities) than the ball-hiders like Murray can admit, having already blown their entire [Overton budget](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) explaining the relevant empirical findings.
+
+Murray approvingly quotes Steven Pinker (a fellow ball-hider, though [Pinker is better at it](https://archive.is/bNo2q)): "Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group."
+
+A fine sentiment. I _emphatically_ agree with the _underlying moral intuition_ that makes "Individuals should not be judged by group membership" _sound like_ a correct moral principle—one cries out at the _monstrous injustice_ of the individual being oppressed on the basis of mere stereotypes of what other people who _look_ like them might statistically be like.
+
+But can I take this _literally_ as the _exact_ statement of a moral principle? _Technically?_—no! That's actually not how epistemology works! The proposed principle derives its moral force from the case of complete information: if you _know for a fact_ that I have moral property P, then it would be monstrously unjust to treat me differently just because other people who look like me mostly don't have moral property P. But in the real world, we often—usually—don't _have_ complete information about people, [or even about ourselves](/2016/Sep/psychology-is-about-invalidating-peoples-identities/).
+
+Bayes's theorem (just [a few inferential steps away from the definition of conditional probability itself](https://en.wikipedia.org/wiki/Bayes%27_theorem#Derivation), barely worthy of being called a "theorem") states that for hypothesis H and evidence E, P(H|E) = P(E|H)P(H)/P(E). This is [the fundamental equation](https://www.readthesequences.com/An-Intuitive-Explanation-Of-Bayess-Theorem) [that governs](https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation) [all thought](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure). When you think you see a tree, that's really just your brain computing a high value for the probability of your sensory experiences given the hypothesis that there is a tree multiplied by the prior probability that there is a tree, as a fraction of all the possible worlds that could be generating your sensory experiences.
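In code, with made-up numbers for the tree example (this is just the theorem itself, with P(E) expanded by the law of total probability; the specific probabilities are invented for illustration):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) given the prior P(H) and the likelihoods of E under H and not-H."""
    # Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# "Is there a tree?": prior P(tree) = 0.2, P(tree-ish percept | tree) = 0.9,
# P(tree-ish percept | no tree) = 0.05.
print(posterior(0.2, 0.9, 0.05))  # 0.18 / (0.18 + 0.04) ≈ 0.818
```

The denominator is where "as a fraction of all the possible worlds that could be generating your sensory experiences" lives: worlds with a tree and worlds without one both contribute probability mass to the percept.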
+
+What goes for seeing trees, goes the same for "treating individuals as individuals": the _process_ of getting to know someone as an individual involves exploiting the statistical relationships between what you observe and what you're trying to learn about. If you see someone wearing an Emacs tee-shirt, you're going to assume that they _probably_ use Emacs, and asking them about their [dot-emacs file](https://www.gnu.org/software/emacs/manual/html_node/emacs/Init-File.html) is going to seem like a more promising casual conversation-starter than it would with someone wearing a non-Emacs shirt. Not _with certainty_—maybe they just found the shirt in a thrift store and thought it looked cool—but the shirt _shifts the probabilities_ that inform your decisionmaking.
+
+The problem that Bayesian reasoning poses for naïve egalitarian moral intuitions, is that, as far as I can tell, there's no _philosophically principled_ reason for "probabilistic update about someone's psychology on the evidence that they're wearing an Emacs shirt" to be treated _fundamentally_ differently from "probabilistic update about someone's psychology on the evidence that she's female". These are of course different questions, but to a Bayesian reasoner (an inhuman mathematical abstraction for _getting the right answer_ and nothing else), they're the same _kind_ of question: the "correct" update to make is an _empirical_ matter that depends on the actual distribution of psychological traits among Emacs-shirt-wearers and among women. (In the possible world where _most_ people wear tee-shirts from the thrift store that looked cool without knowing what they mean, the "Emacs shirt → Emacs user" inference would usually be wrong.) But to a naïve egalitarian, judging someone on their expressed affinity for Emacs is good, but judging someone on their sex is _bad and wrong_.
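To see that the two updates really are the same kind of computation, here is Bayes's theorem in odds form (posterior odds = prior odds × likelihood ratio) applied to both questions. Every number is invented for illustration; the point is only that nothing in the math cares which kind of evidence is being conditioned on:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes's theorem in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    return odds / (1 + odds)

# P(uses Emacs | Emacs shirt): suppose 1% of people use Emacs, and Emacs
# users are 300 times likelier than non-users to be wearing the shirt.
p_emacs = odds_to_prob(update_odds(0.01 / 0.99, 300))  # ≈ 0.75

# P(some psychological trait | female): suppose 1:1 prior odds on the trait,
# and women are 1.5 times likelier to exhibit the evidence than men.
p_trait = odds_to_prob(update_odds(1.0, 1.5))  # = 0.6

print(p_emacs, p_trait)
```

Whether either update is _large_ is an empirical question about the actual distributions (if most Emacs shirts came from thrift stores, the first likelihood ratio would be close to 1, and the update close to nil); the _form_ of the inference is identical in both cases.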
+
+I used to be a naïve egalitarian. I was very passionate about it. I was eighteen years old. I am—again—still fond of the moral sentiment, and eager to renormalize it into something that makes sense. (Some egalitarian anxieties do translate perfectly well into the Bayesian setting, as I'll explain in a moment.) But the abject horror I felt at eighteen at the mere suggestion of _making generalizations_ about _people_ just—doesn't make sense. Not that it _shouldn't_ be practiced (it's not that my heart wasn't in the right place), but that it _can't_ be practiced—that the people who think they're practicing it are just confused about how their own minds work.
+
+Give people photographs of various women and men and ask them to judge how tall the people in the photos are, as [Nelson _et al._ 1990 did](/papers/nelson_et_al-everyday_base_rates_sex_stereotypes_potent_and_resilient.pdf), and people's guesses reflect both the photo-subjects' actual heights and (to a lesser degree) their sex. Unless you expect people to be perfect at assessing height from photographs (when they don't know how far away the cameraperson was standing, aren't ["trigonometrically omniscient"](https://plato.stanford.edu/entries/logic-epistemic/#LogiOmni), _&c._), this behavior is just _correct_: men really are taller than women on average (I've seen _d_ ≈ 1.4–1.7 depending on the source), so P(true height|apparent height, sex) ≠ P(true height|apparent height) because of [regression to the mean](https://en.wikipedia.org/wiki/Regression_toward_the_mean) (and women and men regress to different means). But [this all happens subconsciously](/2020/Apr/peering-through-reverent-fingers/): in the same study, when the authors tried height-matching the photographs (for every photo of a woman of a given height, there was another photo in the set of a man of the same height), _telling_ the participants about the height-matching, _and_ offering a cash reward to the best height-judge, more than half of the stereotyping effect remained. It would seem that people can't consciously readjust their learned priors in reaction to verbal instructions pertaining to an artificial context.
+
+Once you understand at a _technical_ level that probabilistic reasoning about demographic features is both epistemically justified, _and_ implicitly implemented as part of the way your brain processes information _anyway_, then a moral theory that forbids this starts to look less compelling? Of course, statistical discrimination on demographic features is only epistemically justified to exactly the extent that it helps _get the right answer_. Renormalized-egalitarians can still be properly outraged about the monstrous tragedies where I have moral property P but I _can't prove it to you_, so you instead guess _incorrectly_ that I don't just because other people who look like me mostly don't, and you don't have any better information to go on—or tragedies in which a feedback loop between predictions and social norms creates or amplifies group differences that wouldn't exist under some other social equilibrium.
+
+Nelson _et al._ also found that when the people in the photographs were pictured sitting down, judgements of height depended much more on sex than when the photo-subjects were standing. This too makes Bayesian sense: if it's harder to tell how tall an individual is when they're sitting down, you rely more on your demographic prior. In order to reduce injustice to people who are outliers for their group, one could argue that there's a moral imperative to seek out interventions that get more fine-grained information about individuals, so that we don't need to rely on the coarse, vague information embodied in demographic stereotypes. The _moral spirit_ of egalitarian–individualism mostly survives in our efforts to [hug the query](https://www.lesswrong.com/posts/2jp98zdLo898qExrr/hug-the-query) and get [specific information](/2017/Nov/interlude-x/) with which to discriminate amongst individuals. (And _discriminate_—[to distinguish, to make distinctions](https://en.wiktionary.org/wiki/discriminate)—is the correct word.) If you care about someone's height, it is _better_ to precisely measure it using a meterstick than to just look at them standing up, and it is better to look at them standing up than to look at them sitting down. If you care about someone's skills as a potential employee, it is _better_ to give them a work-sample test that assesses the specific skills you're interested in than it is to rely on a general IQ test, and it's _far_ better to use an IQ test than to use mere stereotypes. If our means of measuring individuals aren't reliable or cheap enough, such that we still end up using prior information from immutable demographic categories, that's a problem of grave moral seriousness—but in light of the [_mathematical laws_](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) governing reasoning under uncertainty, it's a problem that realistically needs to be solved with _better tests_ and _better signals_, not by _pretending not to have a prior_.
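Both of the Nelson _et al._ findings (guesses regressing toward the group mean, and heavier reliance on the sex prior for the sitting-down photos) fall out of a toy Gaussian model in which the estimate is a precision-weighted average of the noisy observation and the group mean. All the parameters here (group means, spreads, noise levels) are invented for illustration; only the qualitative pattern matters:

```python
def posterior_mean(obs, mu_prior, sigma_prior, sigma_obs):
    """Precision-weighted average of a noisy observation and the group prior mean."""
    w_obs = 1 / sigma_obs**2    # precision of the observation
    w_prior = 1 / sigma_prior**2  # precision of the group prior
    return (w_obs * obs + w_prior * mu_prior) / (w_obs + w_prior)

# Hypothetical group means (cm) and within-group spread:
MU_MEN, MU_WOMEN, SIGMA = 178.0, 165.0, 7.0

apparent = 172.0  # the same apparent height read off the photo

# Standing: a relatively reliable cue (small observation noise).
standing_m = posterior_mean(apparent, MU_MEN, SIGMA, sigma_obs=3.0)
standing_w = posterior_mean(apparent, MU_WOMEN, SIGMA, sigma_obs=3.0)

# Sitting: a much noisier cue, so the estimate leans harder on the sex prior.
sitting_m = posterior_mean(apparent, MU_MEN, SIGMA, sigma_obs=10.0)
sitting_w = posterior_mean(apparent, MU_WOMEN, SIGMA, sigma_obs=10.0)

print(f"standing: man {standing_m:.1f}, woman {standing_w:.1f}")
print(f"sitting:  man {sitting_m:.1f}, woman {sitting_w:.1f}")
```

With the noisier "sitting" observation, the two estimates for the _same_ apparent height are pulled much further apart, each toward its respective group mean; sharpening the measurement (the meterstick, the work-sample test) shrinks the gap without anyone having to pretend the prior doesn't exist.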
+
+The other place where I think Murray is hiding the ball (even from himself) is in his discussion of the value of cognitive abilities. Murray writes—
+
+> I think at the root [of the reluctance to discuss immutable human differences] is the new upper class's conflation of intellectual ability and the professions it enables with human worth. Few admit it, of course. But the evolving zeitgeist of the new upper class has led to a misbegotten hierarchy whereby being a surgeon is _better_ in some sense of human worth than being an insurance salesman, being an executive in a high-tech firm is _better_ than being a housewife, and a neighborhood of people with advanced degrees is _better_ than a neighborhood of high-school graduates. To put it so baldly makes it obvious how senseless it is. There shouldn't be any relationship between these things and human worth.