+Again—obviously—_is_ does not imply _ought_. In deference to the historically well-justified egalitarian fear that such hypotheses will primarily be abused by bad actors to portray their own group as "superior", I find it helpful to dwell on science-fictional scenarios in which the boot of history is on one's own neck. If a race of lavender humans from an alternate dimension were to come through a wormhole and invade our Earth and cruelly subjugate _your_ people, you would probably be pretty angry, and maybe join a paramilitary group aimed at overthrowing lavender supremacy and reinstating civil rights. The possibility of a partially-biological _explanation_ for _why_ the purple bastards discovered wormhole generators when we didn't (maybe they have _d_ ≈ 1.8 on us in visuospatial skills, enabling their population to be first to "roll" a lucky genius who could discover the wormhole field equations) would not make the conquest somehow justified.
+
+I don't know how to build a better world, but it seems like there are quite _general_ grounds on which we should expect it to be helpful to be able to _talk_ about social problems in the language of cause and effect, with the austere objectivity of an engineering discipline. If you want to build a bridge (that will actually stay up), you need to study ["the careful textbooks \[that\] measure \[...\] the load, the shock, the pressure \[that\] material can bear."](http://www.kiplingsociety.co.uk/poems_strain.htm) If you want to build a just Society (that will actually stay up), you need a discipline of Actual Social Science that can publish textbooks, and to get _that_, you need the ability to _talk_ about basic facts about human existence and to make simple logical and statistical inferences from them.
+
+And no one can do it! [("Well for us, if even we, even for a moment, can get free our heart, and have our lips unchained—for that which seals them hath been deep-ordained!")](https://www.poetryfoundation.org/poems/43585/the-buried-life) Individual scientists can get results in their respective narrow disciplines; Charles Murray can just _barely_ summarize the science to a semi-popular audience without coming off as _too_ overtly evil to modern egalitarian moral sensibilities. (At least, the smarter egalitarians? Or, maybe I'm just old.) But at least a couple aspects of reality are even _worse_ (with respect to naïve, non-renormalized egalitarian moral sensibilities) than the ball-hiders like Murray can admit, having already blown their entire [Overton budget](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) explaining the relevant empirical findings.
+
+Murray approvingly quotes Steven Pinker (a fellow ball-hider, though [Pinker is better at it](https://archive.is/bNo2q)): "Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group."
+
+A fine sentiment. I _emphatically_ agree with the _underlying moral intuition_ that makes "Individuals should not be judged by group membership" _sound like_ a correct moral principle—one cries out at the _monstrous injustice_ of the individual being oppressed on the basis of mere stereotypes of what other people who _look_ like them might statistically be like.
+
+But can I take this _literally_ as the _exact_ statement of a moral principle? _Technically?_—no! That's actually not how epistemology works! The proposed principle derives its moral force from the case of complete information: if you _know for a fact_ that I have moral property P, then it would be monstrously unjust to treat me differently just because other people who look like me mostly don't have moral property P. But in the real world, we often—usually—don't _have_ complete information about people, [or even about ourselves](/2016/Sep/psychology-is-about-invalidating-peoples-identities/).
+
+Bayes's theorem (just [a few inferential steps away from the definition of conditional probability itself](https://en.wikipedia.org/wiki/Bayes%27_theorem#Derivation), barely worthy of being called a "theorem") states that for hypothesis H and evidence E, P(H|E) = P(E|H)P(H)/P(E). This is [the fundamental equation](https://www.readthesequences.com/An-Intuitive-Explanation-Of-Bayess-Theorem) [that governs](https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation) [all thought](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure). When you think you see a tree, that's really just your brain computing a high value for the probability of your sensory experiences given the hypothesis that there is a tree multiplied by the prior probability that there is a tree, as a fraction of all the possible worlds that could be generating your sensory experiences.
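To make the formula concrete, here is a minimal sketch in Python; the scenario and all the probabilities are invented for illustration:

```python
# Bayes's theorem: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|not-H) * (1 - P(H))  (law of total probability).

def posterior(p_e_given_h, p_h, p_e_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Hypothesis H: "there is a tree in front of me."
# Invented numbers: trees appear in 30% of the scenes I look at; my visual
# system reports "tree" 95% of the time when a tree is there, and falsely
# reports "tree" 2% of the time when one isn't.
p = posterior(p_e_given_h=0.95, p_h=0.30, p_e_given_not_h=0.02)
print(round(p, 3))  # ≈ 0.953
```

Both factors matter: the same reliable "tree" percept would yield a much lower posterior in a world where trees were rare.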
+
+What goes for seeing trees goes for "treating individuals as individuals": the _process_ of getting to know someone as an individual involves exploiting the statistical relationships between what you observe and what you're trying to learn about. If you see someone wearing an Emacs tee-shirt, you're going to assume that they _probably_ use Emacs, and asking them about their [dot-emacs file](https://www.gnu.org/software/emacs/manual/html_node/emacs/Init-File.html) is going to seem like a better casual conversation-starter than it would for a random person in a non-Emacs shirt. Not _with certainty_—maybe they just found the shirt in a thrift store and thought it looked cool—but the shirt _shifts the probabilities_ that feed into your decisionmaking.
+
+The problem that Bayesian reasoning poses for naïve egalitarian moral intuitions, is that, as far as I can tell, there's no _philosophically principled_ reason for "probabilistic update about someone's psychology on the evidence that they're wearing an Emacs shirt" to be treated _fundamentally_ differently from "probabilistic update about someone's psychology on the evidence that she's female". These are of course different questions, but to a Bayesian reasoner (an inhuman mathematical abstraction for _getting the right answer_ and nothing else), they're the same _kind_ of question: the "correct" update to make is an _empirical_ matter that depends on the actual distribution of psychological traits among Emacs-shirt-wearers and among women. (In the possible world where _most_ people wear tee-shirts from the thrift store that looked cool without knowing what they mean, the "Emacs shirt → Emacs user" inference would usually be wrong.) But to a naïve egalitarian, judging someone on their expressed affinity for Emacs is good, but judging someone on their sex is _bad and wrong_.
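The claim that the "correct" update is an empirical matter can be sketched directly: the very same observation supports very different conclusions depending on the actual joint distribution. All the frequencies below are invented for illustration:

```python
# P(user | shirt) depends entirely on the empirical distribution of
# shirt-wearing among users and non-users. Two hypothetical worlds.

def p_user_given_shirt(p_user, p_shirt_given_user, p_shirt_given_nonuser):
    num = p_shirt_given_user * p_user
    return num / (num + p_shirt_given_nonuser * (1 - p_user))

# World 1: Emacs shirts are almost exclusively worn by Emacs users.
w1 = p_user_given_shirt(p_user=0.01, p_shirt_given_user=0.10,
                        p_shirt_given_nonuser=0.0001)
# World 2: most shirts come from thrift stores, so wearing one is nearly
# independent of actually using Emacs.
w2 = p_user_given_shirt(p_user=0.01, p_shirt_given_user=0.10,
                        p_shirt_given_nonuser=0.09)
print(round(w1, 2), round(w2, 3))  # ≈ 0.91 in world 1, ≈ 0.011 in world 2
```

Same shirt, same prior on Emacs use, wildly different posteriors: whether the inference is any good is a question about the world, not about which evidence is morally permitted.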
+
+I used to be a naïve egalitarian. I was very passionate about it. I was eighteen years old. I am—again—still fond of the moral sentiment, and eager to renormalize it into something that makes sense. (Some egalitarian anxieties do translate perfectly well into the Bayesian setting, as I'll explain in a moment.) But the abject horror I felt at eighteen at the mere suggestion of _making generalizations_ about _people_ just—doesn't make sense. Not that it _shouldn't_ be practiced (it's not that my heart wasn't in the right place), but that it _can't_ be practiced—that the people who think they're practicing it are just confused about how their own minds work.
+
+Give people photographs of various women and men and ask them to judge how tall the people in the photos are, as [Nelson _et al._ 1990 did](/papers/nelson_et_al-everyday_base_rates_sex_stereotypes_potent_and_resilient.pdf), and people's guesses reflect both the photo-subjects' actual heights and (to a lesser degree) their sex. Unless you expect people to be perfect at assessing height from photographs (when they don't know how far away the cameraperson was standing, aren't ["trigonometrically omniscient"](https://plato.stanford.edu/entries/logic-epistemic/#LogiOmni), _&c._), this behavior is just _correct_: men really are taller than women on average (I've seen _d_ ≈ 1.4–1.7 depending on the source), so P(true-height|apparent-height, sex) ≠ P(true-height|apparent-height) because of [regression to the mean](https://en.wikipedia.org/wiki/Regression_toward_the_mean) (and women and men regress to different means). But [this all happens subconsciously](/2020/Apr/peering-through-reverent-fingers/): in the same study, when the authors tried height-matching the photographs (for every photo of a woman of a given height, there was another photo in the set of a man of the same height), _telling_ the participants about the height-matching, _and_ offering a cash reward to the best height-judge, more than half of the stereotyping effect remained. It would seem that people can't consciously readjust their learned priors in reaction to verbal instructions pertaining to an artificial context.
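The regressing-to-different-means behavior can be sketched with a standard normal-normal Bayesian update. The group means and standard deviations below are rough round numbers in the ballpark of published figures, and the photo-noise figure is invented:

```python
# With a Normal(mu_prior, sigma_prior^2) prior on true height and a noisy
# observation x ~ Normal(true height, sigma_obs^2), the posterior mean is a
# precision-weighted average of x and the group mean: the estimate
# "regresses" from the raw observation toward the group mean.

def posterior_mean(x, sigma_obs, mu_prior, sigma_prior):
    w_obs = 1 / sigma_obs**2      # precision of the observation
    w_prior = 1 / sigma_prior**2  # precision of the group prior
    return (w_obs * x + w_prior * mu_prior) / (w_obs + w_prior)

apparent = 172.0  # cm, as eyeballed from a photo (invented noise: 6 cm SD)
guess_if_male = posterior_mean(apparent, 6.0, mu_prior=175.0, sigma_prior=7.0)
guess_if_female = posterior_mean(apparent, 6.0, mu_prior=162.0, sigma_prior=7.0)
print(round(guess_if_male, 1), round(guess_if_female, 1))  # ≈ 173.3 vs. ≈ 167.8
```

Same photograph, different best guesses: exactly the "stereotyping" the participants exhibited, and exactly what minimizing expected error requires.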
+
+Once you understand at a _technical_ level that probabilistic reasoning about demographic features is both epistemically justified, _and_ implicitly implemented as part of the way your brain processes information _anyway_, then a moral theory that forbids this starts to look less compelling? Of course, statistical discrimination on demographic features is only epistemically justified to exactly the extent that it helps _get the right answer_. Renormalized-egalitarians can still be properly outraged about the monstrous tragedies where I have moral property P but I _can't prove it to you_, so you instead guess _incorrectly_ that I don't just because other people who look like me mostly don't, and you don't have any better information to go on—or tragedies in which a feedback loop between predictions and social norms creates or amplifies group differences that wouldn't exist under some other social equilibrium.
+
+Nelson _et al._ also found that when the people in the photographs were pictured sitting down, judgements of height depended much more on sex than when the photo-subjects were standing. This too makes Bayesian sense: when it's harder to tell how tall an individual is, you rely more on your demographic prior. In order to reduce injustice to people who are outliers for their group, one could argue that there's a moral imperative to seek out interventions that get more fine-grained information about individuals, so that we don't need to rely on the coarse, vague information embodied in demographic stereotypes. The _moral spirit_ of egalitarian–individualism mostly survives in our efforts to [hug the query](https://www.lesswrong.com/posts/2jp98zdLo898qExrr/hug-the-query) and get [specific information](/2017/Nov/interlude-x/) with which to discriminate amongst individuals. (And _discriminate_—[to distinguish, to make distinctions](https://en.wiktionary.org/wiki/discriminate)—is the correct word.) If you care about someone's height, it is _better_ to precisely measure it with a meterstick than to just look at them standing up, and it is better to look at them standing up than to look at them sitting down. If you care about someone's skills as a potential employee, it is _better_ to give them a work-sample test that assesses the specific skills you're interested in than to rely on a general IQ test, and it's _far_ better to use an IQ test than to rely on mere stereotypes.
+
+If our means of measuring individuals aren't reliable or cheap enough, such that we still end up using prior information from immutable demographic categories, that's a problem of grave moral seriousness—but in light of the [_mathematical laws_](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) governing reasoning under uncertainty, it's a problem that realistically needs to be solved with _better tests_ and _better signals_, not by _pretending not to have a prior_. This could take the form of _finer-grained_ stereotypes. If someone says of me, "Taylor Saotome-Westlake? Oh, he's a _man_, you know what _they're_ like," I would be offended—I mean, I would if I still believed that getting offended ever helps with anything. (It _never helps_.) I'm _not_ like typical men, I _don't like_ typical men, and I don't want to be confused with them. But if someone says, "Taylor Saotome-Westlake? Oh, he's one of those IQ 130, [mid-to-low Conscientiousness and Agreeableness, high Openness](https://en.wikipedia.org/wiki/Big_Five_personality_traits), left-libertarian American Jewish atheist autogynephilic male computer programmers; you know what _they're_ like," my response is to nod and say, "Yeah, pretty much." I'm not _exactly_ like the others, but I don't mind being confused with them.
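The sitting-down result falls out of the same normal-normal model: the noisier the individual measurement, the more weight the posterior mean puts on the demographic prior. A sketch, with invented noise levels standing in for the three measurement methods:

```python
# In a normal-normal model, the weight the posterior mean puts on the
# individual measurement (vs. the demographic prior) is
#   w = sigma_prior**2 / (sigma_prior**2 + sigma_obs**2),
# so noisier measurements shift weight toward the prior.

sigma_prior = 7.0  # cm, rough within-group SD of true height

weights = {}
for method, sigma_obs in [("meterstick", 0.5),
                          ("standing photo", 6.0),
                          ("sitting photo", 12.0)]:
    weights[method] = sigma_prior**2 / (sigma_prior**2 + sigma_obs**2)
    print(f"{method}: {weights[method]:.0%} weight on the individual")
```

Under these invented numbers, a meterstick nearly screens off the prior, while a sitting photo leaves the prior doing most of the work, which is why better tests and better signals, rather than a suppressed prior, are what actually reduce reliance on stereotypes.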
+
+The other place where I think Murray is hiding the ball (even from himself) is in his discussion of the value of cognitive abilities. Murray writes—
+
+> I think at the root [of the reluctance to discuss immutable human differences] is the new upper class's conflation of intellectual ability and the professions it enables with human worth. Few admit it, of course. But the evolving zeitgeist of the new upper class has led to a misbegotten hierarchy whereby being a surgeon is _better_ in some sense of human worth than being an insurance salesman, being an executive in a high-tech firm is _better_ than being a housewife, and a neighborhood of people with advanced degrees is _better_ than a neighborhood of high-school graduates. To put it so baldly makes it obvious how senseless it is. There shouldn't be any relationship between these things and human worth.
+
+I take strong issue with Murray's specific examples here—as an [incredibly bitter](http://zackmdavis.net/blog/2012/12/a-philosophy-of-education/) autodidact, I care not at all for formal school degrees, and as my fellow nobody pseudonymous blogger [Harold Lee points out](https://write.as/harold-lee/seizing-the-means-of-home-production), the domestic- and community-focused life of a housewife actually has a lot of desirable properties that many of those stuck in the technology rat race aspire to escape into. But after quibbling with the specific illustrations, I think I'm just going to bite the bullet here?
+
+_Yes_, intellectual ability _is_ a component of human worth! Maybe that's putting it baldly, but I think the _alternative_ is obviously senseless. The fact that I have the ability and motivation to (for example, among many other things I do) write this cool science–philosophy blog about my delusional paraphilia where I do things like summarize and critique the new Charles Murray book, is a big part of _what makes my life valuable_—both to me, and to the people who interact with me. If I were to catch COVID-19 next month and lose 40 IQ points due to oxygen-deprivation-induced brain damage and not be able to write blog posts like this one anymore, that would be _extremely terrible_ for me—it would make my life less worth living. And my friends who love me, love me not as an irreplaceably-unique-but-otherwise-featureless atom of person-ness, but _because_ my specific array of cognitive repertoires makes me a specific person who provides a specific kind of company. There can't be such a thing as _literally_ unconditional love, because to love _someone in particular_ implicitly imposes a condition: you're only committed to love those configurations of matter that constitute an implementation of your beloved, rather than someone or something else.
+
+Murray continues—
+
+> The conflation of intellectual ability with human worth helps to explain the new upper class's insistence that inequalities of intellectual ability must be the product of environmental disadvantage. Many people with high IQs really do feel sorry for people with low IQs. If the environment is to blame, then those unfortunates can be helped, and that makes people who want to help them feel good. If genes are to blame, it makes people who want to help them feel bad. People prefer feeling good to feeling bad, so they engage in confirmation bias when it comes to the evidence about the causes of human differences.
+
+I agree with Murray that this kind of psychology explains a lot of the resistance to hereditarian explanations. But as long as we're accusing people of motivated reasoning, I think Murray's solution engages in a similar kind of denial, just putting it in a different place. The idea that people are unequal in ways that matter is [legitimately too horrifying to contemplate](https://www.lesswrong.com/posts/faHbrHuPziFH7Ef7p/why-are-individual-iq-differences-ok), so liberals [deny the inequality](/2017/Dec/theres-a-land-that-i-see-or-the-spirit-of-intervention/), and conservatives deny [that it matters](https://www.lesswrong.com/posts/NG4XQEL5PTyguDMff/but-it-doesn-t-matter).
+
+