A fine sentiment. I _emphatically_ agree with the _underlying moral intuition_ that makes "Individuals should not be judged by group membership" _sound like_ a correct moral principle—one cries out at the _monstrous injustice_ of the individual being oppressed on the basis of mere stereotypes of what other people who _look_ like them might statistically be like.
But can I take this _literally_ as the _exact_ statement of a moral principle? _Technically?_—no! That's actually not how epistemology works! The proposed principle derives its moral force from the case of complete information: if you _know for a fact_ that I have moral property P, then it would be monstrously unjust to treat me differently just because other people who look like me mostly don't have moral property P. But in the real world, we often—usually—don't _have_ complete information about people, [or even about ourselves](/2016/Sep/psychology-is-about-invalidating-peoples-identities/).
Bayes's theorem (just [a few inferential steps away from the definition of conditional probability itself](https://en.wikipedia.org/wiki/Bayes%27_theorem#Derivation), barely worthy of being called a "theorem") states that for hypothesis H and evidence E, P(H|E) = P(E|H)P(H)/P(E). This is [the fundamental equation](http://yudkowsky.net/rational/bayes) [that governs](http://yudkowsky.net/rational/technical/) [all thought](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure). When you think you see a tree, that's really just your brain computing a high value for the probability of your sensory experiences given the hypothesis that there is a tree multiplied by the prior probability that there is a tree, as a fraction of all the possible worlds that could be generating your sensory experiences.
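As a toy numerical illustration (the numbers are arbitrary, chosen only to make the arithmetic visible):

```python
# Bayes's theorem, P(H|E) = P(E|H)P(H)/P(E), with arbitrary illustrative numbers.
p_h = 0.01              # prior probability of the hypothesis, P(H)
p_e_given_h = 0.9       # likelihood of the evidence if H is true, P(E|H)
p_e_given_not_h = 0.05  # likelihood of the evidence if H is false, P(E|¬H)

# P(E) by the law of total probability.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

p_h_given_e = p_e_given_h * p_h / p_e
print(p_h_given_e)  # ≈ 0.154: strong evidence, but the low prior keeps the posterior modest
```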
The problem that Bayesian reasoning poses for naïve egalitarian moral intuitions is that there's no _philosophically principled_ reason for "probabilistic update about someone's psychology on the evidence that they're wearing an Emacs shirt" to be treated _fundamentally_ differently from "probabilistic update about someone's psychology on the evidence that she's female".
These are of course different questions, but to a Bayesian reasoner (an inhuman mathematical abstraction for _getting the right answer_ and nothing else), they're the same _kind_ of question: the "correct" update to make is an _empirical_ matter that depends on the actual distribution of psychological traits among Emacs-shirt-wearers and among women. (In the possible world where _most_ people wearing such shirts just picked up something that looked cool at the thrift store without knowing what it means, the "Emacs shirt → Emacs user" inference would usually be wrong.) But to a naïve egalitarian, judging someone on their expressed affinity for Emacs is good, while judging someone on their sex is _bad and wrong_.
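To make that parenthetical concrete with invented numbers: the very same observation warrants a strong update in one possible world and only a weak one in another, depending entirely on the base rates.

```python
def p_emacs_user_given_shirt(p_user, p_shirt_given_user, p_shirt_given_nonuser):
    """P(Emacs user | Emacs shirt), by Bayes's theorem."""
    p_shirt = p_shirt_given_user * p_user + p_shirt_given_nonuser * (1 - p_user)
    return p_shirt_given_user * p_user / p_shirt

# World A (invented numbers): Emacs shirts are mostly worn by actual users.
print(p_emacs_user_given_shirt(0.02, 0.30, 0.001))  # ≈ 0.86

# World B (invented numbers): most such shirts are thrift-store finds worn for
# the aesthetic, so non-users are nearly as likely as users to be wearing one.
print(p_emacs_user_given_shirt(0.02, 0.30, 0.05))   # ≈ 0.11
```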
I used to be a naïve egalitarian. I was very passionate about it. I was eighteen years old. I am—again—still fond of the moral sentiment, and eager to renormalize it into something that makes sense. (Some egalitarian anxieties do translate perfectly well into the Bayesian setting, as I'll explain in a moment.) But the abject horror I felt at eighteen at the mere suggestion of _making generalizations_ about _people_ just—doesn't make sense. Not that it _shouldn't_ be practiced (it's not that my heart wasn't in the right place), but that it _can't_ be practiced—that the people who think they're practicing it are just confused about how their own minds work.

Give people photographs of various women and men and ask them to judge how tall the people in the photos are, as [Nelson _et al._ 1990 did](/papers/nelson_et_al-everyday_base_rates_sex_stereotypes_potent_and_resilient.pdf), and their guesses reflect the photo-subjects' actual heights, but also (to a lesser degree) their sex. Unless you expect people to be perfect at assessing height from photographs (when they don't know how far away the cameraperson was standing, aren't ["trigonometrically omniscient"](https://plato.stanford.edu/entries/logic-epistemic/#LogiOmni), _&c._), this behavior is just _correct_: men really are taller than women on average (I've seen _d_ ≈ 1.4–1.7 depending on the source), so P(true-height|apparent-height, sex) ≠ P(true-height|apparent-height) because of [regression to the mean](https://en.wikipedia.org/wiki/Regression_toward_the_mean) (and women and men regress to different means).
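Here is a toy sketch of that shrinkage (the parameters are made-up round numbers in the ballpark of the figures above, not values from the study), modeling within-sex height as normal and height-as-judged-from-a-photo as the true height plus normal noise:

```python
def estimate_true_height(apparent_cm, sex_mean_cm, sex_sd_cm=7.0, photo_noise_sd_cm=6.0):
    """Bayes-optimal point estimate of true height from a noisy photo impression,
    using the conjugate normal-normal update: a precision-weighted average of the
    apparent height and the sex-specific population mean."""
    prior_var = sex_sd_cm ** 2
    noise_var = photo_noise_sd_cm ** 2
    weight_on_observation = prior_var / (prior_var + noise_var)
    return (weight_on_observation * apparent_cm
            + (1 - weight_on_observation) * sex_mean_cm)

# Round illustrative parameters (an 11 cm gap with 7 cm SDs is d ≈ 1.6):
# two photo-subjects who *look* exactly 172 cm tall get different best guesses.
print(estimate_true_height(172.0, sex_mean_cm=177.0))  # man:   ≈ 174.1 cm
print(estimate_true_height(172.0, sex_mean_cm=166.0))  # woman: ≈ 169.5 cm
```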
But this all happens subconsciously—and people _can't turn it off_. In the same study, when the authors height-matched the photographs (for every photo of a woman of a given height, there was another photo in the set of a man of the same height), _told_ the participants about the height-matching, _and_ offered a cash reward to the best height-judge, more than half of the stereotyping effect remained. It would seem that people can't consciously readjust their learned priors in response to verbal instructions about an artificial context.

The point is, once you understand at a _technical_ level that probabilistic reasoning about demographic features is both epistemically justified _and_ implicitly implemented as part of the way your brain processes information _anyway_, a moral theory that forbids it starts to look much less compelling. Maybe a Bayesian superintelligence could redesign the human brain to _not_ use Bayesian reasoning where contemporary egalitarians would find it ideologically disagreeable? But a world populated by such people, constitutionally incapable of reacting to statistical regularities that we, in our world, automatically take into account (without necessarily noticing that we do), would likely come off as creepy or uncanny.

[TODO: elaborate on a specific uncanniness]

[TODO: really need to address "But choice!" or "But not for psychology!" objections]

Of course, statistical discrimination on demographic features is only epistemically justified to exactly the extent that it helps _get the right answer_. Renormalized egalitarians can still be unhappy about the monstrous tragedies where I have moral property P but I _can't prove it to you_, so you instead guess _incorrectly_ that I don't, just because other people who look like me mostly don't and you don't have any better information to go on. Nelson _et al._ also found that when the people in the photographs were pictured sitting down, judgements of height depended much more on sex than when the photo-subjects were standing. This also makes Bayesian sense: a seated posture obscures the evidence about height, and the weaker the evidence, the more the optimal estimate has to lean on the prior, which differs by sex.
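In the toy normal model sketched above, with invented noise levels, the share of the optimal estimate that comes from the sex-specific mean rather than from what the judge can actually see grows as the observation gets noisier:

```python
def weight_on_sex_mean(sex_sd_cm=7.0, photo_noise_sd_cm=6.0):
    """Fraction of the normal-normal posterior mean that comes from the
    sex-specific prior rather than from the apparent height."""
    prior_var = sex_sd_cm ** 2
    noise_var = photo_noise_sd_cm ** 2
    return noise_var / (prior_var + noise_var)

# Invented noise levels: height is harder to judge when the subject is seated.
print(weight_on_sex_mean(photo_noise_sd_cm=6.0))   # standing: ≈ 0.42 from the prior
print(weight_on_sex_mean(photo_noise_sd_cm=12.0))  # sitting:  ≈ 0.75 from the prior
```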
[the other thing the ball-hiders can't get right: actually, IQ is morally valuable]