-Bayes's theorem (just [a few inferential steps away from the definition of conditional probability itself](https://en.wikipedia.org/wiki/Bayes%27_theorem#Derivation), barely worthy of being called a "theorem") states that for hypothesis H and evidence E, P(H|E) = P(E|H)P(H)/P(E). This is [the fundamental equation](https://www.readthesequences.com/An-Intuitive-Explanation-Of-Bayess-Theorem) [that governs](https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation) [all thought](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure). When you think you see a tree, that's really just your brain computing a high value for the probability of your sensory experiences given the hypothesis that there is a tree multiplied by the prior probability that there is a tree, as a fraction of all the possible worlds that could be generating your sensory experiences.
+Bayes's theorem (just [a few inferential steps away from the definition of conditional probability itself](https://en.wikipedia.org/wiki/Bayes%27_theorem#Derivation), barely worthy of being called a "theorem") states that for hypothesis H and evidence E, P(H|E) = P(E|H)P(H)/P(E). This is [the fundamental equation](https://www.readthesequences.com/An-Intuitive-Explanation-Of-Bayess-Theorem) [that governs](https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation) [all thought](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure). When you think you see a tree, that's really just your brain computing a high value for the probability of your sensory experiences given the hypothesis that there is a tree, multiplied by the prior probability that there is a tree, as a fraction of all the possible worlds that could be generating your sensory experiences.
+
+What goes for seeing trees goes the same for "treating individuals as individuals": the _process_ of getting to know someone as an individual involves your brain exploiting the statistical relationships between what you observe and what you're trying to learn about. If you see someone wearing an Emacs tee-shirt, you're going to assume that they _probably_ use Emacs, and asking them about their [dot-emacs file](https://www.gnu.org/software/emacs/manual/html_node/emacs/Init-File.html) is going to seem like a better casual conversation-starter than it would for someone wearing a non-Emacs shirt. Not _with certainty_—maybe they just found the shirt in a thrift store and thought it looked cool—but the shirt _shifts the probabilities_ that inform your decisionmaking.
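The shirt-update is just arithmetic; here is a quick numerical sketch, where every number is invented purely for illustration, not an estimate of real base rates:

```python
# Bayes's theorem applied to the Emacs-shirt example.
# All numbers are invented for illustration.

p_emacs = 0.05                 # P(H): prior probability of being an Emacs user
p_shirt_given_emacs = 0.10     # P(E|H): chance an Emacs user wears the shirt
p_shirt_given_not = 0.001      # P(E|not-H): chance a non-user wears it anyway

# P(E), by the law of total probability
p_shirt = (p_shirt_given_emacs * p_emacs
           + p_shirt_given_not * (1 - p_emacs))

# P(H|E) = P(E|H) * P(H) / P(E)
p_emacs_given_shirt = p_shirt_given_emacs * p_emacs / p_shirt
print(f"{p_emacs_given_shirt:.2f}")  # the shirt shifts 0.05 to about 0.84
```

Raising `p_shirt_given_not` to 0.05 models the thrift-store world where the shirt is only weakly tied to Emacs use, and the posterior drops below ten percent: same theorem, different distribution, different correct update.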
+
+The problem that Bayesian reasoning poses for naïve egalitarian moral intuitions is that, as far as I can tell, there's no _philosophically principled_ reason for "probabilistic update about someone's psychology on the evidence that they're wearing an Emacs shirt" to be treated _fundamentally_ differently from "probabilistic update about someone's psychology on the evidence that she's female". These are of course different questions, but to a Bayesian reasoner (an inhuman mathematical abstraction for _getting the right answer_ and nothing else), they're the same _kind_ of question: the correct update to make is an _empirical_ matter that depends on the actual distribution of psychological traits among Emacs-shirt-wearers and among women. (In the possible world where _most_ people wear tee-shirts that they found in the thrift store and thought looked cool, without knowing what they mean, the "Emacs shirt → Emacs user" inference would usually be wrong.) But to a naïve egalitarian, judging someone on their expressed affinity for Emacs is good, but judging someone on their sex is _bad and wrong_.