-Murray concludes, "Above all, nothing we learn will threaten human equality properly understood," and quotes Steven Pinker: "Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group."
-
-I [_strongly_ agree with](/2017/Dec/theres-a-land-that-i-see-or-the-spirit-of-intervention/) the _moral sentiment_, the underlying [axiology](https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/): this seems like a good and wise thing to say.
-
-And yet I have been ... trained. Trained to instinctively apply my full powers of analytical rigor and skepticism to even that which is most sacred. Because my true loyalty is to the axiology—to the process underlying my _current best guess_ as to that which is most sacred. If that which was believed to be most sacred turns out to not be entirely coherent ... then we might have some philosophical work to do, to [_reformulate_ the sacred moral ideal in a way that's actually coherent](https://arbital.greaterwrong.com/p/rescue_utility).
-
-"Nothing we learn will threaten _X_ _properly understood_." When you elide the specific assignment _X_ := "human equality", the _form_ of this statement is kind of suspicious, right? Why "properly understood"? It would be weird to say, "Nothing we learn will threaten the homotopy groups of an _n_-sphere _properly understood_."
-
-This kind of [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) seems like the kind of thing you would only invent if you _were_ subconsciously worried about _X_ being threatened by new discoveries, and wanted to protect your ability to backtrack and re-gerrymander your definition of _X_ to protect your existing beliefs.
-
-It gets worse. Intuitively, "The moral principle that individuals should not be judged or constrained by the average properties of their group" seems self-evident—one cries out at the _monstrous injustice_ of the individual being oppressed on the basis of mere stereotypes of what other people who _look_ like them might statistically be like.
-
-I fear my training does not permit me to take the moral principle _literally_ as stated. The problem is _technical_ in nature: something that comes up when you try to understand people on a cognitive-scientific level, the way an AI researcher would understand her creations. (Even while "treat individuals as individuals" might be a very good _English sentence_ to tell someone if you wanted them to behave ethically and didn't expect them to understand the technical problem I'm explaining.)
-
-When you "treat individuals as individuals", you do so on the basis of evidence about that individual's traits. If you see someone wearing an Emacs tee-shirt, you'll assume they probably use Emacs, and you'll make, and make use of, all sorts of other implicit probabilistic predictions about them, in the sense that you [anticipate](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) or dis-anticipate different behaviors from them than you would of someone who was _not_ wearing an Emacs tee-shirt, and those anticipations guide your decisions.
-
-And there's no _principled_ distinction between conditioning on "wears an Emacs tee-shirt" and conditioning on "is female": as far as the mathematics of probabilistic inference is concerned, both are just evidence that shifts your predictions about an individual's other traits.
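A minimal sketch of the point, using Bayes' rule with made-up numbers (all priors and likelihoods here are hypothetical illustrations, not real statistics): the update machinery is _identical_ whether the evidence is a tee-shirt or a demographic category.

```python
def posterior(prior: float, likelihood: float, likelihood_if_not: float) -> float:
    """P(trait | evidence) via Bayes' rule.

    prior:            P(trait)
    likelihood:       P(evidence | trait)
    likelihood_if_not: P(evidence | not trait)
    """
    marginal = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / marginal

# Conditioning on "wears an Emacs tee-shirt" (invented rates):
p_uses_emacs = posterior(prior=0.05, likelihood=0.9, likelihood_if_not=0.001)

# Conditioning on "is female" for some hypothetical trait (equally invented rates):
p_trait = posterior(prior=0.3, likelihood=0.5, likelihood_if_not=0.2)

# Same function, same math—the formula doesn't care which kind of evidence it is.
print(round(p_uses_emacs, 3), round(p_trait, 3))
```

The function has no branch that asks whether the evidence is "individual" or "group-statistical"; that distinction, whatever its moral weight, isn't visible at this level of description.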
+That's why we're not smart enough to want a discipline of Actual Social Science. The benefits of having a collective understanding of human behavior—a _shared_ map—could be enormous, but beliefs about our own qualities, and those of socially-salient groups to which we belong (_e.g._, sex, race, and class) are _exactly_ those for which we face the largest incentive to deceive and self-deceive. Counterintuitively, I might not _want_ you to have accurate beliefs about the value of my friendship, for the same reason that I might not want you to have accurate beliefs about the value of my used car. That makes it a lot harder not just to _get the right answer for the right reasons_, but also to _trust_ that your fellow so-called "scholars" are trying to get the right answer, rather than trying to sneak self-serving lies into the shared map in order to fuck you over.