+... and that's the book review that I would _prefer_ to write. A science review of a science book, for science nerds: the kind of thing that would have no reason to draw your attention if you're not _genuinely interested_ in Mahalanobis _D_ effect sizes or adaptive introgression or Falconer's formulas, for their own sake, or (better) for the sake of [compressing the length of the message needed to encode your observations](https://en.wikipedia.org/wiki/Minimum_message_length).
+
+But that's not why you're reading this. That's not why Murray wrote the book. That's not even why _I'm_ writing this. We should hope—emphasis on the _should_—for a discipline of Actual Social Science, whose practitioners strive to report the truth, the whole truth, and nothing but the truth, with the same passionately dispassionate objectivity they might bring to the study of beetles, or algebraic topology—or that an alien superintelligence might bring to the study of humans.
+
+We do not have a discipline of Actual Social Science. Possibly because we're not smart enough to do it, but perhaps more so because we're not smart enough to _want_ to do it. No one has an incentive to lie about the homotopy groups of an _n_-sphere. (The _k_<sup>th</sup> group is trivial for _k_ < _n_, isomorphic to ℤ for _k_ = _n_, and notoriously complicated after that. _You're welcome._) If you're asking questions about homotopy groups _at all_, you almost certainly care about getting _the right answer for the right reasons_. At most, you might be biased towards believing your own conjectures in the optimistic hope of achieving eternal algebraic-topology fame and glory, like Ruth Lawrence. But nothing about algebraic topology is going to be _morally threatening_ in a way that will leave you fearing that your ideological enemies have seized control of the publishing houses to plant lies in the textbooks to fuck with your head, or sobbing that a malicious God created the universe as a place of evil.
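+
+(Stated precisely, for the curious: this is just the standard textbook fact, set down so the parenthetical above doesn't have to carry it.)
+
+```latex
+% Homotopy groups of the n-sphere, as cited above (needs amsmath/amssymb):
+\[
+\pi_k(S^n) \cong
+\begin{cases}
+  0          & \text{if } k < n \\
+  \mathbb{Z} & \text{if } k = n
+\end{cases}
+\]
+% For k > n the groups are famously irregular; e.g.,
+% \pi_3(S^2) \cong \mathbb{Z} and \pi_4(S^3) \cong \mathbb{Z}/2.
+```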
+
+Okay, maybe that was a bad example; topology in general really is kind of a mindfuck. (Remind me to tell you about the long line, which is like the line of real numbers, except much longer.)
+
+In any case, as soon as we start to ask questions _about humans_—and far more so _identifiable groups_ of humans—we end up entering the domain of _politics_.
+
+We really _shouldn't_. Everyone _should_ perceive a common interest in true beliefs—maps that reflect the territory, [simple theories](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor) that [predict our observations](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences)—because beliefs that make accurate predictions are _useful_ for making good decisions. That's what "beliefs" are _for_, evolutionarily speaking: my analogues in humanity's environment of evolutionary adaptedness were better off believing that (say) the berries from some bush were good to eat if and only if the berries were _actually_ good to eat. If my analogues unduly optimistically thought the berries were good when they actually weren't, they'd get sick (and lose fitness), but if they unduly pessimistically thought the berries were not good when they actually were, they'd miss out on valuable calories (and fitness).
+
+(Okay, this story is actually somewhat complicated by the fact that [evolution didn't "figure out" how to build brains](https://www.lesswrong.com/posts/gTNB9CQd5hnbkMxAG/protein-reinforcement-and-dna-consequentialism) that [keep track of probability and utility separately](https://plato.stanford.edu/entries/decision-theory/): my analogues in the environment of evolutionary adaptedness might also have been better off assuming that a rustling in the bush was a tiger, even if it usually wasn't a tiger, because failing to detect actual tigers was so much more costly than erroneously "detecting" an imaginary tiger. But let this pass.)
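+
+(To put the asymmetry in numbers, here is a toy expected-cost sketch in Python. The figures are invented purely for illustration; only their relative sizes matter.)
+
+```python
+# Toy signal-detection arithmetic for the "rustle in the bush" case.
+# All numbers are made up; the point is the asymmetry of the costs.
+
+P_TIGER = 0.05       # rustles are usually just wind
+COST_EATEN = 100.0   # fitness cost of ignoring a real tiger
+COST_FLEEING = 1.0   # fitness cost of a false alarm (wasted calories)
+
+# Expected cost of each simple policy, per rustle heard:
+always_flee = COST_FLEEING              # pay the small cost every time
+always_ignore = P_TIGER * COST_EATEN    # rarely pay the enormous cost
+
+print(f"always flee:   expected cost {always_flee:.2f}")    # 1.00
+print(f"always ignore: expected cost {always_ignore:.2f}")  # 5.00
+
+# Fleeing wins even though the tiger is usually imaginary, so a
+# hair-trigger "detector" can be fitter than a well-calibrated one.
+```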
+
+The problem is that, while any individual should always want true beliefs for _themselves_ in order to navigate the world, you might want _others_ to have false beliefs in order to trick them into _mis_-navigating the world in a way that benefits you. If I'm trying to sell you a used car, then—counterintuitively—I might not _want_ you to have accurate beliefs about the car, if that will reduce the sale price or result in no deal. If our analogues in the environment of evolutionary adaptedness regularly faced structurally similar situations, and if it's expensive to maintain two sets of beliefs (the real map for ourselves, and a fake map for our victims), we might end up with a tendency not just to be lying motherfuckers who deceive others, but also to _self_-deceive in situations where the fitness payoffs of tricking others outweighed those of being clear-sighted ourselves.
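+
+(The same kind of toy arithmetic applies here. With invented numbers again: if keeping two maps is costly enough, the self-deceiver beats the conscious liar.)
+
+```python
+# Toy comparison of deception strategies. All payoffs are invented;
+# the point is only the trade-off structure described above.
+
+PAYOFF_OTHERS_FOOLED = 10.0  # fitness gained by deceiving others
+COST_TWO_MAPS = 6.0          # overhead of tracking the truth and the lie
+COST_OWN_ERRORS = 3.0        # fitness lost by navigating on the fake map
+
+conscious_liar = PAYOFF_OTHERS_FOOLED - COST_TWO_MAPS   # 4.0
+self_deceiver = PAYOFF_OTHERS_FOOLED - COST_OWN_ERRORS  # 7.0
+
+print(f"conscious liar: net payoff {conscious_liar:.1f}")
+print(f"self-deceiver:  net payoff {self_deceiver:.1f}")
+
+# With these (made-up) numbers, believing your own lie out-competes
+# consciously maintaining a fake map alongside the real one.
+```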
+
+That's why we're not smart enough to want a discipline of Actual Social Science. The benefits of having a collective understanding of human behavior—a _shared_ map—could be enormous, but beliefs about our own qualities, and those of the socially salient groups to which we belong (_e.g._, sex, race, and class), are _exactly_ those for which we face the largest incentive to deceive and self-deceive. Counterintuitively, I might not _want_ you to have accurate beliefs about the value of my friendship, for the same reason that I might not want you to have accurate beliefs about the value of my used car. That makes it a lot harder not just to _get the right answer for the right reasons_, but also to _trust_ that your fellow so-called "scholars" are trying to get the right answer, rather than trying to sneak self-serving lies into the shared map in order to fuck you over.