-We do not have a discipline of Actual Social Science. Possibly because we're not smart enough to do it, but perhaps more so because we're not smart enough to _want_ to do it. Not one has an incentive to lie about the homotopy groups of an _n_-sphere. (The <em>k</em><sup>th</sup> group is trivial for _k_ < _n_, and isomorphic to the integers thereafter. _You're welcome._) If you're asking questions about homotopy groups _at all_, you almost certainly care about getting the _right answer for the right reasons_.
+We do not have a discipline of Actual Social Science. Possibly because we're not smart enough to do it, but perhaps more so because we're not smart enough to _want_ to do it. No one has an incentive to lie about the homotopy groups of an _n_-sphere. If you're asking questions about homotopy groups _at all_, you almost certainly care about getting _the right answer for the right reasons_. At most, you might be biased towards believing your own conjectures in the optimistic hope of achieving eternal algebraic-topology fame and glory, like Ruth Lawrence. But nothing about algebraic topology is going to be [_morally threatening_](/2019/Jan/interlude-xvi/) in a way that will leave you fearing that your ideological enemies have seized control of the publishing-houses to plant lies in the textbooks to fuck with your head, or sobbing that a malicious God created the universe as a place of evil.
+
+Okay, maybe that was a bad example; topology in general really is the kind of mindfuck that might be the design of an adversarial agency. (Remind me to tell you about the long line, which is like the line of real numbers, except much longer.)
+
+In any case, as soon as we start to ask questions _about humans_—and far more so _identifiable groups_ of humans—we end up entering the domain of _politics_.
+
+We really _shouldn't_. Everyone _should_ perceive a common interest in true beliefs—maps that reflect the territory, [simple theories](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor) that [predict our observations](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences)—because beliefs that make accurate predictions are _useful_ for making good decisions. That's what "beliefs" are _for_, evolutionarily speaking: my analogues in humanity's environment of evolutionary adaptedness were better off believing that (say) the berries from some bush were good to eat if and only if the berries were _actually_ good to eat. If my analogues unduly optimistically thought the berries were good when they actually weren't, they'd get sick (and lose fitness), but if they unduly pessimistically thought the berries were not good when they actually were, they'd miss out on valuable calories (and fitness).