-"Nothing we learn will threaten _X_ _properly understood_." When you elide the specific assignment _X_ := "human equality", the _form_ of this statement is kind of suspicious, right? Why "properly understood"? It would be weird to say, "Nothing we learn will threaten the homotopy groups of an _n_-sphere _properly understood_." This "properly understood" qualifier seems like the sort of thing you would only say if you _were_ subconsciously worried about _X_ being threatened by new discoveries, and
+And yet I have been ... trained. Trained to instinctively apply my full powers of analytical rigor and skepticism to even that which is most sacred. Because my true loyalty is to the axiology—to the process underlying my _current best guess_ as to that which is most sacred. If that which was believed to be most sacred turns out to not be entirely coherent ... then we might have some philosophical work to do, to [_reformulate_ the sacred moral ideal in a way that's actually coherent](https://arbital.greaterwrong.com/p/rescue_utility).
+
+"Nothing we learn will threaten _X_ _properly understood_." When you elide the specific assignment _X_ := "human equality", the _form_ of this statement is kind of suspicious, right? Why "properly understood"? It would be weird to say, "Nothing we learn will threaten the homotopy groups of an _n_-sphere _properly understood_."
+
+This kind of [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) seems like the kind of thing you would only invent if you _were_ subconsciously worried about _X_ being threatened by new discoveries, and wanted to protect your ability to backtrack and re-gerrymander your definition of _X_ to protect your existing beliefs.
+
+It gets worse. Intuitively, "The moral principle that individuals should not be judged or constrained by the average properties of their group" seems self-evident—one cries out at the _monstrous injustice_ of the individual being oppressed on the basis of mere stereotypes of what other people who _look_ like them might statistically be like.
+
+I fear my training does not permit me to take the moral principle _literally_ as stated. The problem is _technical_ in nature: something that comes up when you try to understand people on a cognitive-scientific level, the way an AI researcher would understand her creations. (Even so, "treat individuals as individuals" might be a very good _English sentence_ to tell someone if you wanted them to behave ethically and didn't expect them to understand the technical problem I'm explaining.)
+
+When you "treat individuals as individuals", you do so on the basis of evidence about that individual's traits. If you see someone wearing an Emacs tee-shirt, you'll assume they probably use Emacs, and you'll make, and act on, all sorts of other implicit probabilistic predictions about them, in the sense that you [anticipate](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) or dis-anticipate different behaviors from them than you would from someone who was _not_ wearing an Emacs tee-shirt, and those anticipations guide your decisions.
+
+But this means that "treating someone as an individual" is itself an exercise in conditional probability. The inference from "is wearing an Emacs tee-shirt" to "probably uses Emacs" has the same mathematical form as the inference from "is female" to whatever traits are statistically associated with being female. The probability axioms make no principled distinction between the two predicates: if conditioning on the shirt counts as treating someone as an individual, it's not clear why conditioning on sex doesn't.
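A minimal sketch of the point in code (all the probabilities here are invented for illustration): Bayes's theorem is one formula, and it runs identically whether the predicate being conditioned on is "wears an Emacs tee-shirt" or "is female".

```python
# Bayes's theorem: P(trait | evidence) =
#   P(evidence | trait) * P(trait) / P(evidence)
# The formula doesn't know or care what the predicates "mean".

def posterior(p_evidence_given_trait, p_trait, p_evidence_given_not_trait):
    """Probability of the trait after observing the evidence."""
    p_not_trait = 1 - p_trait
    # Total probability of the evidence, marginalizing over the trait.
    p_evidence = (p_evidence_given_trait * p_trait
                  + p_evidence_given_not_trait * p_not_trait)
    return p_evidence_given_trait * p_trait / p_evidence

# Invented numbers: a 5% base rate of Emacs use, with the shirt
# being much more likely among users than non-users.
p_emacs_given_shirt = posterior(0.6, 0.05, 0.001)

# Invented numbers: some trait with a modest statistical sex difference.
p_trait_given_female = posterior(0.3, 0.2, 0.15)

print(round(p_emacs_given_shirt, 3))   # a strong update from "individual" evidence
print(round(p_trait_given_female, 3))  # a weak update from "demographic" evidence
```

The sketch shows only that `posterior` is one function: nothing in the arithmetic distinguishes "individual" evidence from "demographic" evidence; both are conditionalizations, differing in strength but not in kind.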