+This is certainly an _improvement_ over the original text without the note, but I took the use of the national-borders metaphor here to mean that Scott still hadn't really gotten my point about there being laws of thought underlying categorization: mathematical principles governing _how_ definition choices can muddle communication or be deceptive. (But that wasn't surprising; [by Scott's own admission, he's not a math guy](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/).)
+
+Category "boundaries" are a useful _visual metaphor_ for explaining the cognitive function of categorization: you imagine a "boundary" in configuration space containing all the things that belong to the category.
+
+If you have the visual metaphor, but you don't have the math, you might think that there's nothing intrinsically wrong with squiggly or discontinuous category "boundaries", just as there's nothing intrinsically wrong with Alaska not being part of the contiguous U.S. states. It may be _inconvenient_ that you can't drive from Alaska to Washington without going through Canada, and we have to deal with the consequences of that, but there's no sense in which it's _wrong_ that the borders are drawn that way: Alaska really is governed by the United States.
+
+But if you _do_ have the math, a moment of introspection will convince you that the analogy between category "boundaries" and national borders is not a particularly deep or informative one.
+
+A two-dimensional political map tells you which areas of the Earth's surface are under the jurisdiction of what government.
+
+In contrast, category "boundaries" tell you which regions of a very high-dimensional configuration space correspond to a word/concept, which is useful _because_ of the probabilistic inferences that structure supports: you can use your observations of some aspects of an entity (some of the coordinates of a point in configuration space) to infer category membership, and then use category membership to make predictions about aspects that you haven't yet observed.
+
+But the trick only works to the extent that the category is a regular, non-squiggly region of configuration space: if you know that egg-shaped objects tend to be blue, and you see a black-and-white photo of an egg-shaped object, you can get _close_ to picking out its color on a color wheel. But if egg-shaped objects tend to be blue _or_ green _or_ red _or_ gray, you wouldn't know where to point on the color wheel.
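+A toy calculation (my own illustration, not anything from the original exchange; it treats hue distance as linear rather than circular, which is a simplification of a real color wheel) makes the asymmetry concrete:
+
```python
# Hue in degrees on a color wheel (toy data; distance treated as linear
# rather than circular, which is a simplification).
clustered_hues = [200, 210, 220, 230]  # all shades of blue
scattered_hues = [240, 120, 0, 30]     # blue, green, red, orange

def best_guess_error(hues):
    """Average distance from the best single guess (the mean hue)."""
    guess = sum(hues) / len(hues)
    return sum(abs(h - guess) for h in hues) / len(hues)

print(best_guess_error(clustered_hues))  # 10.0: any member is near the guess
print(best_guess_error(scattered_hues))  # 82.5: no single guess is close
```
+
+When the category's members cluster, the conditional best guess lands close to every member; when they're scattered, no point on the wheel is a good guess.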
+
+The analogous algorithm applied to national borders on a political map would be to observe the longitude of a place, use that to guess what country the place is in, and then use the country to guess the latitude—which isn't typically what people _do_ with maps. Category "boundaries" and national borders might both be _illustrated_ in a diagram as a closed region in two-dimensional space, but philosophically, they're very different entities. The fact that Scott Alexander was appealing to national borders to explain why gerrymandered categories were allegedly okay showed that he Didn't Get It.
+
+I still had some deeper philosophical problems to resolve, though. If squiggly categories were less useful for inference, why would someone _want_ a squiggly category boundary? Someone who said, "Ah, but I assign _higher utility_ to doing it this way", had to be messing with you. Where would such a utility function come from? Intuitively, it had to be precisely _because_ squiggly boundaries were less useful for inference; the only reason you would _realistically_ want to do that would be to commit fraud, to pass off pyrite as gold by redefining the word "gold."
+
+That was my intuition. To formalize it, I wanted some sensible numerical quantity that would be maximized by using "nice" categories and trashed by gerrymandering. [Mutual information](https://en.wikipedia.org/wiki/Mutual_information) was the obvious first guess, but that wasn't it, because mutual information lacks a "topology", a notion of _closeness_ that would make some false predictions better than others by virtue of being "close".
+
+Suppose the outcome space of _X_ is `{H, T}` and the outcome space of _Y_ is `{1, 2, 3, 4, 5, 6, 7, 8}`. I _wanted_ to say that if observing _X_=`H` concentrates _Y_'s probability mass on `{1, 2, 3}`, that's _more useful_ than if it concentrates _Y_ on `{1, 5, 8}`—but that would require the numbers in _Y_ to be _numbers_ rather than opaque labels; as far as elementary information theory was concerned, mapping eight states to three states reduced the entropy from lg 8 = 3 to lg 3 ≈ 1.58 no matter "which" three states they were.
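+A minimal sketch (again my own, just checking the arithmetic) shows that the entropy reduction really is blind to which three states survive:
+
```python
import math

# Y is uniform over {1, ..., 8}: entropy is log2(8) = 3 bits.
# Observing X = H narrows Y to three equally likely states.
before = math.log2(8)
after = math.log2(3)  # same whether the survivors are {1, 2, 3} or {1, 5, 8}

print(before - after)  # ≈ 1.415 bits gained, either way
```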
+
+How could I make this rigorous? Did I want to be talking about the _variance_ of my features conditional on category membership? Was "connectedness" intrinsically what I wanted, or was connectedness only important because it cut down the number of possibilities? (There are 8!/(6!2!) = 28 ways to choose two elements from `{1..8}`, but only 7 ways to choose two contiguous elements.) I thought connectedness _was_ intrinsically important, because we didn't just want _few_ things; we wanted things that were _similar enough to make similar decisions about_.
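+The counting claim is easy to check (a quick sketch of my own):
+
```python
from itertools import combinations

outcomes = list(range(1, 9))
all_pairs = list(combinations(outcomes, 2))           # any two elements
contiguous = [(a, b) for a, b in all_pairs if b == a + 1]

print(len(all_pairs))   # 28, i.e., 8!/(6!·2!)
print(len(contiguous))  # 7: (1,2), (2,3), ..., (7,8)
```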
+
+I put the question to a few friends (Subject: "rubber duck philosophy"), and Jessica said that my identification of the variance as the key quantity sounded right: it amounted to the expected squared error of someone trying to guess the values of the features given the category. It was okay that this wasn't a purely information-theoretic criterion, because for problems involving guessing a numeric quantity, bits that get you closer to the right answer were more valuable than bits that didn't.
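+As a sketch of what that criterion looks like in miniature (my own toy formalization, not Jessica's wording): score a category by the expected squared error of guessing a feature from the conditional mean, which is just the within-category variance. Compact and gerrymandered regions of the same size come out very differently:
+
```python
def within_category_variance(values):
    """Expected squared error of guessing the conditional mean."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

compact = [1, 2, 3]    # contiguous region of the outcome space
squiggly = [1, 5, 8]   # same number of states, gerrymandered

print(within_category_variance(compact))   # ≈ 0.67
print(within_category_variance(squiggly))  # ≈ 8.22
```
+
+Both categories carry the same number of bits, but the compact one leaves you with far less residual error about the feature's value.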
+
+------
+
+[TODO: blowing up at a stray remark; robot cult to stop tricking me]
+
+[TODO: "out of patience" email]
+
+> To: Eliezer Yudkowsky <[redacted]>
+> Cc: Anna Salamon <[redacted]>
+> Date: Sunday 13 September 2020 2:24 _a.m._
+> Subject: out of patience
+>
+>> "I could beg you to do it in order to save me. I could beg you to do it in order to avert a national disaster. But I won't. These may not be valid reasons. There is only one reason: you must say it, because it is true."
+>> —_Atlas Shrugged_ by Ayn Rand
+>
+> Dear Eliezer (cc Anna as mediator):
+>
+> Sorry, I'm getting _really really_ impatient (maybe you saw my impulsive Tweet-replies today; and I impulsively called Anna today; and I've spent the last few hours drafting an even more impulsive hysterical-and-shouty potential _Less Wrong_ post; but now I'm impulsively deciding to email you in the hopes that I can withhold the hysterical-and-shouty post in favor of a lower-drama option of your choice): **is there _any_ way we can resolve the categories dispute _in public_?! Not** any object-level gender stuff which you don't and shouldn't care about, **_just_ the philosophy-of-language part.**
+>
+> My grievance against you is *very* simple. [You are *on the public record* claiming that](https://twitter.com/ESYudkowsky/status/1067198993485058048):
+>
+>> you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.
+>
+> I claim that this is _false_. **I think I _am_ standing in defense of truth when I insist on a word, brought explicitly into question, being used with some particular meaning, when I have an _argument_ for _why_ my preferred usage does a better job of "carving reality at the joints" and the one bringing my usage into question doesn't have such an argument. And in particular, "This word usage makes me sad" doesn't count as a relevant argument.** I [agree that words don't have intrinsic ontologically-basic meanings](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution), but precisely _because_ words don't have intrinsic ontologically-basic meanings, there's no _reason_ to challenge someone's word usage except _because_ of the hidden probabilistic inference it embodies.
+>
+> Imagine one day David Gerard of /r/SneerClub said, "Eliezer Yudkowsky is a white supremacist!" And you replied: "No, I'm not! That's a lie." And imagine E.T. Jaynes was still alive and piped up, "You are _ontologically confused_ if you think that's a false assertion. You're not standing in defense of truth if you insist on words, such as _white supremacist_, brought explicitly into question, being used with some particular meaning." Suppose you emailed Jaynes about it, and he brushed you off with, "But I didn't _say_ you were a white supremacist; I was only targeting a narrow ontology error." In this hypothetical situation, I think you might be pretty upset—perhaps upset enough to form a twenty-one month grudge against someone whom you used to idolize?
+>
+> I agree that pronouns don't have the same function as ordinary nouns. However, **in the English language as actually spoken by native speakers, I think that gender pronouns _do_ have effective "truth conditions" _as a matter of cognitive science_.** If someone said, "Come meet me and my friend at the mall; she's really cool and you'll like her", and then that friend turned out to look like me, **you would be surprised**.
+>
+> I don't see the _substantive_ difference between "You're not standing in defense of truth (...)" and "I can define a word any way I want." [...]
+>
+> [...]
+>
+> As far as your public output is concerned, it *looks like* you either changed your mind about how the philosophy of language works, or you think gender is somehow an exception. If you didn't change your mind, and you don't think gender is somehow an exception, is there some way we can _get that on the public record **somewhere**?!_
+>
+> As an example of such a "somewhere", I had asked you for a comment on my explanation, ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) (with non-politically-hazardous examples about dolphins and job titles) [... redacted ...] I asked for a comment from Anna, and at first she said that she would need to "red team" it first (because of the political context), and later she said that she was having difficulty for other reasons. Okay, the clarification doesn't have to be on _my_ post. **I don't care about credit! I don't care whether or not anyone is sorry! I just need this _trivial_ thing settled in public so that I can stop being in pain and move on with my life.**
+>
+> As I mentioned in my Tweets today, I have a longer and better explanation than "... Boundaries?" mostly drafted. (It's actually somewhat interesting; the logarithmic score doesn't work as a measure of category-system goodness because it can only reward you for the probability you assign to the _exact_ answer, but we _want_ "partial credit" for almost-right answers, so the expected squared error is actually better here, contrary to what you said in [the "Technical Explanation"](https://yudkowsky.net/rational/technical/) about what Bayesian statisticians do). [... redacted]
+>
+> The *only* thing I've been trying to do for the past twenty-one months is make this simple thing established "rationalist" knowledge:
+>
+> (1) For all nouns _N_, you can't define _N_ any way you want, [for at least 37 reasons](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
+>
+> (2) *Woman* is such a noun.
+>
+> (3) Therefore, you can't define the word *woman* any way you want.
+>
+> (Note, **this is _totally compatible_ with the claim that trans women are women, and trans men are men, and nonbinary people are nonbinary!** It's just that **you have to _argue_ for why those categorizations make sense in the context you're using the word**, rather than merely asserting it with an appeal to arbitrariness.)
+>
+> This is **literally _modus ponens_**. I don't understand how you expect people to trust you to save the world with a research community that _literally cannot perform modus ponens._
+>
+> [redacted ...] See, I thought you were playing on the chessboard of _being correct about rationality_. Such that, if you accidentally mislead people about your own philosophy of language, you could just ... issue a clarification? I and Michael and Ben and Sarah and [redacted] _and Jessica_ wrote to you about this and explained the problem in _painstaking_ detail, **and you stonewalled us.** Why? **Why is this so hard?!**
+>
+> [redacted]
+>
+> No. The thing that's been driving me nuts for twenty-one months is that <strong><em><span style="color: #F00000;">I expected Eliezer Yudkowsky to tell the truth</span></em></strong>. I remain,
+>
+> Your heartbroken student,
+> Zack M. Davis