+> To: Eliezer Yudkowsky <[redacted]>
+> Cc: Anna Salamon <[redacted]>
+> Date: Sunday 13 September 2020 11:02 _a.m._
+> Subject: Re: out of patience
+>
+> [... redacted] The sinful and corrupted part wasn't the _initial_ Tweets; the sinful and corrupted part is this **bullshit stonewalling** when your Twitter followers and me and Michael and Ben and Sarah and [redacted] and Jessica tried to point out the problem. I've _never_ been arguing against your private universe [... redacted]; the thing I'm arguing against in ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) (and **my [unfinished draft sequel](https://github.com/zackmdavis/Category_War/blob/cefa98c3abe/unnatural_categories_are_optimized_for_deception.md)**, although that's more focused on what Scott wrote) is the **_actual text_ you _actually published_, not your private universe.**
+>
+> [... redacted] you could just **publicly clarify your position on the philosophy of language** the way an intellectually-honest person would do if they wanted their followers to have correct beliefs about the philosophy of language?!
+>
+> You wrote:
+>
+>> [Using language in a way](https://twitter.com/ESYudkowsky/status/1067291243728650243) _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying.
+>
+>> [Now, maybe as a matter of policy](https://twitter.com/ESYudkowsky/status/1067294823000887297), you want to make a case for language being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to.
+>
+> The problem with "it's a policy debate about how to use language" is that it completely elides the issue that some ways of using language _perform better_ at communicating information, such that **attempts to define new words or new senses of _existing_ words should come with a justification for why the new sense is _useful for conveying information_, and that _is_ a matter of Truth.** Without such a justification, it's hard to see why you would _want_ to redefine a word _except_ to mislead people with strategic equivocation.
+>
+> It is _literally true_ that Eliezer Yudkowsky is a white supremacist (if I'm allowed to define "white supremacist" to include "someone who [once linked to the 'Race and intelligence' _Wikipedia_ page](https://www.lesswrong.com/posts/faHbrHuPziFH7Ef7p/why-are-individual-iq-differences-ok) in a context that implied that it's an empirical question").
+>
+> It is _literally true_ that 2 + 2 = 6 (if I'm allowed to define '2' as •••-many).
+>
+> You wrote:
+>
+>> [The more technology advances, the further](https://twitter.com/ESYudkowsky/status/1067490362225156096) we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts.
+>
+> That's kind of like defining Solomonoff induction, and then saying, "Having said this, we've built AGI." No, you haven't said all the facts! Configuration space is _very high-dimensional_; we don't have _access_ to the individual points. Trying to specify the individual points ("say all the facts") would be like what you wrote about in ["Empty Labels"](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)—"not just that I can vary the label, but that I can get along just fine without any label at all." Since that's not possible, we need to group points in the space together so that we can use the coordinates we _have_ observed to make probabilistic inferences about the coordinates we haven't. But there are _mathematical laws_ governing how well different groupings perform, and those laws _are_ a matter of Truth, not a mere policy debate.
+>
+> [... redacted ...]
+>
+> But if behavior at equilibrium isn't deceptive, there's just _no such thing as deception_; I wrote about this on Less Wrong in ["Maybe Lying Can't Exist?!"](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) (drawing on the academic literature about sender–receiver games). I don't think you actually want to bite that bullet?
+>
+> **In terms of information transfer, there is an isomorphism between saying "I reserve the right to lie 5% of the time about whether something is a member of category C" and adopting a new definition of C that misclassifies 5% of instances with respect to the old definition.**
+>
+> Like, I get that you're ostensibly supposed to be saving the world and you don't want randos yelling at you in your email about philosophy. But **I thought the idea was that we were going to save the world [_by means of_ doing unusually clear thinking?](https://arbital.greaterwrong.com/p/executable_philosophy)**
+>
+> [Scott wrote](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (with an irrelevant object-level example redacted): "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life." (Okay, he added a clarification after I spent Christmas yelling at him; but I think he's still substantially confused in ways that I address in my forthcoming draft post.)
+>
+> [You wrote](https://twitter.com/ESYudkowsky/status/1067198993485058048): "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning."
+>
+> I think I've argued pretty extensively this is wrong! **I'm eager to hear counterarguments if you think I'm getting the philosophy wrong.** But ... **"people live in different private universes" is _not a counterargument_.**
+>
+> **It makes sense that you don't want to get involved in gender politics. That's why I wrote "... Boundaries?" using examples about dolphins and job titles, and why my forthcoming post has examples about bleggs and artificial meat.** This shouldn't be _expensive_ to clear up?! This should take like, five minutes? (I've spent twenty-one months of my life on this.) Just one little _ex cathedra_ comment on Less Wrong or _somewhere_ (**it doesn't have to be my post, if it's too long or I don't deserve credit or whatever**; I just think the right answer needs to be public) affirming that you haven't changed your mind about 37 Ways Words Can Be Wrong? Unless you _have_ changed your mind, of course?
+>
+> I can imagine someone observing this conversation objecting, "[...] why are you being so greedy? We all know the _real_ reason you want to clear up this philosophy thing in public is because it impinges on your gender agenda, but Eliezer _already_ threw you a bone with the ['there's probably more than one type of dysphoria' thing.](https://twitter.com/ESYudkowsky/status/1108277090577600512) That was already a huge political concession to you! That makes you _more_ than even; you should stop being greedy and leave Eliezer alone."
+>
+> But as [I explained in my reply](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/), I think that argument is _wrong_: the whole mindset of public-arguments-as-political-favors is _crazy_. **The fact that we're having this backroom email conversation at all (instead of just being correct about the philosophy of language on Twitter) is _corrupt_!** I don't want to strike a deal in a political negotiation; I want _shared maps that reflect the territory_. I thought that's what this "rationalist community" thing was supposed to do? Is that not a thing anymore? If we can't do the shared-maps thing when there's any hint of political context (such that now you _can't_ clarify the categories thing, even as an abstract philosophy issue about bleggs, because someone would construe that as taking a side on whether trans people are Good or Bad), that seems really bad for our collective sanity?! (Where collective sanity is potentially useful for saving the world, but is at least a quality-of-life improver if we're just doomed to die in 15 years no matter what.)
+>
+> **I really used to look up to you.** In my previous interactions with you, I've been tightly [cognitively constrained](http://www.hpmor.com/chapter/57) by hero-worship. I was already so starstruck that _Eliezer Yudkowsky knows who I am_, that the possibility that _Eliezer Yudkowsky might disapprove of me_, was too terrifying to bear. I really need to get over that, because it's bad for me, and [it's _really_ bad for you](https://www.lesswrong.com/posts/cgrvvp9QzjiFuYwLi/high-status-and-stupidity-why). I remain,
+>
+> Your heartbroken student,
+> Zack M. Davis
+
+[TODO: Sep 2020 categories clarification from EY—victory?!
+https://www.facebook.com/yudkowsky/posts/10158853851009228
+_ex cathedra_ statement that gender categories are not an exception to the rule, only 1 year and 8 months after asking for it
+]
+
+[TODO: "Unnatural Categories Are Optimized for Deception"
+
+Abram was right
+
+the fact that it didn't means that not tracking it can be an effective AI design! Just because evolution takes shortcuts that human engineers wouldn't doesn't mean shortcuts are "wrong" (instead, there are laws governing which kinds of shortcuts work).
+
+Embedded agency means that the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'my own' code." In that light, it makes sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about
+
+somehow accuracy seems more fundamental than power or resources ... could that be formalized?
+]