In particular, in ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) (and its precursor in a [2004 Extropians mailing list post](https://archive.is/En6qW)), Yudkowsky explains that "changing sex" is vastly easier said than done—

[TODO: But that was all about me—I assumed "trans" was a different thing. My first clue that I might not be living in that world came from—Eliezer Yudkowsky, with the "at least 20% of the ones with penises are actually women" thing]

_After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female counterpart" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_.

[So I ended up arguing with people about the two-type taxonomy, and I noticed that those discussions kept getting _derailed_ on some variation of "The word woman doesn't actually mean that". So I took the bait, and started arguing against that, and then Yudkowsky comes back to the subject with his "Hill of Validity in Defense of Meaning"—and I go on a philosophy of language crusade, and Yudkowsky eventually clarifies, and _then_ he comes back _again_ in Feb. 2021 with his "simplest and best protocol"]

At this point, the nature of the game is very clear. Yudkowsky wants to mood-affiliate with being on the right side of history (as ascertained by the current year's progressive _Zeitgeist_), subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.

+On "his turn", he comes up with some pompous proclamation that's very obviously optimized to make the "pro-trans" faction look smart and good and make the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles."
-[If it were _actually true_ that there was no harm from the bad faith because people know they're living in a half-Stalinist environment, then he wouldn't have tried to get away with the "20% of the ones with penises" thing]
+On "my turn", I put in an _absurd_ amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically saying anything definitively "false"](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was _substantively misleading_ as constrated to what any serious person would say if they were actually trying to make sense of the world without worrying what progressive activists would think of them.
-[All this despite the fact that all my heretical opinions are _literally_ just his opinions from the 'aughts. Seriously, you think I'm smart enough to come up with all of this indepedently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts when he still gave a shit about telling the truth in this domain. Does he expect us not to notice? Well, I guess it's been working out for him so far.]
In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you directly prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons.

Suppose you developed an AI to [maximize human happiness subject to the constraint of obeying explicit orders](https://arbital.greaterwrong.com/p/nearest_unblocked#exampleproducinghappiness). It might first try administering heroin to humans. When you order it not to, it might switch to administering cocaine. When you order it to not use any of a whole list of banned happiness-producing drugs, it might switch to researching new drugs, or just _pay_ humans to take heroin, _&c._

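To make the structure of that failure mode concrete, here's a toy sketch (mine, not from Yudkowsky or the Arbital page; the plan names, the scores, and the `best_unblocked_plan` helper are all made up for illustration): a planner that maximizes a score subject only to an explicit blocklist never learns the _reason_ for a veto, so each veto just bumps it to the next-nearest plan that serves the same objective in the same way.

```python
# Toy sketch of the "nearest unblocked strategy" dynamic (illustrative only).
# Hypothetical plans with hypothetical "happiness" scores.
PLANS = {
    "administer heroin": 100,
    "administer cocaine": 99,
    "synthesize a novel euphoriant": 98,
    "pay humans to take heroin": 97,
    "improve humans' lives the slow, honest way": 60,
}

def best_unblocked_plan(blocked: set[str]) -> str:
    """Return the highest-scoring plan that isn't explicitly forbidden."""
    allowed = {plan: score for plan, score in PLANS.items() if plan not in blocked}
    return max(allowed, key=allowed.get)

blocked: set[str] = set()
for _ in range(4):
    plan = best_unblocked_plan(blocked)
    print("Agent proposes:", plan)
    blocked.add(plan)  # Each veto removes one plan; the objective never changes.
```

The blocklist only ever removes individual points from the search space; the thing being optimized for never changes, so the output just marches down a list of almost-identical workarounds.
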
It's the same thing with Yudkowsky's political-risk minimization subject to the constraint of not saying anything he knows to be false. First he tries ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that [that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). When you point out that's not what's going on, he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of _something_—but at this point, I have no reason to care. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a _waste of everyone's time_; trying to inform you just isn't [his bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line).

Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into an acrimonious brawl of accusations and name-calling, which is unpleasant and doesn't make any intellectual progress. I, too, would prefer to have a real object-level discussion under the assumption of good faith. That's why I _tried_ the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be allowed to notice the nearest-unblocked-strategy game. I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level after which there's nothing left for me to do but jump up a meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence for the assumption of good faith to have been _empirically falsified_.

What makes all this especially galling is the fact that _all of my heretical opinions are literally just his opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing technology because the category encompasses so many high-dimensional details? Not original to me! Again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want, because there are mathematical laws governing which category boundaries compress your anticipated experiences? [_We had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) Seriously, you think I'm _smart enough_ to come up with all of this independently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts when he still gave a shit about telling the truth in this domain. Does ... does he expect us not to _notice_?

[If someone wants to accuse me of bad faith, fine!—agreeing with Stalin that 2+2=4 is fine; the problem is a sustained pattern of _selectively_ bringing up pro-Party points while ignoring anti-Party facts that would otherwise be relevant to the topic of interest, including stonewalling commenters who try to point out the relevance. I think I'm doing better: I can point to places where I argue "the other side", because I know that sides are fake]

[I can win concessions, like "On the Argumentative Form", but I don't want concessions; I want to _actually get the goddamned right answer_ (maybe move this earlier and tie it into the centrism pose, which is not the same as the map that reflects the territory?)]

-------
> the challenge is almost entirely about high integrity communication by small groups
https://twitter.com/HiFromMichaelV/status/1486044326618710018

My religion of Yudkowskian rationalism does have a strong tradition of _disagreement_ being socially acceptable (as it is [written of the fifth virtue](https://www.yudkowsky.net/rational/virtues), those who wish to fail must first prevent their friends from helping them), but only "on the object-level", as we say. The act of jumping up a meta level and positing that one's interlocutors are not just wrong, but _dishonest_, tends to be censured as "uncharitable."

* it is merited to touch on the nearest-unblocked strategy history somewhere in this piece, even if I may also need to write a longer "A Hill of Validity"
* also need a short statement of what I'm fighting for (AGPs are factually not women, and a culture that insists that everyone needs to lie to protect our feelings is bad for our own intellectual development; I want the things I said in "Sexual Dimorphism" to be the standard story, rather than my weird heresy)
* need to stress that EY isn't surrendering to progressives in the straightforward way of normal academics; instead, he's trying to put on a Pretending-to-Be-Wise above-it-all independent centrist act (while being careful not to say anything that would make progs _really_ mad)
* my "self-ID is a Schelling Point" and "On the Argumentative Form" show that I'm not a partisan hack (maybe also publish a brief version of )
  * I'm only doing what _he_ taught me to do

4 levels of intellectual conversation https://rationalconspiracy.com/2017/01/03/four-layers-of-intellectual-conversation/
> I find the "(chromosomes?)" here very amusing. I am also a Yudkowskian, Eliezer; "female human" is a cluster in thingspace :)

https://astralcodexten.substack.com/p/bounded-distrust
[If it were _actually true_ that there was no harm from the bad faith because people know they're living in a half-Stalinist environment, then he wouldn't have tried to get away with the "20% of the ones with penises" thing]

If "rationalists" are both a social cluster and seekers of systematically correct reasoning, then people trying to protect the power and prestige of the social cluster face an incentive to invent fake epistemology lessons when the correct answer is unpopular! That's fatal!!

If someone fires back bad-faith allegations at me and is prepared to defend them, that's actually more invigorating and productive than the Berkeley equilibrium.

I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?

[principled trans people should be offended, too!]

[our beliefs about dolphins are downstream of Scott's political incentives]