+helping Norton live in the real world
+
+Scott says, "It seems to me pretty obvious that the mental health benefits to trans people are enough to tip the object-level first-pass utilitarian calculus"; I don't think _anything_ about "mental health benefits to trans people" is obvious
+]
+
+[TODO: connecting with Aurora 8 December, maybe not important]
+
+What do you think submitting to social pressure looks like, if it's not exactly this thing (carefully choosing your public statements to make sure no one confuses you with the Designated Ideological Bad Guy)?!? The credible threat of being labeled an Ideological Bad Guy is _the mechanism_ the "Good" Guys use to retard potentially-ideologically-inconvenient areas of inquiry.
+
+Kerry Vaughan on deferral
+https://twitter.com/KerryLVaughan/status/1552308109535858689
+
+It's not that females and males are exactly the same except males are 10% stronger on average (in which case, you might just shrug and accept unequal outcomes, the way we shrug and accept it that some athletes have better genes). Different traits have different relevance to different sports: women do better in ultraswimming _because_ that competition is sampling a
+
+where body fat is an advantage.
+
+It really is an apples-to-oranges comparison, rather than "two populations of apples with different mean weight".
+
+For example, the _function_ of sex-segregated bathrooms is to _protect females from males_, where "females" and "males" are natural clusters in configuration space that it makes sense to want words to refer to.
+
+all I actually want out of a post-Singularity utopia is the year 2007 except that I personally have shapeshifting powers
+
+The McGonagall-turning-into-a-cat parody may actually be worth fitting in—McGonagall turning into a cat broke Harry's entire worldview. Similarly, the "pretend to turn into a cat, and everyone just buys it" maneuver broke my religion.
+
+ * https://everythingtosaveit.how/case-study-cfar/#attempting-to-erase-the-agency-of-everyone-who-agrees-with-our-position
+
+Michael on EA suppressing credible criticism https://twitter.com/HiFromMichaelV/status/1559534045914177538
+
+"epistemic hero"
+https://twitter.com/ESYudkowsky/status/1096769579362115584
+
+zinger from 93—
+> who present "this empirical claim is inconsistent with the basic tenets of my philosophy" as an argument against the _claim_
+
+reply to my flipping out at Jeff Ladish
+https://twitter.com/ESYudkowsky/status/1356493440041684993
+
+We don't believe in privacy
+> Privacy-related social norms are optimized for obscuring behavior that could be punished if widely known [...] an example of a paradoxical norm that is opposed to enforcement of norms-in-general").
+https://unstableontology.com/2021/04/12/on-commitments-to-anti-normativity/
+
+Sucking up to the Blue Egregore would make sense if you _knew_ that was the critical resource
+https://www.lesswrong.com/posts/mmHctwkKjpvaQdC3c/what-should-you-change-in-response-to-an-emergency-and-ai
+
+I don't think I can use Ben's "Eliza the spambot therapist" analogy because it relies on the "running out the clock" behavior, and I'm Glomarizing
+
+This should be common sense, though
+https://forum.effectivealtruism.org/posts/3szWd8HwWccJb9z5L/the-ea-community-might-be-neglecting-the-value-of
+
+she thought "I'm trans" was an explanation, but then found a better theory that explains the same data—that's what "rationalism" should be—including "That wasn't entirely true!!!!"
+https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole
+
+sorrow at putting on a bad performance with respect to the discourse norms of the people I'm trying to rescue/convert; I think my hostile shorthand (saying that "censorship costs nothing" implies some combination of "speech isn't useful" and "other people aren't real") is pointing at real patterns, but people who aren't already on my side are not going to be sympathetic
+
+https://twitter.com/ESYudkowsky/status/1067300728572600320
+> You could argue that a wise policy is that we should all be called by terms and pronouns we don't like, now and then, and that to do otherwise is coddling. You could argue that Twitter shouldn't try to enforce courtesy. You could accuse, that's not what Twitter is really doing.
+
+https://twitter.com/ESYudkowsky/status/1067302082481274880
+> But Twitter is at least not *ontologically confused* if they say that using preferred pronouns is courtesy, and claim that they're enforcing a courtesy standard. Replying "That's a lie! I will never lie!" is confused. It'd be sad if the #IDW died on that hill of all hills.
+
+> Acts aren't sentences, pronouns aren't lies, bathrooms aren't fundamental physical constants, and if you know what a motte-and-bailey is you're supposed to know that.
+https://twitter.com/ESYudkowsky/status/1067287459589906432
+
+> I don't care whose point it is on this planet, the point I'm making would stand in any galaxy: You are not standing in noble defense of Truth when you ask who gets to use which bathroom. This is true across all possible worlds, including those with no sociologists in them.
+https://twitter.com/ESYudkowsky/status/1067187363544059905
+
+------
+
+https://twitter.com/davidxu90/status/1436007025545125896
+
+> Crux: "If you say that Stalin is a dictator, you'll be shot, therefore Stalin is not a dictator" has the same structure as "If you say that trans women are male, they'll take massive psych damage, therefore trans women are not male"; both arguments should get the same response.
+
+My reply—
+> Thoughts on your proposed cruxes: 1 (harmful inferences) is an unworkable AI design: you need correct beliefs first, in order to correctly tell which beliefs are harmful. 4 (non-public concepts) is unworkable for humans: how do you think about things you're not allowed words for?
+
+
+[SECTION about monasteries (with Ben and Anna in April 2019)
+I complained to Anna: "Getting the right answer in public on topic _X_ would be too expensive, so we won't do it" is _less damaging_ when the set of such _X_es is _small_. It looked to me like we had added a new forbidden topic in the last ten years, without rolling back any of the old ones.
+
+"Reasoning in public is too expensive; reasoning in private is good enough" is _less damaging_ when there's some sort of _recruiting pipeline_ from the public into the monasteries: lure young smart people in with entertaining writing and shiny math, _then_ gradually undo their political brainwashing once they've already joined your cult. (It had [worked on me](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/)!)
+
+I would be sympathetic to "rationalist" leaders like Anna or Yudkowsky playing that strategy if there were some sort of indication that they had _thought_, at all, about the pipeline problem—or even an indication that there _was_ an intact monastery somewhere.
+]
+
+> Admitting something when being pushed a little, but never thinking it spontaneously and hence having those thoughts absent from your own thought processes, remains not sane.
+https://twitter.com/ESYudkowsky/status/1501218503990431745
+
+> a gradual shift away from STEM nerd norms to fandom geek norms [...] the pathological insistence that you're not allowed to notice bad faith
+https://extropian.net/notice/A7rwtky5x3vPAedXZw
+
+https://www.lesswrong.com/posts/4pov2tL6SEC23wrkq/epilogue-atonement-8-8
+"When I have no reason left to do anything, I am someone who tells the truth."
+
+https://glowfic.com/posts/6132?page=83
+> A tradeable medium-sized negative utility is not the same as Her really giving a shit.
+
+further post timeline—
+"Schelling Categories" Aug 2019
+"Maybe Lying Doesn't Exist" Oct 2019
+"Algorithms of Deception!" Oct 2019
+"Firming Up ..." Dec 2019
+
+"Darkest Timeline" June 2020
+"Maybe Lying Can't Exist?!" Aug 2020
+"Unnatural Categories" Jan 2021
+"Differential Signal Costs" Mar 2021
+
+
+"Public Heretic" on "Where to Draw the Boundary?"—
+> But reality, in its full buzzing and blooming confusion, contains an infinite numbers of 'joints' along which it could be carved. It is not at all clear how we could say that focusing one some of those joints is "true" while focusing on other joints is "false," since all such choices are based on similarly arbitrary conventions.
+
+> Now, it is certainly true that certain modes of categorization (i.e. the selection of certain joints) have allowed us to make empirical generalizations that would not otherwise have been possible, whereas other modes of categorization have not yielded any substantial predictive power. But why does that mean that one categorization is "wrong" or "untrue"? Better would seem to be to say that the categorization is "unproductive" in a particular empirical domain.
+
+> Let me make my claim more clear (and thus probably easier to attack): categories do not have truth values. They can be neither true nor false. I would challenge Eliezer to give an example of a categorization which is false in and of itself (rather than simply a categorization which someone then used improperly to make a silly empirical inference).
+
+Yudkowsky's reply—
+> PH, my reply is contained in Mutual Information, and Density in Thingspace.
+
+
+https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/K8YXbJEhyDwSusoY2
+> I would have been surprised if she was. Joscelin Verreuil also strikes me as being a projection of some facets of a man that a woman most notices, and not a man as we exist from the inside.
+>
+> I have never known a man with a true female side, and I have never known a woman with a true male side, either as authors or in real life.
+
+https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/AEZaakdcqySmKMJYj
+> Could you please taboo these?
+
+https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev6QWSF
+> Okay. I’ve never seen a male author write a female character with the same depth as Phedre no Delaunay, nor have I seen any male person display a feminine personality with the same sort of depth and internal integrity, nor have I seen any male person convincingly give the appearance of having thought out the nature of feminity to that depth. Likewise and in a mirror for women and men. I sometimes wish that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime’s work to integrate, as the corresponding fact of feminity. I am skeptical that either sex can ever really model and predict the other’s deep internal life, short of computer-assisted telepathy. These are different brain designs we’re talking about here.
+
+https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/7ZwECTPFTLBpytj7b
+> I sometimes wish that certain men would appreciate that not all men are like them—or at least, that not all men _want_ to be like them—that the fact of masculinity is not necessarily something to integrate.
+
+> Duly appreciated.
+
+https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY#comments 110 karma
+support from Oli—
+> I think there is a question of whether current LessWrong is the right place for this discussion (there are topics that will attract unwanted attention, and when faced with substantial adversarial forces, I think it is OK for LessWrong to decide to avoid those topics as long as they don't seem of crucial importance for the future of humanity, or have those discussions in more obscure ways, or to limit visibility to just some subset of logged-in users, etc). But leaving that discussion aside, basically everything in this post strikes me as "obviously true" and I had a very similar reaction to what the OP says now, when I first encountered the Eliezer Facebook post that this post is responding to.
+>
+> And I do think that response mattered for my relationship to the rationality community. I did really feel like at the time that Eliezer was trying to make my map of the world worse, and it shifted my epistemic risk assessment of being part of the community from "I feel pretty confident in trusting my community leadership to maintain epistemic coherence in the presence of adversarial epistemic forces" to "well, I sure have to at least do a lot of straussian reading if I want to understand what people actually believe, and should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false in order to not have to deal with complicated political issues".
+>
+> I do think that was the right update to make, and was overdetermined for many different reasons, though it still deeply saddens me.
+
+https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=cPunK8nFKuQRorcNG#comments
+iceman—
+> I kinda disagree that this is a mere issue of Straussian reading: I suspect that in this (and other cases), you are seeing the raw output of Elizer's rationalizations and not some sort of instrumental coalition politics dark arts. If I was going for some sort of Straussian play, I wouldn't bring it up unprompted or make long public declarations like this.
+>
+> Zack is hypersensitive to this one issue because it interacts with his Something to Protect. But what I wonder about is where else Eliezer is trying to get away with things like this.
+
+
+https://www.glowfic.com/replies/1853001#reply-1853001
+> Another reason people go to Hell? Malediction! An Asmodean priest was using that spell on children too! Pharasma apparently doesn't give a shit! At best, it might be a negative weight in Her utility function that She traded to the ancient gods of Evil for something else that She wanted. A tradeable medium-sized negative utility is not the same as Her _really giving a shit_.
+
+
+I furthermore claim that the following disjunction is true:
+
+> Either the quoted excerpt is a blatant lie on Scott's part because there are rules of rationality governing conceptual boundaries and Scott absolutely knows it, or
+> You have no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way.
+
+there needs to be _some_ way for _someone_ to invest a _finite_ amount of effort to _correct the mistake_
+
+https://twitter.com/ESYudkowsky/status/1404698587175350275
+> That Zack now imagines this to be a great trend [...] does seem like an avoidable error and a failure to take perspective on how much most people's lives are not about ourselves