+my moral compass puts me to the right of the politically vocal tech nerds, especially the more androgynous type of tech nerd
+
+(There had been a moment during my psych imprisonment the other month, when I had noticeable difficulty dialing a phone. I was still a _person_, even when not all of my usual cognitive abilities were online.)
+
+The fact that I ghosted on music lessons from "Tricky" for being nonbinary is an example of phenotypic capture ruining everything.
+
+
+https://twitter.com/ESYudkowsky/status/1668419201101615105
+
+> As usual, I got there first and solved the relatively easy philosophy problems, so the sensible people have nothing to talk about, and the unsensible ones can't just use my answer sheet.
+
+(I thought about apologizing if some of the content was "weird" or offensive, but I figured if you've been a professional editor for 15 years and list memoirs as a specialty, you've probably seen everything.)
+
+After this, the AI situation is looking worrying enough that I'm thinking I should try to do some more direct xrisk-reduction work, although I haven't definitely selected any particular job or project. (It probably won't matter, but it will be dignified.) Now that the shape of the threat is on the horizon, I think I'm less afraid of being directly involved. Something about having large language models to study in the 'twenties is—grounding, compared to the superstitious fears of the paperclip boogeyman of my nightmares in the 'teens.
+
+Like all intellectuals, as a teenager I imagined that I would write a book. It was always going to be about gender, but I was vaguely imagining a novel, which never got beyond vague imaginings. That was before the Sequences. I'm 35 years old now. I think my intellectual life has succeeded in ways I didn't know how to imagine, before. I think my past self would be proud of this blog—140,000 words of blog posts stapled together is _morally_ a book—once he got over the shock of heresy.
+
+[TODO conclusion, cont'd—
+ * Do I have regrets about this Whole Dumb Story? A lot, surely—it's been a lot of wasted time. But it's also hard to say what I should have done differently; I could have listened to Ben more and lost faith in Yudkowsky earlier, but he had earned a lot of benefit of the doubt?
+ * even young smart AGPs who can appreciate my work have still gotten pinkpilled
+ * Jonah had told me that my planning horizon was too short—like the future past a year wasn't real to me. (This plausibly also explains my impatience with college.) My horizon is starting to broaden as AI timelines shorten
+ * less drama (in my youth, I would have been proud that at least this vice was a feminine trait; now, I prefer to be good even if that means being a good man)
+]
+
+> Would you smile to see him dead? Would you say, "We are rid of this obscenist"? Fools! The corpse would laugh at you from its cold eyelids! The motionless lips would mock, and the solemn hands, the pulseless, folded hands, in their quietness would write the last indictment, which neither Time nor you can efface. Kill him! And you write his glory and your shame! Said Achmiz in his felon stripes stands far above you now, and Said Achmiz _dead_ will live on, immortal in the race he died to free! Kill him!
+>
+> —[Voltairine de Cleyre](https://praxeology.net/VC-SS.htm) (paraphrased)
+
+[TODO—early 2023 moderation drama
+ * In early 2023, I was trying to finish up this memoir, but while procrastinating on that, I ended up writing a few other posts for _Less Wrong_; I thought the story of my war with the "rationalists" was relevantly "over"; I didn't anticipate things getting any "worse"
+ * I happened to see that Duncan Sabien's "Basics of Rationalists Discourse" was published
+ * Backstory: Sabien is a former CfAR employee whose Facebook posts I used to comment on. He had a history of getting offended over things that I didn't think were important—all the way back to our very first interaction in 2017 (I remember being in Portland using Facebook/Messenger on my phone)
+
+ ...
+
+ * I was reluctant to ping Oli (the way I pinged Babcock and Pace) because I still "owed" him for the comment on "Challenges", but ultimately ended up sending a Twitter DM just after the verdict (when I saw that he had very-recent reply Tweets and was thus online); I felt a little bit worse about that one (the "FYI I'm at war"), but I think I de-escalated OK and he didn't seem to take it personally
+
+ ...
+
+ * Said is braver than me along some dimensions; the reason he's in trouble and I'm not, even though we were both fighting with Duncan, is that I was more "dovish"—when Duncan attacked, I focused on defense and withheld my "offensive" thoughts; Said's points about Duncan's blocking psychology were "offensive"
+
+ ...
+
+ * I'm proud of the keeping-my-cool performance when Duncan was mad at me, less proud of my performance fighting for Said so far
+
+ ...
+
+ * In the Ruby slapfight, I was explicit about "You shouldn't be making moderation decisions based on seniority"—this time, I've moved on to just making decisions based on seniority; if we're doing consequentialism based on how to attract people to the website, it's clear that there are no purer standards left to appeal to
+]
+
+Insane religious fanatics who "merely" want heretics to know their place (as opposed to wanting to hurt or exile them) are still insane religious fanatics.
+
+A hill of validity in defense of meaning.
+
+playing a Dagny Taggart strategy: https://twitter.com/zackmdavis/status/1606718513267486721
+
+-------
+
+I actually thought, "It's so weird to have the psychological upper hand over Vassar" ... I can see a possible story where he was unnerved that I was holding my own in the email argument, so he switched venues to in-person.
+
+I _do_ think about psych warfare sometimes (like with Ray)
+
+A possible counterargument to that would be that (at a minimum, for many people in many contexts) emotion can't be effectively faked.
+
+I get better criticism from 93, in that he tells me that my ideas are full of shit without making it personal
+
+> No I think there's a case for this approach. Like you see the argument for why you might want to be laboriously fair to Yud because it's important that no one dismiss your complaints on the grounds of "I dunno, doesn't seem fair enough for me".
+> Whereas doing that for every random person you mention would be a lot of work.
+
+I never got an answer to why it was wrong for me to talk to Scott!! And the contradiction between that, and Ben's emphasis on privacy being unjust!
+
+It would have been a lot simpler if you could _just_ make object-level criticisms: "'yelling' is a tendentious description, from our perspective we were arguing passionately"—rather than first
+
+If someone ran over a pedestrian in their car, at the trial you would actually argue about how culpable they were (if they were drunk, it would be worse than if it could be proven to be a freak accident), and "The victim is so much worse off than you!!" isn't actually an argument relevant to the determination of culpability.
+
+
+When I mentioned re-reading Moldbug on "ignoble privilege", "Thomas" said that it was a reason not to feel the need to seek the approval of women, who had not been ennobled by living in an astroturfed world where the traditional (_i.e._, evolutionarily stable) strategies of relating had been relabeled as oppression. The chip-on-her-shoulder effect was amplified in androgynous women. (Unfortunately, the sort of women I particularly liked.)
+
+He advised me that if I did find an androgynous woman I was into, I shouldn't treat her as a moral authority. Doing what most sensitive men thought of as equality degenerated into female moral superiority, which wrecked the relationship in a feedback loop of testing and resentment. (Women want to win arguments in the moment, but don't actually want to lead the relationship.) Thus, a strange conclusion: to have an egalitarian heterosexual relationship, the man needs to lead the relationship _into_ equality; a dab of patriarchy works better than none.
+
+https://www.spectator.co.uk/article/should-we-fear-ai-james-w-phillips-and-eliezer-yudkowsky-in-conversation/
+
+https://twitter.com/ESYudkowsky/status/1680166329209466881
+> I am increasingly worried about what people start to believe in after they stop believing in Me
+
+https://twitter.com/ESYudkowsky/status/1683659276471115776
+> from a contradiction one may derive anything, and this is especially true of contradicting Eliezer Yudkowsky
+
+https://twitter.com/ESYudkowsky/status/1683694475644923904
+> I don't do that sort of ridiculous drama for the same reason that Truman didn't like it. I don't have that kind of need to be at the center of the story, that I'd try to make the disaster be about myself.
+
+
+
+------
+
+I'm thinking that for controversial writing, it's not enough to get your friends to pre-read, and it's not enough to hire a pro editor; you probably also need to hire a designated "hostile prereader".
+
+
+
+--------
+
+[revision comment]
+
+The fifth- through second-to-last paragraphs of the originally published version of this post were bad writing on my part.
+
+I was summarizing things Ben said at the time that felt like an important part of the story, without adequately
+
+I've rewritten that passage. Hopefully this version is clearer.
+
+---------
+
+[reply to Wei]
+
+
+----------
+
+Ben explained: Yudkowsky had set in motion a marketing machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on the claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he, almost uniquely, was not. "Work for me or the world ends badly," basically. If the claim was true, it was important to make, and to actually extract that labor.
+
+But we had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was an extremely high standard, and not a special flaw of Yudkowsky in the current environment). If, after we had _tried_ to talk to him privately, Yudkowsky couldn't be bothered to either live up to his own stated standards or withdraw his validation from the machine he built, then we had a right to talk about what we thought was going on.
+
+This wasn't about direct benefit _vs._ harm. This was about what, substantively, the machine and its operators were doing. They claimed to be cultivating an epistemically rational community, while in fact building an army of loyalists.
+
+Ben compared the whole set-up to that of Eliza the spambot therapist in my short story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/): regardless of the initial intent, scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock, maintaining a facsimile of dialogue in ways well-calibrated to continue to generate revenue. Minds like mine wouldn't survive long-term in this ecosystem. If we wanted minds that do "naïve" inquiry (instead of playing savvy power games) to live, we needed an interior that justified that level of trust.
+
+-----
+
+I mostly kept him blocked on Twitter (except when doing research for this document) to curb the temptation to pick fights, but I unblocked him in July 2023 because it was only fair to let him namesearch my promotional Tweet of pt. 2, which named him. I then ended up replying to a thread with him and Perry Metzger, but only because I was providing relevant information, similar to how I had left a few "Less Wrong reference desk"-style messages in Eliezerfic in 2023.
+
+it got 16 Likes
+https://twitter.com/zackmdavis/status/1682100362357121025
+
+I miss this Yudkowsky—
+
+
+----
+
+I thought I should have skipped the 2022 Valinor party to avoid running into him, but I did end up treating him in a personality-cultish way when I was actually there.
+
+"Ideology is not the movement" had specifically listed trans as a shibboleth
+
+https://twitter.com/RichardDawkins/status/1684947017502433281
+> Keir Starmer agrees that a woman is an adult human female. Will Ed Davey also rejoin the real world, science & the English language by reversing his view that a woman can "quite clearly" have a penis? Inability to face reality in small things bodes ill for more serious matters.
+
+Analysis of my writing mistake
+https://twitter.com/shroomwaview/status/1681742799052341249
+
+------
+
+I got my COVID-19 vaccine (the one-shot Johnson & Johnson) on 3 April 2021, so I was able to visit "Arcadia" again on 17 April, for the first time in fourteen months.
+
+I had previously dropped by in January to deliver two new board books I had made, _Koios Blume Is Preternaturally Photogenic_ and _Amelia Davis Ford and the Great Plague_, but that had been a socially-distanced book delivery, not a "visit".
+
+The copy of _Amelia Davis Ford and the Great Plague_ that I sent to my sister in Cambridge differed slightly from the one I brought to "Arcadia". There was an "Other books by the author" list on the back cover with the titles of my earlier board books. In the Cambridge edition of _Great Plague_, the previous titles were printed in full: _Merlin Blume and the Methods of Pre-Rationality_, _Merlin Blume and the Steerswoman's Oath_, _Merlin Blume and the Sibling Rivalry_. Whereas in _Preternaturally Photogenic_ and the "Arcadia" edition of _Great Plague_, the previous titles were abbreviated: _The Methods of Pre-Rationality_, _The Steerswoman's Oath_, _The Sibling Rivalry_.
+
+The visit on the seventeenth went fine. I hung out, talked, played with the kids. I had made a double-dog promise to be on my best no-politics-and-religion-at-the-dinner-table behavior.
+
+At dinner, there was a moment when Koios bit into a lemon and made a funny face, to which a bunch of the grown-ups said "Awww!" A few moments later, he went for the lemon again. Alicorn speculated that Koios had noticed that the grown-ups found it cute the first time, and the grown-ups were chastened. "Aww, baby, we love you even if you don't bite the lemon."
+
+It was very striking to me how, in the case of the baby biting a lemon, Alicorn _immediately_ formulated the hypothesis that what-the-grownups-thought-was-cute was affecting the baby's behavior, and everyone _immediately just got it_. I was tempted to say something caustic about how no one seemed to think a similar mechanism could have accounted for some of the older child's verbal behavior the previous year, but I kept silent; that was clearly outside the purview of my double-dog promise.
+
+There was another moment when Mike made a remark about how weekends are socially constructed. I had a lot of genuinely on-topic cached witty philosophy banter about [how the social construction of concepts works](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), that would have been completely innocuous if anyone _else_ had said it, but I kept silent because I wasn't sure if it was within my double-dog margin of error if _I_ said it.
+
+> even making a baby ML dude who's about to write a terrible paper hesitate for 10 seconds and _think of the reader's reaction_ seems like a disimprovement over status quo ante.
+https://discord.com/channels/401181628015050773/458329253595840522/1006685798227267736
+
+Also, the part where I said it amounted to giving up on intellectual honesty, and he put a check mark on it
+
+The third LW bookset is called "The Carving of Reality"? Did I have counterfactual influence on that (by making that part of the sequences more memetically salient, as opposed to the "categories are made for man" strain)?
+
+Yudkowsky on EA criticism contest
+https://forum.effectivealtruism.org/posts/HyHCkK3aDsfY95MoD/cea-ev-op-rp-should-engage-an-independent-investigator-to?commentId=kgHyydoX5jT5zKqqa
+
+Yudkowsky says "we" are not to blame for FTX, but wasn't early Alameda (the Japan bitcoin arbitrage) founded as an earn-to-give scheme, and didn't it recruit from EA?
+
+https://twitter.com/aditya_baradwaj/status/1694355639903080691
+> [SBF] wanted to build a machine—a growing sphere of influence that could break past the walls of that little office in Berkeley and wash over the world as a force for good. Not just a company, but a monument to effective altruism.
+
+Scott November 2020: "I think we eventually ended up on the same page"
+https://www.datasecretslox.com/index.php/topic,1553.msg38799.html#msg38799
+
+SK on never making a perfectly correct point
+https://www.lesswrong.com/posts/P3FQNvnW8Cz42QBuA/dialogue-on-appeals-to-consequences#Z8haBdrGiRQcGSXye
+
+Scott on puberty blockers, dreadful: https://astralcodexten.substack.com/p/highlights-from-the-comments-on-fetishes
+
+https://jdpressman.com/2023/08/28/agi-ruin-and-the-road-to-iconoclasm.html
+
+https://www.lesswrong.com/posts/BahoNzY2pzSeM2Dtk/beware-of-stephen-j-gould
+> there comes a point in self-deception where it becomes morally indistinguishable from lying. Consistently self-serving scientific "error", in the face of repeated correction and without informing others of the criticism, blends over into scientific fraud.
+
+https://time.com/collection/time100-ai/6309037/eliezer-yudkowsky/
+> "I expected to be a tiny voice shouting into the void, and people listened instead. So I doubled down on that."
+
+-----
+
+bullet notes for Tail analogy—
+ * My friend Tailcalled is better at science than me; in the hours that I've wasted with personal, political, and philosophical writing, he's actually been running surveys and digging into statistical methodology.
+ * As a result of his surveys, Tail was convinced of the two-type taxonomy, started /r/Blanchardianism, &c.
+ * Arguing with him resulted in my backing away from pure BBL ("Useful Approximation")
+ * Later, he became disillusioned with "Blanchardians" and went to war against them. I kept telling him he _is_ a "Blanchardian", insofar as he largely agrees with the main findings (about AGP as a major cause). He corresponded with Bailey and became frustrated with Bailey's rigidity. Blanchardians market themselves as disinterested truthseekers, but a lot of what they're actually doing is providing a counternarrative to social justice.
+ * There's an analogy between Tail's antipathy for Bailey and my antipathy for Yudkowsky: I still largely agree with "the rationalists", but the way Yudkowsky in particular markets himself as a uniquely sane thinker is the source of my antipathy.
+
+Something he said made me feel spooked that he knew something about risks of future suffering that he wouldn't talk about, but in retrospect, I don't think that's what he meant.
+
+https://twitter.com/zackmdavis/status/1435856644076830721
+> The error in "Not Man for the Categories" is not subtle! After the issue had been brought to your attention, I think you should have been able to condemn it: "Scott's wrong; you can't redefine concepts in order to make people happy; that's retarded." It really is that simple! 4/6
+
+> It can also be naive to assume that all the damage that people consistently do is unintentional. For that matter, Sam by being "lol you mad" rather than "sorry" is continuing to do that damage. I'd have bought "sorry" rather a lot better, in terms of no ulterior motives.
+https://twitter.com/ESYudkowsky/status/1706861603029909508
+
+-------
+
+On 27 September 2023, Yudkowsky told Quentin Pope, "If I was given to your sort of attackiness, I'd now compose a giant LW post about how this blatant error demonstrates that nobody should trust you about anything else either." (https://twitter.com/ESYudkowsky/status/1707142828995031415) I felt like it was an OK use of bandwidth to point out that tracking reputations is sometimes useful (https://twitter.com/zackmdavis/status/1707183146335367243). My agenda here is the same as when I wrote "... on Epistemic Conduct for Author Criticism": I don't want Big Yud using his social power to delegitimize "attacks" in general, because I have an interest in attacking him. Later, he quote-Tweeted something and said,
+
+> People need to grow up reading a lot of case studies like this in order to pick of a well-calibrated instinctive sense of what ignorant criticism typically sounds like. A derisory tone is a very strong base cue, though not an invincible one.
+
+Was he subtweeting me?? (Because I was defending criticism against tone policing, and this is saying tone is a valid cue.) If it was a subtweet, I take that as vindication that my reply was a good use of bandwidth.
+
+-----
+
+In particular, I think the conspiracy theory "Yudkowsky sometimes avoids nuanced arguments that he doesn't trust people to understand" is true, because ... you've said so (e.g., "without getting into any weirdness that I don't expect Earthlings to think about validly"). https://www.greaterwrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free/comment/dMHdWcxgSpcdyG4hb
+
+----
+
+(He responded to me in this interaction, which is interesting.)
+
+https://twitter.com/ESYudkowsky/status/1708587781424046242
+> Zack, you missed this point presumably because you're losing your grasp of basic theory in favor of conspiracy theory.
+
+https://www.lesswrong.com/posts/qbcuk8WwFnTZcXTd6/thomas-kwa-s-miri-research-experience
+> The model was something like: Nate and Eliezer have a mindset that's good for both capabilities and alignment, and so if we talk to other alignment researchers about our work, the mindset will diffuse into the alignment community, and thence to OpenAI, where it would speed up capabilities.
+
+27 January 2020—
+> I'm also afraid of the failure mode where I get frame-controlled by the Michael/Ben/Jessica mini-egregore (while we tell ourselves a story that we're the real rationalist coordination group and not an egregore at all). Michael says that the worldview he's articulating would be the one that would be obvious to me if I felt that I was in danger. Insofar as I trust that my friends' mini-egregore is seeing something but I don't trust the details, the obvious path forward is to try to do original seeing while leaning into fear—trusting Michael's meta level advice, but not his detailed story.
+
+Weird tribalist praise for Scott: https://www.greaterwrong.com/posts/GMCs73dCPTL8dWYGq/use-normal-predictions/comment/ez8xrquaXmmvbsYPi
+
+-------
+
+I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_.
+
+I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)).
+
+I could even forgive him for subsequently taking a shit on e4 of my chessboard (["you're not standing in defense of truth if you insist on a word [...]"](https://twitter.com/ESYudkowsky/status/1067198993485058048)) as long as he wiped most of the shit off afterwards (["you are being the bad guy if you try to shut down that conversation by saying that 'I can define the word "woman" any way I want'"](https://www.facebook.com/yudkowsky/posts/10158853851009228)), even though, really, I would have expected someone so smart to take a hint after the incident on d4.
+
+But if he's _then_ going to take a shit on c3 of my chessboard (["important things [...] would be all the things I've read [...] from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate", "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'"](https://www.facebook.com/yudkowsky/posts/10159421750419228)), the "playing on a different chessboard, no harm intended" excuse loses its credibility. The turd on c3 is a pretty big likelihood ratio! (That is, I'm more likely to observe a turd on c3 in worlds where Yudkowsky _is_ playing my chessboard and wants me to lose, than in worlds where he's playing on a different chessboard and just _happened_ to take a shit there, by coincidence.)
+
+
+
+------
+
+At "Arcadia"'s 2022 [Smallpox Eradication Day](https://twitter.com/KelseyTuoc/status/1391248651167494146) party, I remember overhearing[^overhearing] Yudkowsky saying that OpenAI should have used GPT-3 to mass-promote the Moderna COVID-19 vaccine to Republicans and the Pfizer vaccine to Democrats (or vice versa), thereby harnessing the forces of tribalism in the service of public health.
+
+[^overhearing]: I claim that conversations at a party with lots of people are not protected by privacy norms; if I heard it, several other people heard it; no one had a reasonable expectation that I shouldn't blog about it.
+
+I assume this was not a serious proposal. Knowing it was a joke partially mollifies what offense I would have taken if I thought he might have been serious. But I don't think I should be completely mollified, because I think the joke (while a joke) reflects something about Yudkowsky's thinking when he's being serious: that he apparently doesn't think corrupting Society's shared maps for utilitarian ends is inherently a suspect idea; he doesn't think truthseeking public discourse is a thing in our world, and the joke reflects the conceptual link between the idea that public discourse isn't a thing, and the idea that a public that can't reason needs to be manipulated by elites into doing good things rather than bad things.
+
+My favorite Ben Hoffman post is ["The Humility Argument for Honesty"](http://benjaminrosshoffman.com/humility-argument-honesty/). It's sometimes argued the main reason to be honest is in order to be trusted by others. (As it is written, ["[o]nce someone is known to be a liar, you might as well listen to the whistling of the wind."](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings).) Hoffman points out another reason: we should be honest because others will make better decisions if we give them the best information available, rather than worse information that we chose to present in order to manipulate their behavior. If you want your doctor to prescribe you a particular medication, you might be able to arrange that by looking up the symptoms of an appropriate ailment on WebMD, and reporting those to the doctor. But if you report your _actual_ symptoms, the doctor can combine that information with their own expertise to recommend a better treatment.
+
+If you _just_ want the public to get vaccinated, I can believe that the Pfizer/Democrats _vs._ Moderna/Republicans propaganda gambit would work. You could even do it without telling any explicit lies, by selectively citing either the protection or side-effect statistics for each vaccine depending on whom you were talking to. One might ask: if you're not _lying_, what's the problem?
+
+The _problem_ is that manipulating people into doing what you want subject to the genre constraint of not telling any explicit lies, isn't the same thing as informing people so that they can make sensible decisions. In reality, both mRNA vaccines are very similar! It would be surprising if the one associated with my political faction happened to be good, whereas the one associated with the other faction happened to be bad. Someone who tried to convince me that Pfizer was good and Moderna was bad would be misinforming me—trying to trap me in a false reality, a world that doesn't quite make sense—with [unforeseeable consequences](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) for the rest of my decisionmaking. As someone with an interest in living in a world that makes sense, I have reason to regard this as _hostile action_, even if the false reality and the true reality both recommend the isolated point decision of getting vaccinated.
+
+I'm not, overall, satisfied with the political impact of my writing on this blog. One could imagine someone who shared Yudkowsky's apparent disbelief in public reason advising me that my practice of carefully explaining at length what I believe and why has been an ineffective strategy—that I should instead clarify to myself what policy goal I'm trying to achieve, and try to figure out some clever gambit to play trans activists and gender-critical feminists against each other in a way that advances my agenda.
+
+From my perspective, such advice would be missing the point. [I'm not trying to force through some particular policy.](/2021/Sep/i-dont-do-policy/) Rather, I think I know some things about the world, things I wish someone had told me earlier. So I'm trying to tell others, to help them live in a world that makes sense.
+
+-------
+
+I don't, actually, expect people to spontaneously blurt out everything they believe to be true, that Stalin would find offensive. "No comment" would be fine. Even selective argumentation that's clearly labeled as such would be fine.
+
+-----
+
+Michael said that we didn't want to police Eliezer's behavior, but just note that something had seemingly changed and move on. "There are a lot of people who can be usefully informed about the change," Michael said. "Not him though."
+
+That was the part I couldn't understand, the part I couldn't accept.
+
+The man had rewritten my personality over the internet. Everything I do, I learned from him. He couldn't be so dense as to not even see the thing we'd been trying to point at. Like, even if he were ultimately to endorse his current strategy, he should do it on purpose rather than on accident!
+
+(Scott mostly saw it, and had [filed his honorable-discharge paperwork](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/). Anna definitely saw it, and she was doing it on purpose.)
+
+-----
+
+https://forum.effectivealtruism.org/posts/kuqgJDPF6nfscSZsZ/thread-for-discussing-bostrom-s-email-and-apology?commentId=Z7kyiPAfmtztueFFJ