
> even making a baby ML dude who's about to write a terrible paper hesitate for 10 seconds and _think of the reader's reaction_ seems like a disimprovement over status quo ante.
https://discord.com/channels/401181628015050773/458329253595840522/1006685798227267736

Also, the part where I said it amounted to giving up on intellectual honesty, and he put a check mark on it

The third LW bookset is called "The Carving of Reality"? Did I have counterfactual influence on that (by making that part of the Sequences more memetically salient, as opposed to the "categories are made for man" strain)?

Yudkowsky on EA criticism contest
https://forum.effectivealtruism.org/posts/HyHCkK3aDsfY95MoD/cea-ev-op-rp-should-engage-an-independent-investigator-to?commentId=kgHyydoX5jT5zKqqa

Yudkowsky says "we" are not to blame for FTX, but wasn't early Alameda (the Japan bitcoin arbitrage) founded as an earn-to-give scheme, and recruited from EA?

https://twitter.com/aditya_baradwaj/status/1694355639903080691
> [SBF] wanted to build a machine—a growing sphere of influence that could break past the walls of that little office in Berkeley and wash over the world as a force for good. Not just a company, but a monument to effective altruism.

Scott, November 2020: "I think we eventually ended up on the same page"
https://www.datasecretslox.com/index.php/topic,1553.msg38799.html#msg38799

SK on never making a perfectly correct point
https://www.lesswrong.com/posts/P3FQNvnW8Cz42QBuA/dialogue-on-appeals-to-consequences#Z8haBdrGiRQcGSXye

Scott on puberty blockers, dreadful: https://astralcodexten.substack.com/p/highlights-from-the-comments-on-fetishes

https://jdpressman.com/2023/08/28/agi-ruin-and-the-road-to-iconoclasm.html

https://www.lesswrong.com/posts/BahoNzY2pzSeM2Dtk/beware-of-stephen-j-gould
> there comes a point in self-deception where it becomes morally indistinguishable from lying. Consistently self-serving scientific "error", in the face of repeated correction and without informing others of the criticism, blends over into scientific fraud.

https://time.com/collection/time100-ai/6309037/eliezer-yudkowsky/
> "I expected to be a tiny voice shouting into the void, and people listened instead. So I doubled down on that."
+-----
+
+bullet notes for Tail analogy—
+ * My friend Tailcalled is better at science than me; in the hours that I've wasted with personal, political, and philosophical writing, he's actually been running surveys and digging into statistical methodology.
+ * As a result of his surveys, Tail was convinced of the two-type taxonomy, started /r/Blanchardianism, &c.
+ * Arguing with him resulted in my backing away from pure BBL ("Useful Approximation")
 * Later, he became disillusioned with "Blanchardians" and went to war against them. I kept telling him he _is_ a "Blanchardian", insofar as he largely agrees with the main findings (about AGP as a major cause). He corresponded with Bailey and became frustrated with Bailey's rigidity. Blanchardians market themselves as disinterested truthseekers, but a lot of what they're actually doing is providing a counternarrative to social justice.
 * There's an analogy between Tail's antipathy for Bailey and my antipathy for Yudkowsky: I still largely agree with "the rationalists", but the way Yudkowsky in particular markets himself as a uniquely sane thinker is what I can't stand.
+
+Something he said made me feel spooked that he knew something about risks of future suffering that he wouldn't talk about, but in retrospect, I don't think that's what he meant.
+
+https://twitter.com/zackmdavis/status/1435856644076830721
+> The error in "Not Man for the Categories" is not subtle! After the issue had been brought to your attention, I think you should have been able to condemn it: "Scott's wrong; you can't redefine concepts in order to make people happy; that's retarded." It really is that simple! 4/6
+
+> It can also be naive to assume that all the damage that people consistently do is unintentional. For that matter, Sam by being "lol you mad" rather than "sorry" is continuing to do that damage. I'd have bought "sorry" rather a lot better, in terms of no ulterior motives.
+https://twitter.com/ESYudkowsky/status/1706861603029909508
+
+-------
+
+On 27 September 2023, Yudkowsky told Quentin Pope, "If I was given to your sort of attackiness, I'd now compose a giant LW post about how this blatant error demonstrates that nobody should trust you about anything else either." (https://twitter.com/ESYudkowsky/status/1707142828995031415) I felt like it was an OK use of bandwidth to point out that tracking reputations is sometimes useful (https://twitter.com/zackmdavis/status/1707183146335367243). My agenda here is the same as when I wrote "... on Epistemic Conduct for Author Criticism": I don't want Big Yud using his social power to delegitimize "attacks" in general, because I have an interest in attacking him. Later, he quote-Tweeted something and said,
+
+> People need to grow up reading a lot of case studies like this in order to pick of a well-calibrated instinctive sense of what ignorant criticism typically sounds like. A derisory tone is a very strong base cue, though not an invincible one.
+
+Was he subtweeting me?? (Because I was defending criticism against tone policing, and this is saying tone is a valid cue.) If it was a subtweet, I take that as vindication that my reply was a good use of bandwidth.
+
+-----
+
+In particular, I think the conspiracy theory "Yudkowsky sometimes avoids nuanced arguments that he doesn't trust people to understand" is true, because ... you've said so (e.g., "without getting into any weirdness that I don't expect Earthlings to think about validly"). https://www.greaterwrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free/comment/dMHdWcxgSpcdyG4hb
+
+----
+
+(He responded to me in this interaction, which is interesting.)
+
+https://twitter.com/ESYudkowsky/status/1708587781424046242
+> Zack, you missed this point presumably because you're losing your grasp of basic theory in favor of conspiracy theory.
+
+https://www.lesswrong.com/posts/qbcuk8WwFnTZcXTd6/thomas-kwa-s-miri-research-experience
+> The model was something like: Nate and Eliezer have a mindset that's good for both capabilities and alignment, and so if we talk to other alignment researchers about our work, the mindset will diffuse into the alignment community, and thence to OpenAI, where it would speed up capabilities.
+
+27 January 2020—
+> I'm also afraid of the failure mode where I get frame-controlled by the Michael/Ben/Jessica mini-egregore (while we tell ourselves a story that we're the real rationalist coordination group and not an egregore at all). Michael says that the worldview he's articulating would be the one that would be obvious to me if I felt that I was in danger. Insofar as I trust that my friends' mini-egregore is seeing something but I don't trust the details, the obvious path forward is to try to do original seeing while leaning into fear—trusting Michael's meta level advice, but not his detailed story.
+
+Weird tribalist praise for Scott: https://www.greaterwrong.com/posts/GMCs73dCPTL8dWYGq/use-normal-predictions/comment/ez8xrquaXmmvbsYPi
+
+-------
+
+I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_.
+
+I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)).
+
+I could even forgive him for subsequently taking a shit on e4 of my chessboard (["you're not standing in defense of truth if you insist on a word [...]"](https://twitter.com/ESYudkowsky/status/1067198993485058048)) as long as he wiped most of the shit off afterwards (["you are being the bad guy if you try to shut down that conversation by saying that 'I can define the word "woman" any way I want'"](https://www.facebook.com/yudkowsky/posts/10158853851009228)), even though, really, I would have expected someone so smart to take a hint after the incident on d4.
+
But if he's _then_ going to take a shit on c3 of my chessboard (["important things [...] would be all the things I've read [...] from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate", "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'"](https://www.facebook.com/yudkowsky/posts/10159421750419228)), the "playing on a different chessboard, no harm intended" excuse loses its credibility. The turd on c3 is a pretty big likelihood ratio! (That is, I'm more likely to observe a turd on c3 in worlds where Yudkowsky _is_ playing my chessboard and wants me to lose, than in worlds where he's playing on a different chessboard and just _happened_ to take a shit there, by coincidence.)
+
+
+
+------
+
+At "Arcadia"'s 2022 [Smallpox Eradication Day](https://twitter.com/KelseyTuoc/status/1391248651167494146) party, I remember overhearing[^overhearing] Yudkowsky saying that OpenAI should have used GPT-3 to mass-promote the Moderna COVID-19 vaccine to Republicans and the Pfizer vaccine to Democrats (or vice versa), thereby harnessing the forces of tribalism in the service of public health.
+
+[^overhearing]: I claim that conversations at a party with lots of people are not protected by privacy norms; if I heard it, several other people heard it; no one had a reasonable expectation that I shouldn't blog about it.
+
I assume this was not a serious proposal. Knowing it was a joke partially mollifies what offense I would have taken if I thought he might have been serious. But I don't think I should be completely mollified, because the joke (while a joke) reflects something about Yudkowsky's thinking when he's being serious: he apparently doesn't regard corrupting Society's shared maps for utilitarian ends as an inherently suspect idea. He doesn't think truthseeking public discourse is a thing in our world, and the joke reflects the conceptual link between the idea that public discourse isn't a thing, and the idea that a public that can't reason needs to be manipulated by elites into doing good things rather than bad things.
+
My favorite Ben Hoffman post is ["The Humility Argument for Honesty"](http://benjaminrosshoffman.com/humility-argument-honesty/). It's sometimes argued that the main reason to be honest is to be trusted by others. (As it is written, ["[o]nce someone is known to be a liar, you might as well listen to the whistling of the wind."](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings).) Hoffman points out another reason: we should be honest because others will make better decisions if we give them the best information available, rather than worse information that we chose to present in order to manipulate their behavior. If you want your doctor to prescribe you a particular medication, you might be able to arrange that by looking up the symptoms of an appropriate ailment on WebMD, and reporting those to the doctor. But if you report your _actual_ symptoms, the doctor can combine that information with their own expertise to recommend a better treatment.
+
If you _just_ want the public to get vaccinated, I can believe that the Pfizer/Democrats _vs._ Moderna/Republicans propaganda gambit would work. You could even do it without telling any explicit lies, by selectively citing either the protection statistics or the side-effect statistics for each vaccine, depending on whom you were talking to. One might ask: if you're not _lying_, what's the problem?
+
The _problem_ is that manipulating people into doing what you want, subject to the genre constraint of not telling any explicit lies, isn't the same thing as informing people so that they can make sensible decisions. In reality, both mRNA vaccines are very similar! It would be surprising if the one associated with my political faction happened to be good, whereas the one associated with the other faction happened to be bad. Someone who tried to convince me that Pfizer was good and Moderna was bad would be misinforming me—trying to trap me in a false reality, a world that doesn't quite make sense—with [unforeseeable consequences](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) for the rest of my decisionmaking. As someone with an interest in living in a world that makes sense, I have reason to regard this as _hostile action_, even if the false reality and the true reality both recommend the isolated point decision of getting vaccinated.
+
I'm not, overall, satisfied with the political impact of my writing on this blog. One could imagine someone who shared Yudkowsky's apparent disbelief in public reason advising me that my practice of carefully explaining at length what I believe and why has been an ineffective strategy—that I should instead clarify to myself what policy goal I'm trying to achieve, and try to figure out some clever gambit to play trans activists and gender-critical feminists against each other in a way that advances my agenda.
+
From my perspective, such advice would be missing the point. [I'm not trying to force through some particular policy.](/2021/Sep/i-dont-do-policy/) Rather, I think I know some things about the world, things I wish someone had told me earlier. So I'm trying to tell others, to help them live in a world that makes sense.
+
+-------
+
I don't, actually, expect people to spontaneously blurt out everything they believe to be true that Stalin would find offensive. "No comment" would be fine. Even selective argumentation that's clearly labeled as such would be fine.
+
+-----
+
+Michael said that we didn't want to police Eliezer's behavior, but just note that something had seemingly changed and move on. "There are a lot of people who can be usefully informed about the change," Michael said. "Not him though."
+
+That was the part I couldn't understand, the part I couldn't accept.
+
The man had rewritten my personality over the internet. Everything I do, I learned from him. He couldn't be so dense as to not even see the thing we'd been trying to point at. Like, even if he were ultimately to endorse his current strategy, he should do it on purpose rather than by accident!
+
+(Scott mostly saw it, and had [filed his honorable-discharge paperwork](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/). Anna definitely saw it, and she was doing it on purpose.)
+
+-----
+
+https://forum.effectivealtruism.org/posts/kuqgJDPF6nfscSZsZ/thread-for-discussing-bostrom-s-email-and-apology?commentId=Z7kyiPAfmtztueFFJ
+
+----
+
+"there just wasn't any reliable similarity between biology and AI" is an interesting contrast with the constant use of the evolution analogy despite credible challenges
+https://twitter.com/ESYudkowsky/status/1738464784931025333
+
+-----
+
+What if, in addition to physical punishments and Detect Thoughts, Cheliax also had Adjust Thoughts, a "gradient descent for the brain" spell (given a desired behavior, nudge the spell target's psychology to be more likely to emit that behavior)? Does Carissa still have a winning strategy? Assume whatever implementation details make for a good story. (Maybe Cheliax is reluctant to use Adjust Thoughts too much because Asmodeus wants authentic tyrannized humans, and the Adjust Thoughts sculpting makes them less tyrannized?)
+
+> One thing is sure, the path that leads to sanity and survival doesn't start with lies or with reasoning by Appeal to (Internal) Consequences.
+https://twitter.com/ESYudkowsky/status/1743522277835333865
+
+> To clarify a possibly misunderstood point: On my model, dath ilan's institutions only work because they're inhabited by dath ilani. Dath ilani invent good institutions even if they grow up in Earth; Governance on Earth lasts three days before Earth voters screw it up.
+https://twitter.com/ESYudkowsky/status/1565516610286211072
+
+July 2023
+https://twitter.com/pawnofcthulhu/status/1680840089285582848
+> man i feel like the orthodox viewpoint on this has moved on from "let's define trans women as women" to like
+> "arguing about metaphysics is boring; letting people chose pronouns as part of self-expression seems like a thing a free society should allow for the same reason we should let people choose clothes; for many social purposes trans women are in fact empirically women"
+
+(If your "moderately serious" plan for survival is ["AI research journals banned, people violating that ban hunted down with partial effectiveness"](https://twitter.com/ESYudkowsky/status/1739705063768232070), that might be your least-bad option as a consequentialist, but one of the things your consequentialist calculation should take into account is that you've declared war on people who want to do AI science on Earth.)
+
+public intellectual death
+https://scholars-stage.org/public-intellectuals-have-short-shelf-lives-but-why/