+**If you want to contest any purported factual inaccuracies or my interpretation of the "no direct references to private conversations" privacy norm, get back to me before [date].** If you have anything else to say for yourself, you can say it in the public comments section.
+
+(Or if you had something to say privately, I would _listen_; it just doesn't seem like a good use of time. I think it's undignified that I have reason to publish a post titled "Why I Don't Trust Eliezer Yudkowsky's Intellectual Honesty", but you seem very committed to not meeting my standards of intellectual honesty, so I have an interest in telling everyone else that.)
+
+
+[13 November: #drama discussion today (https://discord.com/channels/401181628015050773/458419017602826260/1041586188961714259) makes me feel like I don't want to put up a "Why I Don't Trust ..." post, like it would be too cruel]
+
+[no awareness that people like me or Michael or Jessica would consider this a betrayal coming from the author of the Sequences (even if it wouldn't be a betrayal coming from a generic public intellectual)]
+
+----
+
+(If you weren't interested in meeting my standards for intellectual honesty before, it's not clear why you would change your mind just because I spent 80,000 words cussing you out to everyone else.)
+
+----
+
+https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/instead-of-technical-research-more-people-should-focus-on
+
+I hate that my religion is bottlenecked on one guy
+
+https://twitter.com/zackmdavis/status/1405032189708816385
+> Egregore psychology is much easier and more knowable than individual human psychology, for the same reason macroscopic matter is more predictable than individual particles. But trying to tell people what the egregore is doing doesn't work because they don't believe in egregores!!
+
+20 June 2021, "The egregore doesn't care about the past", thematic moments at Valinor
+
+You don't want to have a reputation that isn't true; I've screwed up confidentiality before, so I don't want a "good at keeping secrets" reputation; if Yudkowsky doesn't want to live up to the standard of "not being a partisan hack", then ...
+
+Extended analogy between "Scott Alexander is always right" and "Trying to trick me into cutting my dick off"—in neither case would any sane person take it literally, but it's pointing at something important (Scott and EY are trusted intellectual authorities, rats are shameless about transition cheerleading)
+
+Scott's other 2014 work doesn't make the same mistake
+
+"The Most Important Scarce Resource is Legitimacy"
+https://vitalik.ca/general/2021/03/23/legitimacy.html
+
+"English is fragile"
+
+https://twitter.com/ESYudkowsky/status/1590460598869168128
+> I now see even foom-y equations as missing the point.
+(!)
+
+----
+
+dialogue with a pre-reader on "Challenges"—
+
+> More to the point, there's a kind of anthropic futility in these paragraphs, anyone who needs to read them to understand won't read them, so they shouldn't exist.
+
+I think I'm trying to humiliate the people who are pretending not to understand in front of the people who do understand, and I think that humiliation benefits from proof-of-work? Is that ... not true??
+
+> No, because it makes you look clueless rather than them look clueless.
+
+-----
+
+compare EY and SBF
+
+Scott Aaronson on the Times's hit piece on Scott Alexander—
+https://scottaaronson.blog/?p=5310
+> The trouble with the NYT piece is not that it makes any false statements, but just that it constantly _insinuates_ nefarious beliefs and motives, via strategic word choices and omission of relevant facts that change the emotional coloration of the facts that it _does_ present.
+
+https://graymirror.substack.com/p/the-journalist-rationalist-showdown
+
+https://twitter.com/jstn/status/1591088015941963776
+> 2023 is going to be the most 2005 it's been in years
+
+-------
+
+re the FTX debacle, Yudkowsky retweets Katja:
+
+https://twitter.com/KatjaGrace/status/1590974800318861313
+> So I'd advocate for instead taking really seriously when someone seems to be saying that they think it's worth setting aside integrity etc for some greater good
+
+I'm tempted to leave a message in #drama asking if people are ready to generalize this to Kolmogorov complicity (people _very explicitly_ setting aside integrity &c. for the greater good of not being unpopular with progressives). It's so appropriate!! But it doesn't seem like a good use of my diplomacy budget relative to finishing the memoir—the intelligent social web is predictably going to round it off to "Zack redirecting everything into being about his hobbyhorse again, ignore". For the same reason, I was right to hold back my snarky comment about Yudkowsky's appeal to integrity in "Death With Dignity": the universal response would have been, "read the room." Commenting here would be bad along the same dimension, albeit not as extreme.
+
+------
+
+effects on my social life—calculating what I'm allowed to say; making sure I contribute non-hobbyhorse value to offset my hobbyhorse interruptions
+
+----
+
+https://forum.effectivealtruism.org/posts/FKJ8yiF3KjFhAuivt/impco-don-t-injure-yourself-by-returning-ftxff-money-for
+when that happens in EA, I often suspect that if I don't speak the contrary viewpoint, nobody else will dare to.
+
+Michael, June 2019
+> If clarity seems like death to them and like life to us, and we don't know this, IMHO that's an unpromising basis for friendship
+
+------
+
+Piper and Yudkowsky on privacy norms—
+
+https://twitter.com/KelseyTuoc/status/1591996891734376449
+> if such promises were made, they should be kept, but in practice in the present day, they often aren't made, and if you haven't explicitly promised a source confidentiality and then learn of something deeply unethical from them you should absolutely whistleblow.
+
+https://twitter.com/ESYudkowsky/status/1592002777429180416
+> I don't think I'd go for "haven't explicitly promised" here but rather "if you're pretty sure there was no such informal understanding on which basis you were granted that access and information".
+
+------
+
+14 November conversation: he put a checkmark emoji on my explanation of why giving up on persuading people via methods that discriminate between truth and falsehood amounts to giving up on the concept of intellectual honesty and choosing instead to become a propaganda AI, which made me feel much less ragey https://discord.com/channels/401181628015050773/458419017602826260/1041836374556426350
+
+The problem isn't just the smugness and condescension; it's the smugness and condescension when he's in the wrong and betraying the principles he laid out in the Sequences and knows it; I don't want to be lumped in with anti-arrogance that's not sensitive to whether the arrogance is in the right
+
+My obsession must look as pathetic from the outside as Scott Aaronson's—why doesn't he laugh it off, who cares what SneerClub thinks?—but in my case, the difference is that I was betrayed
+
+-----
+
+dath ilan on advertising (https://www.glowfic.com/replies/1589520#reply-1589520)—
+> So, in practice, an ad might look like a picture of the product, with a brief description of what the product does better that tries to sound very factual and quantitative so it doesn't set off suspicions. Plus a much more glowing quote from a Very Serious Person who's high enough up to have a famous reputation for impartiality, where the Very Serious Person either got paid a small amount for their time to try that product, or donated some time that a nonprofit auctioned for much larger amounts; and the Very Serious Person ended up actually impressed with the product, and willing to stake some of their reputation on recommending it in the name of the social surplus they expect to be thereby produced.
+
+
+I wrote a Python script to replace links to _Slate Star Codex_ with archive links: http://unremediatedgender.space/source?p=Ultimately_Untrue_Thought.git;a=commitdiff;h=21731ba6f1191f1e8f9#patch23
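+
+(The actual script is in that commit; a minimal sketch of the same idea, with the directory layout and archive mirror as stand-in assumptions, might look like this:)
+
+```python
+# sketch only: rewrite slatestarcodex.com links in Markdown sources to
+# point at an archive mirror; paths and mirror choice are placeholders
+import pathlib
+import re
+
+CONTENT_DIR = pathlib.Path("content")  # hypothetical source directory
+
+LINK_RE = re.compile(r"https?://(?:www\.)?slatestarcodex\.com[^\s)\"]*")
+
+def archive_url(match: re.Match) -> str:
+    # archive.today-style mirrors resolve a URL appended after a slash
+    # to the latest snapshot of that page
+    return "https://archive.ph/" + match.group(0)
+
+for path in CONTENT_DIR.glob("**/*.md"):
+    text = path.read_text(encoding="utf-8")
+    new_text = LINK_RE.sub(archive_url, text)
+    if new_text != text:
+        path.write_text(new_text, encoding="utf-8")
+```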
+
+John Wentworth—
+> I chose the "train a shoulder advisor" framing specifically to keep my/Eliezer's models separate from the participants' own models.
+https://www.greaterwrong.com/posts/Afdohjyt6gESu4ANf/most-people-start-with-the-same-few-bad-ideas#comment-zL728sQssPtXM3QD9
+
+https://twitter.com/ESYudkowsky/status/1355712437006204932
+> A "Physics-ist" is trying to engage in a more special human activity, hopefully productively, where they *think* about light in order to use it better.
+
+Wentworth on my confusion about going with the squared-error criterion in "Unnatural Categories"—
+> I think you were on the right track with mutual information. The key insight here is not an insight about what metric to use, it's an insight about the structure of the world and our information about the world. [...] If we care more about the rough wall-height than about brick-parity, that's because the rough wall-height is more relevant to the other things which we care about in the world. And that, in turn, is because the rough wall-height is more relevant to more things in general. Information about brick-parity just doesn't propagate very far in the causal graph of the world; it's quickly wiped out by noise in other variables. Rough wall-height propagates further.
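+
+(A toy simulation, mine rather than Wentworth's, with made-up brick counts and noise scales, shows the asymmetry he's pointing at: coarse wall-height survives a chain of noisy observations, while brick-parity is destroyed almost immediately.)
+
+```python
+# toy model: information propagating through a chain of noisy "measurements"
+import numpy as np
+
+rng = np.random.default_rng(0)
+n_bricks = rng.integers(50, 150, size=100_000)  # true brick count per wall
+height = n_bricks * 0.1                         # rough wall-height (meters)
+parity = n_bricks % 2                           # brick-parity: one fine-grained bit
+
+# each step downstream adds a little noise, like sloppy re-measurement
+signal = height.copy()
+for _ in range(5):
+    signal = signal + rng.normal(scale=0.5, size=signal.shape)
+
+# the coarse variable is still highly recoverable far away...
+print(np.corrcoef(height, signal)[0, 1])   # ~0.93
+
+# ...but the best guess at parity from the distant signal is near chance
+inferred_parity = np.round(signal / 0.1).astype(int) % 2
+print((inferred_parity == parity).mean())  # ~0.5
+```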
+
+not interested in litigating "lying" vs. "rationalizing" vs. "misleading-by-implicature"; you can be _culpable_ for causing people to be misled in a way that isn't that sensitive to what exactly was going on in your head
+
+-----
+
+https://www.facebook.com/yudkowsky/posts/pfbid02ZoAPjap94KgiDg4CNi1GhhhZeQs3TeTc312SMvoCrNep4smg41S3G874saF2ZRSQl?comment_id=10159410429909228&reply_comment_id=10159410748194228
+
+> Zack, and many others, I think you have a vulnerability where you care way too much about the reasons that bullies give for bullying you, and the bullies detect that and exploit it.
+
+
+
+> Everyone. (Including organizers of science fiction conventions.) Has a problem of "We need to figure out how to exclude evil bullies." We also have an inevitable Kolmogorov Option issue but that should not be confused with the inevitable Evil Bullies issue, even if bullies attack through Kolmogorov Option issues.
+
+----
+
+Someone else's Dumb Story that you can read about on someone else's blog
+
+all he does these days is sneer about Earth people, but he _is_ from Earth—carrying on the memetic legacy of Richard Feynman and Douglas Hofstadter and Greg Egan
+
+"Robust Cooperation in the Prisoner's Dilemma" https://arxiv.org/abs/1401.5577
+
+https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business
+> One of those interests is the human pursuit of truth, which has strengthened slowly over the generations (for there was not always Science). I wish to strengthen that pursuit further, in _this_ generation. That is a wish of mine, for the Future. For we are all of us players upon that vast gameboard, whether we accept the responsibility or not.
+
+https://www.washingtonexaminer.com/weekly-standard/be-afraid-9802
+
+https://www.lesswrong.com/posts/TQSb4wd6v5C3p6HX2/the-pascal-s-wager-fallacy-fallacy#pART2rjzcmqATAZio
+> egoists, but not altruists, should immediately commit suicide in case someone is finishing their AI project in a basement, right now.
+(I remembered this as granting some plausibility to a sudden Singularity even then, but in context it's more clearly in thought-experimental mode)
+
+from "Go Forth and Create the Art"—
+> To the best of my knowledge there is _no_ true science that draws its strength from only one person. To the best of my knowledge that is _strictly_ an idiom of cults. A true science may have its heroes, it may even have its lonely defiant heroes, but _it will have more than one_.
+
+contrast Sequences-era "Study Science, Not Just Me" with dath ilan sneering at Earth
+
+I have no objection to the conspiracies in Brennan's world! Because Brennan's world was just "here's a fictional world with a different social structure" (Competitive Conspiracy, Cooperative Conspiracy, &c.); sure, there was a post about how Eld Science failed, but that didn't seem like _trash talk_ in the same way
+
+contrast the sneering at Earth people with the attitude in "Whining-Based Communities"
+
+from "Why Quantum?" (https://www.lesswrong.com/posts/gDL9NDEXPxYpDf4vz/why-quantum)
+> But would you believe that I had such strong support, if I had not shown it to you in full detail? Ponder this well. For I may have other strong opinions. And it may seem to you that _you_ don't see any good reason to form such strong beliefs. Except this is _not_ what you will see; you will see simply that there _is_ no good reason for the strong belief, that there _is_ no strong support one way or the other. For our first-order beliefs are how the world seems to _be_. And you may think, "Oh, Eliezer is just opinionated—forming strong beliefs in the absence of lopsided support." And I will not have time to do another couple of months worth of blog posts.
+>
+> I am _very_ far from infallible, but I do not hold strong opinions at random.
+
+Another free speech exchange with S.K. in 2020: https://www.lesswrong.com/posts/YE4md9rNtpjbLGk22/open-communication-in-the-days-of-malicious-online-actors?commentId=QoYGQS52HaTpeF9HB
+
+https://www.lesswrong.com/posts/hAfmMTiaSjEY8PxXC/say-it-loud
+
+Maybe lying is "worse" than rationalizing, but if you can't hold people culpable for rationalization, you end up with a world that's bad for broadly the same reasons that a world full of liars is bad: we can't steer the world to good states if everyone's map is full of falsehoods that locally benefitted someone
+
+http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/
+
+------
+
+https://discord.com/channels/936151692041400361/1022006828718104617/1047796598488440843
+
+I'm still pretty annoyed by how easily people are falling for this _ludicrous_ "Ah, it would be bad if people _on Earth_ tried to do this, but it's OK _in dath ilan_ because of how sane, cooperative, and kind they are" excuse.
+
+Exception Handling is depicted as _explicitly_ having a Fake Conspiracy section (<https://glowfic.com/replies/1860952#reply-1860952>). Why is that any more okay than if FTX or Enron explicitly had a Fake Accounting department?
+
+Isn't dath ilan just very straightforwardly being _more_ corrupt than Earth here? (Because FTX and Enron were _subverting_ our usual governance and oversight mechanisms, as contrasted to the usual governance mechanisms in dath ilan _explicitly_ being set up to deceive the public.)
+
+I understand that you can _assert by authorial fiat_ that "it's okay; no one is 'really' being deceived, because 'everybody knows' that the evidence for Sparashki being real is too implausible", and you can _assert by authorial fiat_ that it's necessary to save their world from AGI and mad science.
+
+But someone writing a story about "Effective Altruism" (instead of "Exception Handling") on "Earth" (instead of "dath ilan") could just as easily _assert by authorial fiat_, "it's okay, no one is 'really' being defrauded, because 'everybody knows' that crypto is a speculative investment in which you shouldn't invest anything you can't afford to lose".
+
+What's the difference? Are there _generalizable reasons_ why fraud isn't worth it (not in expectation, and not in reality), or is it just that Sam and Caroline weren't sane, cooperative, and kind enough to pull it off successfully?
+
+What is "It would be OK in dath ilan, but it's not OK on Earth" even supposed to _mean_, if it's not just, "It's OK for people who genetically resemble Eliezer Yudkowsky to deceive the world as long as they have a clever story for why it's all for the greater good, but it's not OK for you, because you're genetically inferior to him"?
+
+https://discord.com/channels/936151692041400361/1022006828718104617/1047374488645402684
+
+A. J. Vermillion seems to be complaining that by not uncritically taking the author's assertions at face value, I'm breaking the rules of the literary-criticism game—that if the narrator _says_ Civilization was designed to be trustworthy, I have no license to doubt that it "actually" is.
+
+And I can't help but be reminded of a great short story that I remember reading back in—a long time ago
+
+I think it must have been 'aught-nine?
+
+yeah, it had to have been _late_ in 'aught-nine, because I remember discussing it with some friends when I was living in a group house on Benton street in Santa Clara
+
+anyway, there was this story about a guy who gets transported to a fantasy world where he has a magic axe that yells at him sometimes and he's prophesied to defeat the bad guy and choose between Darkness and Light, and they have to defeat these ogres to reach the bad guy's lair
+
+and when they get there, the bad guy (spoilers) ||_accuses them of murder_ for killing the ogres on the way there!!||
+
+and the moral was—or at least, the simpler message I extracted from it was—there's something messed-up about the genre convention of fantasy stories where readers just naïvely accept the author's frame, instead of looking at the portrayed world with fresh eyes and applying their _own_ reason and their _own_ morality to it—
+
+That if it's wrong to murder people with a different racial ancestry from you _on Earth_, it's _also_ wrong when you're in a fantasy kingdom setting and the race in question are ogres.
+
+And that if it's wrong to kill people and take their stuff _on Earth_, it's _also_ wrong when you're in a period piece about pirates on the high seas.
+
+And (I submit) if it's wrong to deceive the world by censoring scientific information about human sexuality _on Earth_, it's _also_ wrong when you're in a social-science-fiction setting about a world called dath ilan.
+
+(You can _assert by authorial fiat_ that Keltham doesn't mind and is actually grateful, but you could also _assert by authorial fiat_ that the ogres were evil and deserved to die.)
+
+but merely human memory fades over 13 years and merely human language is such a lossy medium; I'm telling you about the story _I_ remember, and the moral lessons _I_ learned from it, which may be very different from what was actually written, or from what the author was trying to teach
+
+maybe I should make a post on /r/tipofmytongue/, to ask them—
+
+_What was the name of that story?_
+
+_What was the name of that author?_
+
+(What was the name of the _antagonist_ of that story?—actually, sorry, that's a weird and random question; I don't know why my brain generated that one.)
+
+but somehow I have a premonition that I'm not going to like the answer, if I was hoping for more work from the same author in the same spirit
+
+that the author who wrote "Darkness and Light" (or whatever the story was called) died years ago
+
+or has shifted in her emphases in ways I don't like
+
+------
+
+"the absolute gall of that motherfucker"
+https://www.lesswrong.com/posts/8KRqc9oGSLry2qS9e/what-motte-and-baileys-are-rationalists-most-likely-to?commentId=qFHHzAXnGuMjqybEx
+
+In a discussion on the Eliezerfic Discord server, I've been arguing that the fact that dath ilan tries to prevent obligate sexual sadists from discovering that they're sadists (because the unattainable rarity of corresponding masochists would make them sad) contradicts the claim that dath ilan's art of rationality is uniformly superior to that of Earth's: I think that readers of _Overcoming Bias_ in 2008 had a concept of it being virtuous to face uncomfortable truths, and therefore would have overwhelmingly rejected utilitarian rationales for censoring scientific information about human sexuality.
+
+------
+
+https://archive.vn/hlaRG
+
+> Bankman-Fried has been going around on a weird media tour whose essential message is "I made mistakes and was careless, sorry," presumably thinking that that is a _defense_ to fraud charges, that "carelessness" and "fraud" are entirely separate categories [...] If you attract customers and investors by saying that you have good risk management, and then you lose their money, and then you say "oh sorry we had bad risk management," that is not a defense against fraud charges! That is a confession!
+
+https://twitter.com/ESYudkowsky/status/1602215046074884097
+> If you could be satisfied by mortal men, you would be satisfied with mortal reasoning and mortal society, and would not have gravitated toward the distant orbits of my own presence.
+
+it's just so weird that this cult that started out with "People can stand what is true, because they are already doing so" has progressed to "But what if a prediction market says they can't??"
+
+"The eleventh virtue is scholarship! Study many sciences and absorb their power as your own ... unless a prediction market says that would make you less happy" just doesn't have the same ring to it, you know?
+"The first virtue is curiosity! A burning itch to know is higher than a solemn vow to pursue truth. But higher than both of those, is trusting your Society's institutions to tell you which kinds of knowledge will make you happy" also does not have the same ring to it, even if you stipulate by authorial fiat that your Society's institutions are super-competent, such that they're probably right about the happiness thing
+
+------
+
+so, I admitted to being a motivated critic (having a social incentive to find fault with dath ilan), but said that I nevertheless only meant to report real faults rather than fake faults; Yudkowsky pointed out that that's not good enough (you also need to be looking for evidence on the other side), said that he therefore didn't consider the criticism to be coming from a peer, and challenged me to say things about how the text valorizes truth
+(and I didn't point out that whether or not I'm a "peer"—which I'm clearly not if you're measuring IQ or AI alignment contributions or fiction-writing ability—shouldn't be relevant given https://www.lesswrong.com/posts/5yFRd3cjLpm3Nd6Di/argument-screens-off-authority, because I was eager to be tested and eager to pass the test)
+and I thought I wrote up some OK answers to the query
+(I definitely didn't say, "that's impossible because Big Yud and Linta are lying liars who hate truth")
+but he still wasn't satisfied, on the grounds that I was focusing too much on what the characters said and not on what the universe said, and then when I offered one of those, he still wasn't satisfied (because the characters had already remarked on it)
+and I got the sense that he wanted Original Seeing, and I thought, and I came up with some Original Philosophy that connected the universe of godagreements to some of my communication theory ideas, and I was excited about it
+so I ran with it
+
+[...]
+
+Zack M. Davis — Today at 10:18 PM
+but the Original Philosophy that I was legitimately proud of, wasn't what I was being tested on; it legitimately looks bad in context
+
+--------
+
+OK, so I'm thinking my main takeaway from the Eliezerfic fight is that I need to edit my memoir draft to be both _less_ "aggressive" in terms of expressing anger (which looks bad and _far more importantly_ introduces distortions), and _more_ "aggressive" in terms of calling Yudkowsky intellectually dishonest (while being _extremely clear_ about explaining _exactly_ what standards I think are not being met and why that's important, without being angry about it).
+
+The standard I'm appealing to is, "It's intellectually dishonest to make arguments and refuse to engage with counterarguments on political grounds." I think he's made himself very clear that he doesn't give a sh—
+
+(no, need to avoid angry language)
+
+—that he doesn't consider himself bound by that standard.