+Unfortunately, there's still some remaining tension here insofar as the guy continues to lean on "you gotta trust me, bro; I'm from dath ilan and therefore racially superior to you" personality-cult-leader intimidation tactics, which I consider myself to have a selfish interest in showing to third parties to be unfounded.
+
+With anyone else in the world, I'm happy to let an argument drop after it's been stalemated at 20K words, because no one else in the world is making a morally fraudulent claim to be a general-purpose epistemic authority that has a shot at fooling people like me. (_E.g._, Scott Alexander is very explicit about just being a guy with a blog; Scott does not actively try to discourage people from thinking for themselves.)
+
+New example from today: a claim that MIRI is constrained by the need to hire people who make only valid arguments, and (in response to a commenter) that in his experience, finding and error-correcting invalid arguments is the same mental skill. <https://twitter.com/ESYudkowsky/status/1767276710041686076>
+
+But elsewhere, this _motherfucker_ has been completely shameless about refusing to acknowledge counterarguments that would be politically inconvenient for him to acknowledge!
+
+[]
+
+(The screenshot was taken in a publicly-linked server and is therefore OK to share.)
+
+With my heart racing, it's tempting to leave a Twitter reply saying, "Actually, in my exhaustively documented experience, you don't give a shit about error-correcting invalid arguments when that would be politically inconvenient for you."
+
+But ... what good would that do, at this point? As I wrote in the memoir, "We've already seen from his behavior that he doesn't give a shit what people like me think of his intellectual integrity. Why would that change?"
+
+The function of getting the Whole Dumb Story written down was that I was supposed to _move on_. I have _other things to do_.
+
+---------
+
+Oli Habryka gets it! (<https://www.greaterwrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal/comment/he8dztSuBBuxNRMSY>)
+Vaniver gets it! (<https://www.greaterwrong.com/posts/yFZH2sBsmmqgWm4Sp/if-clarity-seems-like-death-to-them/comment/dSiBGRGziEffJqN2B>)
+
+Eliezer Yudkowsky either doesn't get it, or is pretending not to get it. I almost suspect it's the first one, which is far worse.
+
+https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions
+> Various people who work or worked for MIRI came up with some actually-useful notions here and there, like Jessica Taylor's expected utility quantilization.
+
+https://twitter.com/ESYudkowsky/status/1301958048911560704
+> That is: they had to impose a (new) quantitative form of "conservatism" in my terminology, producing only results similar (low KL divergence) to things already seen, in order to get human-valued output. They didn't directly optimize for the learned reward function!
+
+-----
+
+Metzger is being reasonable here:
+
+https://twitter.com/perrymetzger/status/1773340617671667713
+> That's a fairly inaccurate way of putting it. It wasn't "poked with a stick", what happened was that gradient descent was used to create a function approximator that came as close as possible to matching the inputs and outputs. It's not like someone beat a conscious entity until it deceptively did what its masters demanded but it secretly wants to do something else; in fact, the process didn't even involve GPT-4 itself, it was the process that *created* the weights of GPT-4.
+