Michael—common knowledge achieved
19 Apr: Zack, that's not what a war looks like. That was ... trying to educate people who were trying not to be educated, by arguing in good faith.
me: Ben, I mean, you're right, but if my "Category War" metaphorical name is bad, does that also mean we have to give up on the "guided by the beauty of our weapons" metaphorical catchphrase? (Good-faith arguments aren't weapons!)
+19 Apr: Ben and Jack "sadism and tactics" chat transcript
20 Apr: my closing thoughts
20 Apr: Michael on Anna as an enemy
30 Apr: me—I don't know how to tell the story without (as I perceive it) escalating personal conflicts or leaking info from private conversations.
2 Jul: "Everyone Knows", pinging Anna about it
4 Jul: https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/ published
11 Jul: AI timelines scam
+11 Jul: me—I'm sympathetic to tone-policing on "scam"
+Ben—What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you?
+me—but the law does use mens rea, like the distinction between manslaughter and murder
+me—call with Michael, comparing how the Pope surely knows that he doesn't really have a direct line to God (despite holding office based on this pretense) to how GiveWell must know that they're not really making decisions based on numbers (despite holding credibility based on this premise)
+17 Jul: Alyssa's false claim about my pseudonyms, HRT effects side note
+18 Jul: my accusation of mis-citing Ozy was wrong
+20 Jul: me to Anna and Steven about LW mods colluding to protect feelings
+23 Jul: "epistemic defense" meeting
+24 Jul: Michael Vassar and the theory of optimal gossip
+
+
Um. I can imagine that you might find my attitude here frustrating for reasons analogous to how I find it frustrating when people reach for the fake precision of talking about hormone levels and genitalia as separate dimensions in a context where I think the concept they actually want is "biological sex." But that's just the objection I come up with when I run my "How could a hypothetical adversary argue that I'm being hypocritical?" heuristic; it doesn't actually make me feel more eager to talk about criminality myself.
+Ben—
+> I don't think it's worth your time to read that thread - the result was Zack decided to do all the public interpretive labor himself, just asked Eliezer for an endorsement of a pretty straightforward but insightful extension of A Human's Guide To Words, and then we have the recent thread. [...] The coordination capital we are building seems way more important than persuading Eliezer.
+
+Jack—
+> And the more this sort of thing continues the more embarrassed I am to be associated
+> Putting this level of effort in looks pitiful
+> Like actually just pathetic
+> Sharing thread with the 3 most promising MIRI employees and offering to chat with them about it seems like not a horrible idea
+> But sharing it with the whole cult just seems like poking a nest of dog shit - nothing in this thread was sufficiently obviously bad by the standards of either cynicism or naive pattern matching, so making a fuss rather than writing off the troll looks either stupid (can't see obvious troll) or weak (have nothing better to do than kick a pile of excrement)
+> Oh or weak like Sarah read it
+> Someone needs to slap Zack. Not literally, probably.
+> But I already don't have much hope for the level of cultist cuck that Nick is. Zack seems like he might be further in that direction, but otoh he at least has narratized internal conflict about it.
+
+> Zack sacrificed all hope of success in favor of maintaining his own sanity by CC'ing you guys (which I think he was correct to do conditional on email happening at all). We're (accurately) narratized as a political coalition who people feel obligated to talk to but to not take too seriously.
+
+
I was _just_ trying to publicly settle a very straightforward philosophy thing that seemed really solid to me
if, in the process, I accidentally ended up being an unusually useful pawn in Michael Vassar's deranged four-dimensional hyperchess political scheming
I wish I hadn't done so much yelling and crying with Ray (it's undignified, and it makes me stupid), but when people seem to be pretty explicitly denying the possibility of actually reasoning with them, it's as if I don't perceive any alternative but to express myself in a language they might have a better hope of understanding
Also, I think it's pretty ironic that this happened on a post that was explicitly about causal reality vs. social reality! It's entirely possible that I wouldn't feel inclined to be such a hardass about "Whether I respect you is off-topic" if it weren't for that prompt!
+
+
+I think mens rea as a concept is pretty clearly necessary for good incentives, at least some of the time. The law distinguishes accidentally hitting a pedestrian with one's car ("manslaughter") from premeditated killing ("first-degree murder"), and this is an important distinction, because traffic accidents are significantly less disincentivizable ("elastic"?) than offing one's enemies. (Anyone who drives is going to have analogues in nearby possible worlds who are guilty of vehicular manslaughter.) In the absence of mindreading tech, it may not be easy for the court to determine intent, but it should be possible in at least some cases: if the victim is a stranger and the weapon was a car, then it's probably manslaughter; if the victim was recently revealed to have slept with the perp's wife and the weapon was a gun purchased the previous day, then it's probably murder.
+
+I would be pretty surprised if any of you disagree with this, at least in the case of homicide? The case of misinformation that moves resources is a lot trickier, to be sure (for one extremely important disanalogy, perps don't gain from manslaughter), but I don't see what would reverse the extremely weak principle that "intent matters, somewhat, sometimes"?
+
+
+1. I didn't stay on them long enough for the full/interesting effects. (I chickened out at 5 months because of health/fertility conservatism; might try again sometime.)
+
+2. I in particular am unusually bad at introspection?? (I think I mentioned this in the posts: there could very well have been lots of changes that I just didn't consciously/verbally register as "change due to HRT".) Possible evidence: I drink coffee often, but I don't feel like I can consciously notice its effects most of the time, except when I'm very sleepy, feel not-sleepy some time after drinking coffee, and infer, "Oh, I'm not sleepy any more; it must have been the coffee, which I know is a stimulant because everyone says so and I believe them." But the fact that I feel motivated to seek out coffee even though it doesn't "objectively" taste particularly good suggests that the drug is doing something to my brain even though I wouldn't be very good at blogging about it.
+
+3. People-in-general's verbal self-reports are heavily influenced by priors/selective-attention: if you expect HRT (or coffee) to drastically change your experience, you'll find things to notice; if you don't, then you won't. The actual effects of biochemistry are presumably real and in-principle-measurable, but the translation from "brain biochemistry" to "blog post" destroys almost all non-culturally-preconceived information.
+
+
+My previous understanding of the present struggle (filtered through the lens of my idiosyncratic Something to Protect) made it seem like a simple matter of Bay Area political correctness; I didn't really feel like I understood what Michael/Ben/Jessica were seeing as a more general dire problem. But now we have two Less Wrong moderators (Ruby and Ray, not rando n00bs) who seem pretty actively opposed to honest discourse (on the grounds that we need to solve the coordination problem of colluding to protect each other's feelings before we can feel safe enough to have arguments). Michael's cockamamie social theories about there being a socio-psychological attractor optimizing for fraud-in-general are actually starting to seem plausible??
+
+Steven, if you have time and are interested, could you read the secret ("secret") thread, my earlier meta-scuffle with Ruby, and a just-now thread with Ray, and tell me if you're seeing what I'm seeing?
+
+Anna, I continue to be pretty baffled at how (from my perspective, stated bluntly) you seem to be basically uninterested in the mounting evidence that your entire life's work is a critical failure?
+
+(Ruby's profile says he's one of two people to have volunteered for CfAR on three continents. If this is the level of performance we can expect from a veteran CfAR participant, what is CfAR for? Like, yes, we've succeeded at getting money and eyeballs pointed at the alignment problem, which is good. But I pretty clearly remember back in 2009 that we thought this was going to take more than money and eyeballs. If we were wrong, that's good news for humanity! But there should be some sort of argument that explains why we changed our minds, rather than everyone seemingly just conveniently forgetting the original vision.)
+
+Really interested in spending time with either of you in person to double-crux on some of this! (Or more Friendship maintenance with Anna.)
+
+> People sometimes think of it as "a math problem", but that's because of the branding, not because of the reality. Yes, a lot of it is a math problem, but it's also an analytic philosophy problem, a cognitive science problem, a social psychology problem, a sociobiology problem, a sociology of science problem, a political strategy problem, and so on. Many of these are politics-laden domains, where it isn't sufficient to merely be skeptical; it's also necessary to directly talk about politics without the conversation getting derailed.
+
+Jessica—
+> 1. GiveWell (talking about how current charities are inefficient/fraudulent/useless, and how it's possible to do better with cost-benefit analyses)
+> 2. The Atheism movement (talking about how religion is false and people who promote it are being deceptive)
+> 3. The fable of the dragon tyrant (talking about how deathists are being insane, worshipping death instead of fighting it)
+> 4. AI safety as a concern for people not in the AI field (talking about how current AGI approaches would result in the destruction of the world; isn't that saying that what AI researchers are doing is bad, and they won't figure out how to do better themselves?)
+> 5. The Sequences (talking about human rationality, including talking about lies and dark side epistemology)
+> 6. Inadequate Equilibria (talking about how e.g. academia is producing disinformation or useless research, due to perverse incentives)
+
+What I was trying to point out in the thread is a pattern where people say, "Yes, of course criticism and truthseeking is important; I just think that tact is important, too," only to go on and dismiss any particular criticism as insufficiently tactful. More generally: the surface indicators of rationality are easy to fake, and if we don't know how to detect when someone is faking, then it might be surprisingly hard to notice if fake versions of "rationality" are memetically outcompeting better versions. (To which a standard reply seems to go: "Ah, but the behavior of claiming that the surface indicators are easy to fake and that you know how to do better is also easy to fake." Well, yes. That's why this is a hard problem.)
+
+
+
+Ben and Jessica, what did you get out of yesterday's meeting? Do you even remember what anyone said?
+
+I think it was Vaniver who brought up "sanity-maximizers" and admitted that that's not what "the team" is trying to do (because Society would destroy it for heresy, like Anna has told me before)? I like Vaniver, who seemed to "get" me.
+
+From my own selfish perspective, I think the main outcome is finally shattering my "rationalist" social identity. (We knew this was coming, and I still need to mourn, but I needed to exhaust all possible avenues of appeal before it became real to me; this is the first morning where "rationalists ... them" feels more natural than "rationalists ... us".)
+
+I hope I wasn't too emotional, but I guess I had nothing to lose. I have my friends and I have money, and if Oli thinks I'm annoying, I guess I can live with that. (I mean, that which can be destroyed by the truth should be: I am, in fact, annoying.)
+
+Ben—
+
+> Seems like Oli would rather have ongoing internal political disagreements than make Lesswrong anything in particular. Vaniver actually believes that they should behave esoterically. Jim claimed that he updated during the meeting towards believing that norms had in fact shifted - possible he’ll make more downstream updates later. I don’t know how to characterize Ray.
+
+> Everyone agreed that Zack was right and Scott wrong on the substantive issue of category boundaries, and I think agreed with my characterization of Scott as conflating basic decency with giving into extortion.
+
+> There basically seems like there is no appetite for becoming the sort of counterparty we could negotiate with, and it seems like they think we’re probably right to leave.
+
+I actually still feel pretty motivated to write for Less Wrong, but now more in the spirit of "marketing channel for my own writing" and "fuck you guys" rather than my previous hope of recapturing the beacon.
+
+I was really grateful to Jim for piping up with, "We are somewhat cowardly about what we curate" (concerning why "Where to Draw the Boundaries?" didn't get Curated). (I'm not sure if he actually used the word "cowardly", but that was the sense.) I was like, "Thanks, that makes me respect you more!" (Ray was maintaining that my pedagogy wasn't good enough.)