Thus, if the extension of common words like 'woman' and 'man' is an issue of epistemic importance that rationalists should care about, then presumably so was Twitter's anti-misgendering policy—and if it _isn't_ (because you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning) then I wasn't sure what was _left_ of the "Human's Guide to Words" Sequence if the [37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) needed to be retracted.
I think I _am_ standing in defense of truth when I have an _argument_ for _why_ my preferred word usage does a better job at "carving reality at the joints", and the one bringing my usage explicitly into question doesn't have such an argument. As such, I didn't see the _practical_ difference between "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", and "I can define a word any way I want." About which, again, an earlier Eliezer Yudkowsky had written:
> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
Yudkowsky probably didn't think much of _Atlas Shrugged_ (judging by [an offhand remark by our protagonist in _Harry Potter and the Methods_](http://www.hpmor.com/chapter/20)), but I kept thinking of the scene[^atlas-shrugged] where our heroine Dagny Taggart entreats the great Dr. Robert Stadler to denounce [an egregiously deceptive but technically-not-lying statement](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly) by the State Science Institute, whose legitimacy derives from its association with his name. Stadler has become cynical in his old age and demurs, disclaiming all responsibility: "I can't help what people think—if they think at all!" ... "How can one deal in truth when one deals with the public?"
[^atlas-shrugged]: In Part One, Chapter VII, "The Exploiters and the Exploited".
At this point, I still trusted Yudkowsky to do better than an Ayn Rand villain; I had faith that _Eliezer Yudkowsky_ could deal in truth when he deals with the public.
Ziz recounted [her](/2019/Oct/self-identity-is-a-schelling-point/) story [of Anna's discrimination](https://sinceriously.fyi/net-negative), how she engaged in [conceptual warfare](https://sinceriously.fyi/intersex-brains-and-conceptual-warfare/) to falsely portray Ziz as a predatory male. I was unimpressed: in my worldview, I didn't think Ziz had the right to say "I'm not a man," and expect people to just believe that. (I remember at one point, Ziz answered a question with, "Because I don't run off masochistic self-doubt like you." I replied, "That's fair.") But I did respect how Ziz actually believed in an intersex brain theory: in Ziz and Gwen's worldview, people's genders were a _fact_ of the matter, not just a manipulation of consensus categories to make people happy.
Probably the most ultimately significant part of this meeting for future events was Michael verbally confirming to Ziz that MIRI had settled with a disgruntled former employee, Louie Helm, who had put up a website slandering them. I don't actually know the details of the alleged settlement. (I'm working off of [Ziz's notes](https://sinceriously.fyi/intersex-brains-and-conceptual-warfare/) rather than particularly remembering that part of the conversation clearly myself; I don't know what Michael knew.) What was significant was that if MIRI _had_ paid Helm as part of an agreement to get the slanderous website taken down, then, whatever the nonprofit best-practice books might have said about whether this was a wise thing to do when facing a dispute from a former employee, that would decision-theoretically amount to a blackmail payout, which seemed to contradict MIRI's advocacy of timeless decision theories (according to which you [shouldn't be the kind of agent that yields to extortion](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/)).
----
Despite Math and Wellness Month and my "intent" to take a break from the religious civil war, I kept reading _Less Wrong_ during May 2019, and ended up scoring a couple of victories in the civil war (at some cost to Wellness).
MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if there was some chance that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and commented about it: if you want to believe that _x_ belongs to category _C_, you might try redefining _C_ in order to make the question "Is _x_ a _C_?" come out "Yes", but you can only do so at the expense of making _C_ less useful.
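(To make the information-theoretic claim concrete, here is a toy calculation of my own, not anything from Garrabrant's post: the Shannon entropy of a yes/no answer falls to zero as the "Yes" becomes certain.)

```python
import math

def answer_entropy(p_yes):
    """Shannon entropy (in bits) of a yes/no answer with P(yes) = p_yes."""
    if p_yes in (0.0, 1.0):
        return 0.0  # a certain answer transmits no information
    p_no = 1.0 - p_yes
    return -(p_yes * math.log2(p_yes) + p_no * math.log2(p_no))

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"P(yes) = {p:<4}: {answer_entropy(p):.3f} bits per answer")
```

A coin-flip answer carries a full bit; a guaranteed "Yes" carries nothing, which is the sense in which a "Yes" that couldn't have been a "No" is worthless as evidence.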
[TODO: explain scuffle on "Yes Requires the Possibility"—
 * Vanessa comment on hobbyhorses and feeling attacked
 * my reply about philosophy got politicized, and MDL/atheism analogy
 * Ben vs. Said on political speech and meta-attacks; Goldenberg on feelings
 * 139-comment trainwreck got so bad, the mods manually moved the comments into their own thread https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
 * based on the karma scores and what was said, this went pretty well for me and I count it as a victory

]
On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite _almost literally_ any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: _either_ Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, _or_ one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. The mod [was persuaded on reflection](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."
[TODO:
"victories" weren't comforting when I resented this becoming a political slapfight at all—a lot of the objections in the Vanessa thread were utterly insane
I wrote to Anna and Steven Kaas (who I was trying to "recruit" onto our side of the civil war) ]
+In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what _is_ one's real work. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason", in a way that they don't hurt the mission of "make a lot of money writing software."
[TODO: I don't know if you caught the shitshow on Less Wrong, but isn't it terrifying that the person who objected was a goddamned _MIRI research associate_ ... not to demonize Vanessa because I was just as bad (if not worse) in 2008 (/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism), but in 2008 we had a culture that could _beat it out of me_]
[TODO: Steven's objection:
> the Earth's gravitational field directly hurts NASA's mission and doesn't hurt Paul Graham's mission, but NASA shouldn't spend any more effort on reducing the Earth's gravitational field than Paul Graham.
I agreed that tractability needs to be addressed, but ...
]
I felt like—we were in a coal-mine, and my favorite one of our canaries just died, and I was freaking out about this, and representatives of the Caliphate (Yudkowsky, Alexander, Anna, Steven) were like, Sorry, I know you were really attached to that canary, but it's just a bird; you'll get over it; it's not really that important to the coal-mining mission.
-In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what _is_ one's real work. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason", in a way that they don't hurt the mission of "make a lot of money writing software."
And I was like, I agree that I was unreasonably emotionally attached to that particular bird, which is the direct cause of why I-in-particular am freaking out, but that's not why I expect _you_ to care. The problem is not the dead bird; the problem is what the bird is _evidence_ of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claim to have spotted their own dead canaries. I feel like the old-timer Rationality Elders should be able to get on the same page about the canary-count issue?
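(A toy numerical version of the fork, with made-up numbers of my own: conditioning on the dead canary raises the probability of human danger only by way of the inferred mine-gas.)

```python
# Fork: mine-gas is a common cause of canary-death and human-danger.
# All probabilities below are invented purely for illustration.
P_gas = {True: 0.1, False: 0.9}
P_canary_dies = {True: 0.95, False: 0.01}   # P(canary dies | gas present?)
P_danger = {True: 0.90, False: 0.001}       # P(humans in danger | gas present?)

def joint(gas, canary, danger):
    """P(gas, canary, danger) under the fork factorization."""
    pc = P_canary_dies[gas]
    pd = P_danger[gas]
    return P_gas[gas] * (pc if canary else 1 - pc) * (pd if danger else 1 - pd)

p_canary = sum(joint(g, True, d) for g in (True, False) for d in (True, False))
p_danger_and_canary = sum(joint(g, True, True) for g in (True, False))
p_danger = sum(joint(g, c, True) for g in (True, False) for c in (True, False))

print(f"P(danger)               = {p_danger:.3f}")                    # ≈ 0.091
print(f"P(danger | canary died) = {p_danger_and_canary / p_canary:.3f}")  # ≈ 0.822
```

The dead bird matters because it moves the posterior on the gas, not because anyone is obligated to mourn the bird.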
Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [actually turned out to be super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.
[TODO:
 * I had posted a linkpost to "No, it's not The Incentives—it's You", which generated a lot of discussion, and Jessica (17 June) identified Ray's comments as the last straw.

> LessWrong.com is a place where, if the value of truth conflicts with the value of protecting elites' feelings and covering their asses, the second value will win.
>
> Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a _lot_ of conflict. (Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it.)

 * posting on Less Wrong was harm-reduction; the only way to get people to stick up for truth would be to convert them to _a whole new worldview_; Jessica proposed the idea of a new discussion forum
 * Ben thought that trying to discuss with the other mods would be a good intermediate step, after we clarified to ourselves what was going on; talking to other mods might be "good practice in the same way that the Eliezer initiative was good practice"; Ben is less optimistic about harm reduction; "Drowning Children Are Rare" was barely net-upvoted, and participating was endorsing the karma and curation systems
 * David Xu's comment on "The Incentives" seems important?
 * secret posse member: Ray's attitude on "Is being good costly?"
 * Jessica: scorched-earth campaign should mostly be in meatspace social reality
 * my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)

> I'm also not sure if I'm sufficiently clued in to what Ben and Jessica are modeling as Blight, a coherent problem, as opposed to two or six individual incidents that seem really egregious in a vaguely similar way that seems like it would have been less likely in 2009??

 * _Atlas Shrugged_ Bill Brent vs. Dave Mitchum scene
 * Vassar: "Literally nothing Ben is doing is as aggressive as the basic 101 pitch for EA."
 * Ben: we should be creating clarity about "position X is not a strawman within the group", rather than trying to scapegoat individuals
 * my scuffle with Ruby on "Causal vs. Social Reality"
 * it gets worse: https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality#NbrPdyBFPi4hj5zQW
 * Ben's comment: "Wow, he's really overtly arguing that people should lie to him to protect his feelings."
 * Jessica: "tone arguments are always about privileged people protecting their feelings, and are thus in bad faith. Therefore, engaging with a tone argument as if it's in good faith is a fool's game, like playing chess with a pigeon. Either don't engage, or seek to embarrass them intentionally."
 * there's no point in being mad at MOPs
 * me (1 Jul): I'm a _little bit_ mad, because I specialize in cognitive and discourse strategies that are _extremely susceptible_ to being trolled like this
 * "collaborative truth seeking" but (as Michael pointed out) politeness looks nothing like Aumann agreement
 * 2 Jul: Jessica is surprised by how well "Self-consciousness wants to make everything about itself" worked; theory about people not wanting to be held to standards that others aren't being held to
 * Michael: Jessica's example made it clear she was on the side of social justice
 * secret posse member: level of social-justice talk makes me not want to interact with this post in any way
]
[TODO: https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/]
+[TODO: "AI Timelines Scam"
+ * I still sympathize with the "mainstream" pushback against the scam/fraud/&c. language being used to include Elephant-in-the-Brain-like distortions
+ * Ben: "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you? Investigations of financial fraud don't inquire as to the conscious motives of the perp."
+ * 11 Jul: I think the law does count _mens rea_ as a thing: we do discriminate between vehicular manslaughter and first-degree murder, because traffic accidents are less disincentivizable than offing one's enemies
+ * call with Michael about GiveWell vs. the Pope
]
[TODO: secret thread with Ruby; "uh, guys??" to Steven and Anna; people say "Yes, of course criticism and truthseeking is important; I just think that tact is important, too," only to go on and dismiss any _particular_ criticism as insufficiently tactful.]
+[TODO: "progress towards discussing the real thing"
+ * Jessica acks Ray's point of "why are you using court language if you don't intend to blame/punish"
+ * Michael 20 Jul: court language is our way of saying non-engagement isn't an option
+ * Michael: we need to get better at using SJW blamey language
+ * secret posse member: that's you-have-become-the-abyss terrifying suggestion
+ * Ben thinks SJW blame is obviously good
+]
-[TODO: "AI Timelines Scam", within-group debate on what is a "scam" or "fraud", Pope]
[TODO: epistemic defense meeting;
 * I ended up crying at one point and left the room for a while
 * Jessica's summary: "Zack was a helpful emotionally expressive and articulate victim. It seemed like there was consensus that 'yeah, it would be better if people like Zack could be warned somehow that LW isn't doing the general sanity-maximization thing anymore'."
 * Vaniver admitting LW is more of a recruiting funnel for MIRI
 * I needed to exhaust all possible avenues of appeal before it became real to me; the first morning where "rationalists ... them" felt more natural than "rationalists ... us"
]
[TODO: Michael Vassar and the theory of optimal gossip; make sure to include the part about Michael threatening to sue]