I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that _whether or not I should cut my dick off_ would _become_ a political issue. That was _new evidence_ about whether the original vision was wise! I wasn't trying to do politics with my idiosyncratic special interest; I was trying to _think seriously_ about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, but couldn't correct the mistake, then the 2019-era "rationalists" were _worse than useless_ to me personally. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I was part of a reference class that includes AI researchers).
Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology). Anna in particular was unusually good at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly including Yudkowsky), and the problem gets worse as the group effort scales. (It's easier to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on a canonical reading list, for obvious reasons.) You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
[TODO: tussle on "Yes Requires the Possibility of No"

MIRI researcher Scott Garrabrant had written a post on the theme of how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). (Information-theoretically, a signal sent with probability one transmits no information: you only learn something from observing the outcome if it could have gone the other way.) I saw an analogy to my thesis about categories: to say that _x_ belongs to category _C_ is meaningful because _C_ imposes truth conditions; just defining _x_ to be a _C_ by fiat would be uninformative.
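The information-theoretic point is a two-line calculation (a sketch of my own; nothing here is from Garrabrant's post besides the idea, and the function name is mine):

```python
import math

def self_information(p: float) -> float:
    """Bits of information conveyed by observing an outcome that had probability p."""
    return math.log2(1.0 / p)

# A "yes" that could never have been a "no" tells you nothing:
print(self_information(1.0))  # 0.0 bits
# A "yes" that had an even chance of being a "no" tells you one full bit:
print(self_information(0.5))  # 1.0 bits
```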

https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019

the intent of "MIRI Research Associate ... doesn't that terrify you" is not to demonize or scapegoat Vanessa, because I was just as bad (if not worse) in 2008, but in 2008 we had a culture that could _beat it out of me_

Was "hidden Bayesian structure of Science that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory)" part of the Sequences a lie?

In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what one's real work _is_. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than the object level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason" in a way that they don't hurt the mission of "making a lot of money writing software."

Steven's objection:
> the Earth's gravitational field directly hurts NASA's mission and doesn't hurt Paul Graham's mission, but NASA shouldn't spend any more effort on reducing the Earth's gravitational field than Paul Graham.

we're in a coal-mine, and my favorite one of our canaries just died, and I'm freaking out about this, and Anna/Scott/Eliezer/you are like, "Sorry, I know you were really attached to that canary, but it's just a bird; you'll get over it; it's not really that important to the coal-mining mission." And I'm like, "I agree that I was unreasonably emotionally attached to that particular bird, which is the direct cause of why I-in-particular am freaking out, but that's not why I expect you to care. The problem is not the dead bird; the problem is what the bird is evidence of." Ben and Michael and Jessica claim to have spotted their own dead canaries. I feel like the old-timer Rationality Elders should be able to get on the same page about the canary-count issue?

such is the way of the world; what can you do when you have to work with people?" But like ... I thought part of our founding premise was that the existing way of the world wasn't good enough to solve the really hard problem?

]

[TODO: tussle on new _Less Wrong_ FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk

A draft of a new _Less Wrong_ FAQ was to include a link to "... Not Man for the Categories".
]
[TODO: 17– Jun, "LessWrong.com is dead to me" in response to "It's Not the Incentives", comment on Ray's behavior, "If clarity seems like death to them and like life to us"; Bill Brent, "Causal vs. Social Reality", I met with Ray 29 Jun; https://www.greaterwrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself ; calling out the abstract pattern]
[TODO: Michael Vassar and the theory of optimal gossip; make sure to include the part about Michael threatening to sue]
[TODO: transition about still having trouble with memoir? double-check Git log to make sure this is right chronologically]

I found it hard to make progress on. I felt—constrained. I didn't know how to tell the story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise "the rationalists" collectively, and—more philosophy-of-language blogging!

In August's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
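The coordination-versus-fit tradeoff shows up even in a toy simulation (my own construction, not from the post; the population parameters are made up): two observers each take an independent noisy measurement of the same underlying trait and apply the same category boundary. A boundary through the crowded middle of the distribution carves the population most informatively, but the observers frequently disagree about borderline cases; an extreme "simple membership test" boundary gives up statistical fit in exchange for near-perfect coordination.

```python
import random

random.seed(0)

def labels(true_value, boundary, noise=3.0):
    """Two observers each see an independent noisy reading of the same trait
    and apply the same category boundary."""
    a = true_value + random.gauss(0, noise)
    b = true_value + random.gauss(0, noise)
    return (a >= boundary, b >= boundary)

def disagreement_rate(boundary, trials=10_000):
    """Fraction of trials on which the two observers assign different labels."""
    disagreements = 0
    for _ in range(trials):
        t = random.gauss(170, 10)  # population of trait values
        a, b = labels(t, boundary)
        disagreements += (a != b)
    return disagreements / trials

print(disagreement_rate(170))  # boundary at the median: frequent disagreement
print(disagreement_rate(195))  # extreme boundary: rare disagreement
```

The extreme boundary coordinates better precisely because almost no one is near it: the signal it sends is less informative about the trait, but much harder for noise to flip.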

[TODO— more blogging 2019

"Algorithms of Deception!" Oct 2019

"Maybe Lying Doesn't Exist" Oct 2019

I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?! But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!

"Heads I Win" Sep 2019: I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering after I

"Firming Up ..." Dec 2019: combatting Yudkowsky's not-technically-lying shenanigans
]
[TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
15 Sep Glen Weyl apology
With internet available—
_ what is the dictionary definition of _gestalt_ (maybe link it?)
_ better examples from "Yes Requires the Possibility of No"
_ my comment about having changed my mind about "A Fable of Science and Politics"
_ me remarking to "Wilhelm" that I think I met Vanessa at Solstice once, commented on Greg Egan
_ debate with Benquo and Jessicata
_ more Yudkowsky Facebook comment screenshots
_ Alicorn: about privacy, and for Melkor Glowfic reference link
_ someone from Alicorner #drama as a hostile prereader (Swimmer?)
_ maybe Kelsey (very briefly, just about her name)?
_ maybe SK (briefly about his name)? (the memoir might have the opposite problem (too long) from my hostile-shorthand Twitter snipes)
(maybe don't bother with Michael?)
things to bring up in consultation emails—
_ dropping "and Scott" in Jessica's description of attacking narcissism
Yudkowsky did [quote-Tweet Colin Wright on the univariate fallacy](https://twitter.com/ESYudkowsky/status/1124757043997372416)
(which I got to [cite in a _Less Wrong_ post](https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy))
(Subject: "Michael Vassar and the theory of optimal gossip")
Scott said he liked "monastic rationalism _vs_. lay rationalism" as a frame for the schism Ben was proposing.
(I wish I could use this line)