> It was not the sight of Mitchum that made him sit still in horror. It was the realization that there was no one he could call to expose this thing and stop it—no superior anywhere on the line, from Colorado to Omaha to New York. They were in on it, all of them, they were doing the same, they had given Mitchum the lead and the method. It was Dave Mitchum who now belonged on this railroad and he, Bill Brent, who did not.
>
> —_Atlas Shrugged_ by Ayn Rand

[TODO: recap of previous two posts]
[TODO: psychiatric disaster, breakup with Vassar group, this was really bad for me]
On 13 February 2021, ["Silicon Valley's Safe Space"](https://archive.ph/zW6oX), the _New York Times_ piece on _Slate Star Codex_ came out. It was ... pretty lame? (_Just_ lame, not a masterfully vicious hit piece.) Cade Metz did a mediocre job of explaining what our robot cult is about, while [pushing hard on the subtext](https://scottaaronson.blog/?p=5310) to make us look racist and sexist, occasionally resorting to odd constructions that were surprising to read from someone who had been a professional writer for decades. ("It was nominally a blog", Metz wrote of _Slate Star Codex_. ["Nominally"](https://en.wiktionary.org/wiki/nominally)?) The article's claim that Alexander "wrote in a wordy, often roundabout way that left many wondering what he really believed" seemed to me more like a critique of the "many"'s reading comprehension than of Alexander's writing.
Although the many's poor reading comprehension may have served a protective function for Scott. A mob that attacks you over things that look bad when quoted out of context, can't attack you over the meaning of "wordy, often roundabout" text that the mob can't read. The _Times_ article included this sleazy guilt-by-association attempt:
> In one post, [Alexander] [aligned himself with Charles Murray](https://slatestarcodex.com/2016/05/23/three-great-articles-on-poverty-and-why-i-disagree-with-all-of-them/), who proposed a link between race and I.Q. in "The Bell Curve." In another, he pointed out that Mr. Murray believes Black people "are genetically less intelligent than white people."[^sloppy]
[^sloppy]: It was unevenly sloppy of the _Times_ to link the first post, ["Three Great Articles On Poverty, And Why I Disagree With All Of Them"](https://slatestarcodex.com/2016/05/23/three-great-articles-on-poverty-and-why-i-disagree-with-all-of-them/), but not the second, ["Against Murderism"](https://slatestarcodex.com/2017/06/21/against-murderism/)—especially since "Against Murderism" is specifically about Alexander's reasons for being skeptical of "racism" as an explanatory concept, and therefore contains "objectively" more damning sentences to quote out of context than a passing reference to Charles Murray. Apparently, the _Times_ couldn't even be bothered to smear Scott with misconstruals of his actual ideas, if guilt-by-association did the trick with less effort on behalf of both journalist and reader.
But the sense in which Alexander "aligned himself with Murray" in ["Three Great Articles On Poverty, And Why I Disagree With All Of Them"](https://slatestarcodex.com/2016/05/23/three-great-articles-on-poverty-and-why-i-disagree-with-all-of-them/) in the context of a simplified taxonomy of views on alleviating poverty, doesn't automatically imply agreement with Murray's views on heredity. (A couple of years earlier, Alexander had written that ["Society Is Fixed, Biology Is Mutable"](https://slatestarcodex.com/2014/09/10/society-is-fixed-biology-is-mutable/): pessimism about our Society's ability to intervene to alleviate poverty, does not amount to the claim that poverty is "genetic.")
[Alexander's reply statement](https://astralcodexten.substack.com/p/statement-on-new-york-times-article) pointed out the _Times_'s obvious chicanery, but (I claim) introduced a distortion of its own—
It _is_ a weirdly brazen invalid _inference_. But by calling it a "falsehood", Alexander heavily implies this means he disagrees with Murray's offensive views on race: in invalidating the _Times_'s charge of guilt-by-association with Murray, Alexander validates Murray's guilt.
But ... anyone who's actually read _and understood_ Alexander's work should be able to infer that Scott probably finds it plausible that there exist genetically-mediated ancestry-group differences in socially-relevant traits (as a value-free matter of empirical Science with no particular normative implications): his [review of Judith Rich Harris](https://archive.ph/Zy3EL) indicates that he accepts the evidence from twin studies for individual behavioral differences having a large genetic component, and section III. of his ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) indicates that he accepts genetics as an explanation for group differences in the particular case of cognitive ability in Ashkenazi Jews.
There are a lot of standard caveats that go here that Scott would no doubt scrupulously address if he ever chose to tackle the subject of genetically-mediated group differences in general: [the mere existence of a group difference in a "heritable" trait doesn't itself imply a genetic cause of the group difference (because the groups' environments could also be different)](/2020/Apr/book-review-human-diversity/#heritability-caveats). It is without a doubt _entirely conceivable_ that the Ashkenazi IQ advantage is real and genetic, but the black–white IQ gap is fake and environmental.[^bet] Moreover, group averages are just that—averages. They don't imply anything about individuals and don't justify discrimination against individuals.
[^bet]: It's just—how much do you want to bet on that? How much do you think _Scott_ wants to bet?
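(If you want the caveat as code rather than words: here's a toy simulation, with all parameters invented for illustration, in which the trait is about 90% heritable within each group and yet the entire between-group gap is environmental by construction.)

```python
# Toy model, not real data: both groups draw from the same genetic
# distribution; only a shared environmental offset differs between them.
import random

random.seed(0)

def make_group(n, env_offset):
    people = []
    for _ in range(n):
        genes = random.gauss(0, 1)    # genetic potential
        noise = random.gauss(0, 0.3)  # unshared environment
        people.append((genes, genes + env_offset + noise))
    return people

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

group_a = make_group(10_000, env_offset=0.0)
group_b = make_group(10_000, env_offset=-1.0)  # a deprived environment

for name, group in [("A", group_a), ("B", group_b)]:
    genes, traits = zip(*group)
    # within-group heritability: share of trait variance explained by genes
    h_sq = variance(genes) / variance(traits)
    mean = sum(traits) / len(traits)
    print(f"group {name}: mean trait {mean:+.2f}, within-group h² ≈ {h_sq:.2f}")
```

Both groups print a within-group h² of about 0.9, while the one-standard-deviation gap between their means comes entirely from `env_offset`. Heritability within groups just doesn't constrain the cause of differences between them.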
But ... anyone who's actually read _and understood_ Charles Murray's work, knows that [Murray _also_ includes the standard caveats](/2020/Apr/book-review-human-diversity/#individuals-should-not-be-judged-by-the-average)! (Even though the one about group differences not implying anything about individuals is [actually wrong](/2022/Jun/comment-on-a-scene-from-planecrash-crisis-of-faith/).) The _Times_'s insinuation that Scott Alexander is a racist _like Charles Murray_ seems like a "[Gettier](https://en.wikipedia.org/wiki/Gettier_problem) attack": the charge is essentially correct, even though the evidence used to prosecute the charge before a jury of distracted _New York Times_ readers is completely bogus.
Why do I keep bringing this up, that "rationalist" leaders almost certainly believe in cognitive race differences (even if it's hard to get them to publicly admit it in a form that's easy to selectively quote in front of _New York Times_ readers)?
Because one of the things I noticed while trying to make sense of why my entire social circle suddenly decided in 2016 that guys like me could become women by means of saying so, is that in the conflict between the "rationalist" Caliphate and mainstream progressives, the "rationalists"' defensive strategy is one of deception.
I view this conflict as entirely incidental, something that [would happen in some form in any place and time](https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone), rather than having to do with American politics or "the left" in particular. In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for [thinking about market economics](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) (as a [positive technical discipline](https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics#Proof_of_the_first_fundamental_theorem) adjacent to game theory, not yoked to a particular normative agenda).[^logical-induction]
[^logical-induction]: I sometimes wonder how hard it would have been to come up with MIRI's logical induction result (which describes an asymptotic algorithm for estimating the probabilities of mathematical truths in terms of a betting market composed of increasingly complex traders) in the Soviet Union.
Incidental or not, the conflict is real, and everyone smart knows it—even if it's not easy to _prove_ that everyone smart knows it, because everyone smart is very careful about what they say in public. (I am not smart.)
So the _New York Times_ implicitly accuses us of being racists, like Charles Murray, and instead of pointing out that being a racist _like Charles Murray_ is the obviously correct position that sensible people will tend to reach in the course of being sensible, we disingenuously deny everything.[^deny-everything]
[^deny-everything]: Or rather, people are distributed on a spectrum between disingenuously denying everything and sincerely accepting that Charles Murray is Actually Bad, with the older and more skilled among us skewed somewhat more towards disingenuous denial.
It works surprisingly well. I fear my love of Truth is not so great: if I didn't have Something to Protect, I would have happily participated in the cover-up.
As it happens, in our world, the defensive cover-up consists of _throwing me under the bus_. Facing censure from the egregore for being insufficiently progressive, we can't defend ourselves ideologically. (_We_ think we're egalitarians, but progressives won't buy that because we like markets too much.) We can't point to our racial diversity. (Mostly white if not Jewish, with a generous handful of Asians, exactly as you'd expect from chapters 13 and 14 of _The Bell Curve_.) The sex balance is doing a little better after we hybridized with Tumblr and Effective Altruism (as [contrasted with the _Overcoming Bias_ days](/2017/Dec/a-common-misunderstanding-or-the-spirit-of-the-staircase-24-january-2009/)), but still isn't great.
But _trans!_ We do have plenty of trans people to trot out as a shield to definitively prove that we're not counter-revolutionary right-wing Bad Guys. Thus, [Jacob Falkovich noted](https://twitter.com/yashkaf/status/1275524303430262790) (on 23 June 2020, just after _Slate Star Codex_ went down), "The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders", and Scott Aaronson [cited (in commentary on the _Times_ article)](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" as something that would have "complicated the picture" of our portrayal as anti-feminist.
Even the _haters_ grudgingly give Alexander credit for "... Not Man for the Categories": ["I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least awknowledging [_sic_] you were wrong"](https://archive.is/SlJo1), wrote one.
Under these circumstances, dethroning the supremacy of gender identity ideology is politically impossible. All our Overton margin is already being spent somewhere else; sanity on this topic is our [dump stat](https://tvtropes.org/pmwiki/pmwiki.php/Main/DumpStat). But this being the case, _I have no reason to participate in the cover-up_. What's in it for me?
On 17 February 2021, Topher Brennan [claimed that](https://web.archive.org/web/20210217195335/https://twitter.com/tophertbrennan/status/1362108632070905857) Scott Alexander "isn't being honest about his history with the far-right", and published [an email he had received from Scott in February 2014](https://emilkirkegaard.dk/en/2021/02/backstabber-brennan-knifes-scott-alexander-with-2014-email/), on what Scott thought some neoreactionaries were getting importantly right.
I think that to people who have actually read _and understood_ Scott's work, there is nothing surprising or scandalous about the contents of the email. Scott said that biologically-mediated group differences are probably real, and that neoreactionaries were the only people discussing the object-level hypotheses or the meta-level question of why our Society's collective epistemology is obfuscating this. He said that reactionaries as a whole generate a lot of garbage, but that he trusted himself to sift through the noise and extract the novel insights. (In contrast, RationalWiki didn't generate garbage, but by hewing so closely to the mainstream, it also didn't say much that Scott didn't already know.) The email contains some details that Scott hadn't already blogged about—most notably the section headed "My behavior is the most appropriate response to these facts", explaining his social strategizing _vis-à-vis_ the neoreactionaries and his own popularity—but again, none of it is really _surprising_ if you know Scott from his writing.
I think the main reason someone _would_ consider the email a scandalous revelation is if they hadn't read _Slate Star Codex_ that deeply—if their picture of Scott Alexander as a political writer was, "that guy who's _so_ committed to charitable discourse that he [wrote up an explanation of what _reactionaries_ (of all people) believe](https://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/)—and then, of course, turned around and wrote up the definitive explanation of why they're wrong and you shouldn't pay them any attention." As a first approximation, it's not a bad picture. But what it misses—what _Scott_ knows—is that charity isn't about putting on a show of superficially respecting your ideological opponent, before concluding (of course) that they were wrong and you were right all along in every detail. Charity is about seeing what the other guy is getting _right_.
The same day, Yudkowsky published a Facebook post which said:
> I feel like it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics and will hurt you if you trust them and will break rules to do so; but in case it wasn't obvious, consider the point made explicitly. (Subtext: Topher Brennan. Do not provide any link in comments to Topher's publication of private emails, explicitly marked as private, from Scott Alexander.)
I was annoyed at how the discussion seemed to be ignoring the obvious political angle, and the next day, I wrote [a comment](https://www.facebook.com/yudkowsky/posts/pfbid0WJ2h9CRnqzrenpccajdU6SYJkT4967KCstW5dqESt4ArJLjjGHY7yZMk6mjar15Sl?comment_id=10159410429909228) (which ended up yielding 49 Like and Heart reactions): I agreed that there was a grain of truth to the claim that our detractors hate us because they're evil bullies, but stopping the analysis there seemed _incredibly shallow and transparently self-serving_.

If you listened to why _they_ said they hate us, it was because we were racist, sexist, transphobic fascists. The party-line response seemed to be trending towards, "That's obviously false (Scott voted for Warren, look at all the social democrats on the _Less Wrong_/_Slate Star Codex_ surveys, _&c._); they're just using that as a convenient smear because they like bullying nerds."

If "sexism" included "it's an empirical question whether innate statistical psychological sex differences of some magnitude exist, it empirically looks like they do, and this has implications about our social world" (as articulated in, for example, Alexander's ["Contra Grant on Exaggerated Differences"](https://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/)), then the "_Slate Star Codex_ _et al._ are crypto-sexists" charge was _absolutely correct_.

You could plead, "That's a bad definition of sexism", but that's only convincing if you've _already_ been trained in the "use empiricism and open discussion to discover policies with utilitarian-desirable outcomes" tradition; the people with a California-public-school-social-studies-plus-Tumblr education didn't already _know_ that. ([_I_ didn't know this](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism) at age 18 back in 'aught-six, and we didn't even have Tumblr then.)

In that light, you could see why someone might find "blow the whistle on people who are claiming to be innocent but are actually guilty (of thinking bad thoughts)" to be a more compelling ethical consideration than "respect confidentiality requests".

Indeed, it seems important to notice (though I didn't at the time of my comment) that _Brennan didn't break any promises_. In [Brennan's account](https://web.archive.org/web/20210217195335/https://twitter.com/tophertbrennan/status/1362108632070905857), Alexander "did not first say 'can I tell you something in confidence?' or anything like that." Scott _unilaterally_ said in the email, "I will appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by 'appreciate', I mean that if you ever do, I'll probably either leave the Internet forever or seek some sort of horrible revenge", but we have no evidence that Topher agreed.

To see why the lack of a promise is significant, imagine if someone were guilty of a serious crime (like murder or stealing their customers' money), unilaterally confessed to an acquaintance, but added, "never tell anyone I said this, or I'll seek some sort of horrible revenge". In that case, I think most people's moral intuitions would side with the whistleblower and against "privacy."

In the Brennan–Alexander case, I don't think Scott has anything to be ashamed of—but that's _because_ I don't think learning from right-wingers is a crime. If our _actual_ problem was "Genuinely consistent rationalism is realistically always going to be an enemy of the state, because [the map that fully reflects the territory is going to include facts that powerful coalitions would prefer to censor, no matter what specific ideology happens to be on top in a particular place and time](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting)", but we _thought_ our problem was "We need to figure out how to exclude evil bullies", then we were in trouble!

[TODO—
 * Yudkowsky commented that everyone has an evil bullies problem; we also had a Kolmogorov problem, but that's a separate thing even if bullies use Kolmogorov as an attack vector
 * reality is complicated: I put some weight on the evil bullies model, but I think it's important to notice that we're participating in a political cover-up
 * really, Topher and I are trying to do the same thing (reveal that rationalist leaders are thoughtcriminals), for different reasons (Topher thinks thoughtcrime is bad, and I think it's fraud to claim the banner of "rationality" while hiding your thoughtcrimes); I'm being more scrupulous about accomplishing my objective while respecting other people's privacy hang-ups (and I think I have more latitude to do so _because_ I'm pro-thoughtcrime; people can tell that I'm saying this selfishly rather than spitefully), but don't think I don't sympathize with Topher; there are non-evil-bully reasons to want to _reveal information_ rather than participate in a conspiracy to protect the "rationalists" as non-threatening to the egregore
 * It's one thing to believe in keeping promises that someone explicitly made, but instructing commenters not to link to the email implies not just that Topher should keep his promises, but that _everyone else_ is bound to participate in a conspiracy to respect Scott's privacy
]

> Oh, maybe it's relevant to note that those posts were specifically part of my 21-month rage–grief campaign of being furious at Eliezer all day every day for lying-by-implicature about the philosophy of language? But, I don't want to seem petty by pointing that out! I'm over it!

And I think I _would_ have been over it, except—

... except that Yudkowsky _reopened the conversation_ four days later on 22 February 2021, with [a new Facebook post](https://www.facebook.com/yudkowsky/posts/10159421750419228) explaining the origins of his intuitions about pronoun conventions and concluding that, "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just _is_ the normative definition. Because it is _logically rude_, not just socially rude, to try to bake any other more complicated and controversial definition _into the very language protocol we are using to communicate_."

(_Why!?_ Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging?)
I explained what's wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), but I find myself still having more left to analyze. The February 2021 post on pronouns is a _fascinating_ document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
But ... the _reason_ he got a bit ("a bit") of private pushback was _because_ the original "hill of meaning" thread was so blatantly optimized to intimidate and delegitimize people who want to use language to reason about biological sex. The pushback wasn't about using trans people's preferred pronouns (I do that, too), or about not wanting pronouns to imply sex (which would sound fine, if we were in the position of defining a conlang from scratch); the _problem_ was using an argument that's ostensibly about pronouns to sneak in an implicature ("Who competes in sports segregated around an Aristotelian binary is a policy question [ ] that I personally find very humorous") that it's dumb and wrong to want to talk about the sense in which trans women are male and trans men are female, as a _fact about reality_ that continues to be true even if it hurts someone's feelings, and even if policy decisions made on the basis of that fact are not themselves a fact (as if anyone had doubted this).
In that context, it's revealing that in this February 2021 post attempting to explain why the November 2018 thread seemed like a reasonable thing to say, Yudkowsky ... doubles down on going out of his way to avoid acknowledging the reality of biological sex. He learned nothing! We're told that the default pronoun for those who haven't asked goes by "gamete size."
But ... I've never _measured_ how big someone's gametes are, have you? We can only _infer_ whether strangers' bodies are configured to produce small or large gametes by observing [a variety of correlated characteristics](https://en.wikipedia.org/wiki/Secondary_sex_characteristic). Furthermore, for trans people who don't pass but are visibly trying to, one presumes that we're supposed to use the pronouns corresponding to their gender presentation, not their natal sex.
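(To make the "infer" concrete: a minimal naive-Bayes sketch, with invented likelihood numbers, of the kind of computation that's happening when you classify a stranger at a glance. The conditional-independence assumption is false, since the features are correlated with each other as well as with sex, but it keeps the toy simple.)

```python
# Minimal naive-Bayes sketch with invented numbers: estimating a latent
# "gamete size" variable from observable correlates, since nobody ever
# measures the gametes themselves.
LIKELIHOODS = {
    # feature: (P(feature | small gametes), P(feature | large gametes))
    "beard_shadow":    (0.80, 0.02),
    "broad_shoulders": (0.70, 0.15),
    "adams_apple":     (0.90, 0.10),
}

def p_small_gametes(observations, prior=0.5):
    """Posterior probability of small gametes given observed features.

    `observations` maps feature names to booleans; features are treated
    as conditionally independent given the class (false, but illustrative).
    """
    odds = prior / (1 - prior)
    for feature, present in observations.items():
        p_s, p_l = LIKELIHOODS[feature]
        if not present:
            p_s, p_l = 1 - p_s, 1 - p_l
        odds *= p_s / p_l  # multiply in the likelihood ratio
    return odds / (1 + odds)

print(p_small_gametes({"beard_shadow": True, "adams_apple": False}))  # ≈ 0.82
```

The classifier never touches a gamete; all of its confidence comes from the correlated characteristics.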
[^number-of-things]: Note the striking contrast between ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument), in which the Yudkowsky of 2007 wrote that a campaign manager "crossed the line [between rationality and rationalization] at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it"; and these 2021 Tweets, in which Yudkowsky seems completely nonchalant about "not have been as willing to tweet a truth helping" one side of a cultural dispute, because "this battle just isn't that close to the top of [his] priority list". Well, sure! Any hired campaign manager could say the same: helping the electorate make an optimally informed decision just isn't that close to the top of their priority list, compared to getting paid.
    Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems incredibly dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors on both sides. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to contain proportionately _more_ errors on the "pro-Stalin" side.) Judging by local demographics, the rationale that "those people might matter to AGI someday" seems much _more_ likely to apply to trans women themselves, than their critics!
The battle that matters—and I've been _very_ explicit about this, for years—is over this proposition eloquently stated by Scott Alexander (redacting the irrelevant object-level example):
Soares's points seemed cribbed from part I of Scott Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), which post I had just dedicated _more than three years of my life_ to rebutting in [increasing](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) [technical](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [detail](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), _specifically using dolphins as my central example_—which Soares didn't necessarily have any reason to have known about, but Yudkowsky (who retweeted Soares) definitely did. (Soares's [specific reference to the Book of Jonah](https://twitter.com/So8res/status/1401670796997660675) made it seem particularly unlikely that he had invented the argument independently from Alexander.) [One of the replies (which Soares Liked) pointed out the similar _Slate Star Codex_ article](https://twitter.com/max_sixty/status/1401688892940509185), [as did](https://twitter.com/NisanVile/status/1401684128450367489) [a couple of](https://twitter.com/roblogic_/status/1401699930293432321) quote-Tweet discussions.
The elephant in my brain took this as another occasion to _flip out_. I didn't _immediately_ see anything for me to overtly object to in the thread itself—[I readily conceded that](https://twitter.com/zackmdavis/status/1402073131276066821) there was nothing necessarily wrong with wanting to use the symbol "fish" to refer to the cluster of similarities induced by convergent evolution to the aquatic habitat rather than the cluster of similarities induced by phylogenetic relatedness—but in the context of our subculture's history, I read this as Soares and Yudkowsky implicitly lending more legitimacy to "... Not Man for the Categories", which was _hostile to my interests_. Was I paranoid to read this as a potential [dogwhistle](https://en.wikipedia.org/wiki/Dog_whistle_(politics))? It just seemed _implausible_ that Soares would be Tweeting that dolphins are fish in the counterfactual in which "... Not Man for the Categories" had never been published.
After a little more thought, I decided the thread _was_ overtly objectionable, and [quickly wrote up a reply on _Less Wrong_](https://www.lesswrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins): Soares wasn't merely advocating for a "swimmy animals" sense of the word _fish_ to become more accepted usage, but specifically deriding phylogenetic definitions as unmotivated for everyday use ("definitional gynmastics [_sic_]"!), and _that_ was wrong. It's true that most language users don't directly care about evolutionary relatedness, but [words aren't identical with their definitions](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels). Genetics is at the root of the causal graph underlying all other features of an organism; creatures that are more closely evolutionarily related are more similar _in general_. Classifying things by evolutionary lineage isn't an arbitrary æsthetic whim by people who care about genealogy for no reason. We need the natural category of "mammals (including marine mammals)" to make sense of how dolphins are warm-blooded, breathe air, and nurse their live-born young, and the natural category of "finned cold-blooded vertebrate gill-breathing swimmy animals (which excludes marine mammals)" is also something that it's reasonable to have a word for.
(Somehow, it felt appropriate to use a quote from Arthur Jensen's ["How Much Can We Boost IQ and Scholastic Achievement?"](https://en.wikipedia.org/wiki/How_Much_Can_We_Boost_IQ_and_Scholastic_Achievement%3F) as an epigraph.)
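(A toy illustration of the "similar in general" point, with a deliberately tiny invented feature table:)

```python
# Invented miniature feature table: does the dolphin pattern with the
# mammals or with the swimmy animals, if you tally up traits?
# Columns: warm-blooded, breathes air, live birth, nurses young, fins, aquatic
CREATURES = {
    "dolphin": (1, 1, 1, 1, 1, 1),
    "cow":     (1, 1, 1, 1, 0, 0),
    "bat":     (1, 1, 1, 1, 0, 0),
    "salmon":  (0, 0, 0, 0, 1, 1),
    "shark":   (0, 0, 0, 0, 1, 1),
}

def similarity(a, b):
    # fraction of traits on which two creatures agree
    return sum(x == y for x, y in zip(a, b)) / len(a)

for name, traits in CREATURES.items():
    if name != "dolphin":
        print(f"dolphin vs. {name}: {similarity(CREATURES['dolphin'], traits):.2f}")
# dolphin vs. cow:    0.67
# dolphin vs. bat:    0.67
# dolphin vs. salmon: 0.33
# dolphin vs. shark:  0.33
```

Of course, in a table this small, whoever picks the columns picks the winner; that's why the causal-graph argument matters: shared lineage predicts the traits you _didn't_ think to list.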
[TODO: dolphin war cont'd

 * Nate conceded all of my points (https://twitter.com/So8res/status/1402888263593959433), said the thread was in jest ("shitposting"), and said he was open to arguments that he was making a mistake (https://twitter.com/So8res/status/1402889976438611968)
]
[TODO:
And used in making so many beautiful things ...! The [microchips](https://en.wikipedia.org/wiki/Integrated_circuit) on which our electronic Society is built, obviously, but [silicone](https://en.wikipedia.org/wiki/Silicone) polymers also have a wide range of applications in industrial and consumer products.
Today, I'd like to review some ... _consumer_ products.
Except—this isn't actually a chemistry fanblog. If you'd rather avoid being spiritually contaminated by discussion of certain _consumer products_—specifically, pornography-equivalents for men who love women and want to become what they love—you might want to stop reading and close the tab now.
Come back next month! We're not always like this; this is usually a blog about the _science and philosophy_ of autogynephilia, not the _practice_ of it. It's just—I need to host this page of product reviews somewhere, and because science and philosophy unfortunately require some amount of empirical data, this is the place.

<p class="flower-break">⁕ ⁕ ⁕</p>

... if you're still here (why?!), the reviews follow.
### [Gold Seal NAKED Silicone Bodysuit](https://thebreastformstore.com/gold-seal-naked-silicone-bodysuit/)
<a href="/images/bodysuit_zipper_error.jpg"><img src="/images/bodysuit_zipper_error.jpg" width="240" style="float: left; margin: 0.4pc;"></a>
One of the disappointing things about breastforms is that they're [noticeably—not actually part of your body.](/2017/Sep/hormones-day-156-developments-doubts-and-pulling-the-plug-or-putting-the-cis-in-decision/#first-breastforms) There is an _edge_ between the form and your actual chest.
In contrast, this bodysuit featuring fake breasts _and_ a fake vulva, all in one piece, seemed like an appealing thing to try out. Compared to breastforms, the bodysuit promised to offer both a "bottom" experience, and, not a more _seamless_ transformation on "top", but rather, to put the seams in a potentially less conspicuous location (at the neck/arms/legs, rather than on the chest where they interfere with the illusion of actually having breasts).
It only comes in one size, but according to the sizing chart, I ought to fit given my 37½″ underbust measurement.
... I did not fit. It took a huge struggle just to get the suit on at all, and I did substantial damage to it in the process, ripping a huge tear in the back (from the center to the left hip), _and_ derailing the zipper, _and_, somehow, detaching the sides of the zipper from the suit (!?).
In retrospect, I should have taken care to heed the direction to apply talcum powder to my skin and the inside of the suit, before struggling so much, and on that count, I'll easily accept the rip in the back as "my fault", but the way the zipper derailed and detached so easily seems like more of an indicator of a low-quality product?
After the struggle, the cleavage was visually nice (no seam when looking down at one's chest!), but the breasts hung significantly too low—
I would describe the overall effect as "cartoony."
* hard to fit member in urination option, I swear it leaked urine out the side?—not sure how the hydraulics worked there
+"The most jiggle and bounce imaginable"
+
* also: can't masturbate with the bottom on!
The product page had already warned that all sales of this item were final (would _you_ want to buy one of these used?).
* I like the idea, but overall, do not recommend
**Cost:** $600
**Rating:** ★
### [Crea FX Taylor Silicone Mask](https://www.creafx.com/en/special-make-up-effects/taylor-silicone-mask/)
<div style="float: right;">
-<a href="/images/mask_plain.jpg"><img src="/images/mask_plain.jpg" width="240" style="margin: 0.4pc;"></a>
-<a href="/images/mask_with_wig_and_rose-colored_glasses.jpg"><img src="/images/mask_with_wig_and_rose-colored_glasses.jpg" width="140" style="margin: 0.4pc;"></a>
+<a href="/images/mask_plain.jpg"><img src="/images/mask_plain.jpg" width="200" style="margin: 0.1pc;"></a>
+<a href="/images/mask_with_wig_and_rose-colored_glasses.jpg"><img src="/images/mask_with_wig_and_rose-colored_glasses.jpg" width="165" style="margin: 0.1pc;"></a>
</div>
It really looks like a woman's face!
* I didn't masturbate from Sat. until Wed. in anticipation of trying on the mask in a hotel room; I was delighted that I came a little bit (entirely flaccid) while shitting—that's a sign that I had succeeded in building up, and yet somehow I didn't feel very horny or get as much pleasure when the time came? Maybe I'm getting old?
-"The most jiggle and bounce imaginable"
\ No newline at end of file