From: M. Taylor Saotome-Westlake
Date: Sun, 16 Oct 2022 01:25:50 +0000 (-0700)
Subject: Satur-day memoir redemption session
X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=cbece8280010a1283a1bc8c61e6ef2a94b2dd650;p=Ultimately_Untrue_Thought.git

Satur-day memoir redemption session
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 7207e51..658fcd9 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -1,7 +1,7 @@
Title: A Hill of Validity in Defense of Meaning
Date: 2023-01-01 11:00
Category: commentary
-Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy
+Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors
Status: draft

> If you are silent about your pain, they'll kill you and say you enjoyed it.

@@ -154,12 +154,16 @@ If you were actually interested in having a real discussion (instead of a fake d

Satire is a very weak form of argument: the one who wishes to doubt will always be able to find some aspect in which the obviously-absurd satirical situation differs from the real-world situation being satirized, and claim that that difference destroys the relevance of the joke. But on the off-chance that it might help _illustrate_ my objection, imagine you lived in a so-called "rationalist" subculture where conversations like this happened—

+

⁕ ⁕ ⁕

+

Bob: Look at this adorable cat picture!

Alice: Um, that looks like a dog to me, actually.

Bob: You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then.

+

⁕ ⁕ ⁕

+ If you were Alice, and a _solid supermajority_ of your incredibly smart, incredibly philosophically sophisticated friend group _including Eliezer Yudkowsky_ (!!!) seemed to behave like Bob (and reaped microhedonic social rewards for it in the form of, _e.g._, hundreds of Twitter likes), that would be a _pretty worrying_ sign about your friends' ability to accomplish intellectually hard things (_e.g._, AI alignment), right? Even if there isn't any pressing practical need to discriminate between dogs and cats, the _problem_ is that Bob is [_selectively_](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) using his sophisticated philosophy-of-language knowledge to try to _undermine Alice's ability to use language to make sense of the world_, even though Bob _obviously knows goddamned well what Alice was trying to say_; it's _incredibly_ obfuscatory in a way that people—the _same_ people—would not tolerate in almost _any_ other context.

Imagine an Islamic theocracy in which one Meghan Murphee had recently gotten kicked off the dominant microblogging platform for speaking disrespectfully about the prophet Muhammad. Suppose that [Yudkowsky's analogue in that world](/2020/Aug/yarvin-on-less-wrong/) then posted that Murphee's supporters were ontologically confused to object on free inquiry grounds: [saying "peace be upon him" after the name of the prophet Muhammad](https://en.wikipedia.org/wiki/Islamic_honorifics#Applied_to_Muhammad_and_his_family) is a _speech act_, not a statement of fact. In banning Murphee for repeatedly speaking about the prophet Muhammad (peace be upon him) as if he were just some guy, the platform was merely ["enforcing a courtesy standard"](https://twitter.com/ESYudkowsky/status/1067302082481274880); Murphee wasn't being forced to _lie_.

@@ -224,7 +228,7 @@ The _issue_ is that category boundaries are not arbitrary (if you care about int

It's true that [the reason _I_ was continuing to freak out about this](/2019/Jul/the-source-of-our-power/) to the extent of sending him this obnoxious email telling him what to write (seriously, who does that?!) had to do with transgender stuff, but wasn't the reason _Scott_ should care.

-The other year, Alexander had written a post, ["Kolmogorov Complicity and the Parable of Lightning"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/), explaining the consequences of political censorship by means of an allegory about a Society with the dogma that thunder occurs before lightning. (The title was a [pun](https://en.wikipedia.org/wiki/Kolmogorov_complexity) referencing Scott Aaronson's post advocating ["The Kolmogorov Option"](https://www.scottaaronson.com/blog/?p=3376), serving the cause of Truth by cultivating a bubble that focuses on specific truths that won't get you in trouble with the local political authorities. This after the Soviet mathematician Andrey Kolmogorov, who _knew better than to pick fights he couldn't win_.) Alexander had explained that the problem with Kolmogorov Option strategies isn't so much the sacred dogma itself (it's not often that you need to _directly_ make use of the fact that lightning comes first), but that [the need to _defend_ the sacred dogma](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) [_destroys everyone's ability to think_](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology). 
+The other year, Alexander had written a post, ["Kolmogorov Complicity and the Parable of Lightning"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/), explaining the consequences of political censorship by means of an allegory about a Society with the dogma that thunder occurs before lightning. (The title was a [pun](https://en.wikipedia.org/wiki/Kolmogorov_complexity) referencing computer scientist Scott Aaronson's post advocating ["The Kolmogorov Option"](https://www.scottaaronson.com/blog/?p=3376), serving the cause of Truth by cultivating a bubble that focuses on specific truths that won't get you in trouble with the local political authorities. This after the Soviet mathematician Andrey Kolmogorov, who _knew better than to pick fights he couldn't win_.) Alexander had explained that the problem with Kolmogorov Option strategies isn't so much the sacred dogma itself (it's not often that you need to _directly_ make use of the fact that lightning comes first), but that [the need to _defend_ the sacred dogma](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) [_destroys everyone's ability to think_](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology).

It was the same thing here. It wasn't that I had any direct practical need to misgender anyone in particular. It still wasn't okay that trying to talk about the reality of biological sex to so-called "rationalists" got you an endless deluge of—polite! charitable! non-ostracism-threatening!—_bullshit nitpicking_. (What about [complete androgen insensitivity syndrome](https://en.wikipedia.org/wiki/Complete_androgen_insensitivity_syndrome)? Why doesn't this ludicrous misinterpretation of what you said [imply that lesbians aren't women](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/)? _&c. ad infinitum_.) With enough time, I thought the nitpicks could and should be satisfactorily answered. (Any ones that couldn't would presumably be fatal criticisms rather than bullshit nitpicks.) But while I was in the process of continuing to write all that up, I hoped Alexander could see why I felt somewhat gaslighted.

@@ -561,33 +565,37 @@ _Should_ I have known that it wouldn't work? _Didn't_ I "already know", at some

I guess in retrospect, the outcome does seem kind of "obvious"—that it should have been possible to predict in advance, and to make the corresponding update without so much fuss and wasting so many people's time.

-But ... it's only "obvious" if you _take as a given_ that Yudkowsky is playing a savvy Kolmogorov complicity strategy like any other public intellectual in the current year. Maybe this seems banal if you haven't spent your entire adult life in his robot cult? Coming from _anyone else in the world_, I wouldn't have had a problem with the "hill of validity in defense of meaning" thread—I have considered it a solidly above-average philosophy performance, before [setting the bozo bit](https://en.wikipedia.org/wiki/Bozo_bit#Dismissing_a_person_as_not_worth_listening_to) on the author and getting on with my day. But since I _did_ spend my entire adult life in Yudkowsky's robot cult, trusting him the way a Catholic trusts the Pope, I _had_ to assume that it was an "honest mistake" in his rationality lessons, and that honest mistakes could be honestly corrected if someone put in the effort to explain the problem. 
The idea that Eliezer Yudkowsky was going to behave just as badly as any other public intellectual in the current year, was not really in my hypothesis space. It took some _very large_ likelihood ratios to beat it into my head the thing that was obviously happenening, was actually happening.
+But ... it's only "obvious" if you _take as a given_ that Yudkowsky is playing a savvy Kolmogorov complicity strategy like any other public intellectual in the current year.[^any-other-public-intellectual]
+
+[^any-other-public-intellectual]: And really, that's the _charitable_ interpretation. The extent to which I still have trouble entertaining the idea that Yudkowsky _actually_ drank the gender ideology Kool-Aid, rather than merely having pretended to, is a testament to the thoroughness of my indoctrination.
+
+Maybe this seems banal if you haven't spent your entire adult life in his robot cult? Coming from _anyone else in the world_, I wouldn't have had a problem with the "hill of validity in defense of meaning" thread—I would have respected it as a solidly above-average philosophy performance, before [setting the bozo bit](https://en.wikipedia.org/wiki/Bozo_bit#Dismissing_a_person_as_not_worth_listening_to) on the author and getting on with my day. But since I _did_ spend my entire adult life in Yudkowsky's robot cult, trusting him the way a Catholic trusts the Pope, I _had_ to assume that it was an "honest mistake" in his rationality lessons, and that honest mistakes could be honestly corrected if someone put in the effort to explain the problem. The idea that Eliezer Yudkowsky was going to behave just as badly as any other public intellectual in the current year, was not really in my hypothesis space. It took some _very large_ likelihood ratios to beat it into my head that the thing that was obviously happening was actually happening.

Ben shared the account of our posse's email campaign with someone, who commented that I had "sacrificed all hope of success in favor of maintaining his own sanity by CC'ing you guys." That is, if I had been brave enough to confront Yudkowsky by myself, _maybe_ there was some hope of him seeing that the game he was playing was wrong. But because I was so cowardly as to need social proof (because I believed that an ordinary programmer such as me was as a mere worm in the presence of the great Eliezer Yudkowsky), it must have just looked to him like an illegible social plot originating from Michael.

One might wonder why this was such a big deal to us. Okay, so Yudkowsky had prevaricated about his own philosophy of language for transparently political reasons, and couldn't be moved to clarify in public even after me and my posse spent an enormous amount of effort trying to explain the problem. So what? Aren't people wrong on the internet all the time?

-Ben explained: Yudkowsky had set in motion a marketing machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on the claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he, almost uniquely, was not. If the claim was _true_, it was important to make, and to actually extract that labor. "Work for me or the world ends badly," basically. 
+Ben explained: Yudkowsky had set in motion a marketing machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on the claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he, almost uniquely, was not. "Work for me or the world ends badly," basically. If the claim was _true_, it was important to make, and to actually extract that labor.

But we had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was an _extremely high_ standard, and not a special flaw of Yudkowsky in the current environment). If Yudkowsky couldn't be bothered to live up to his own stated standards or withdraw his validation from the machine he built after we had _tried_ to talk to him privately, then we had a right to talk in public about what we thought was going on.

-This wasn't about direct benefit _vs._ harm. This was about what, substantively, the machine was doing. They claimed to be cultivating an epistemically rational community, while in fact building an army of loyalists.
+This wasn't about direct benefit _vs._ harm. This was about what, substantively, the machine and its operators were doing. They claimed to be cultivating an epistemically rational community, while in fact building an army of loyalists.

Ben compared the whole set-up to that of Eliza the spambot therapist in my story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/): regardless of the _initial intent_, scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock, maintaining a facsimile of dialogue in ways well-calibrated to continue to generate revenue. Minds like mine wouldn't survive long-run in this ecosystem. If we wanted minds that do "naïve" inquiry (instead of playing savvy Kolmogorov games) to survive, we needed an interior that justified that level of trust.

------

-Given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing, and the maintainers of the marketing machine for the old thing.
+Given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing, and the operators of the marketing machine for the old thing.

_I_ had been hyperfocused on prosecuting my Category War, but the reason Michael and Ben and Jessica were willing to help me out on that, was not because they particularly cared about the gender and categories example, but because it seemed like a manifestation of a _more general_ problem of epistemic rot in "the community".

Ben had [previously](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [written](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) a lot [about](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) [problems](http://benjaminrosshoffman.com/against-responsibility/) [with](http://benjaminrosshoffman.com/against-neglectedness/) Effective Altruism. 
Jessica had had a bad time at MIRI, as she had told me back in March, and would [later](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam) [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards). To what extent were my thing, and Ben's thing, and Jessica's thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? -I believed that there _was_ a real problem, but didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But _culture_ is the sum of lots and lots of little micro-actions by lots and lots of people. If your _entire culture_ has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!"—any ideologue or religious person would agree with _that_. +I believed that there _was_ a real problem, but didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But _culture_ is the sum of lots and lots of little micro-actions by lots and lots of people. If your _entire culture_ has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!"—any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on our brain's pattern-matching faculties to sum over thousands of little micro-actions as a _gestalt_, rather than having the kind of robust, precise representation a well-designed AI could compute with. 
-Ben called it the Blight, after the rogue superintelligence in _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it's that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives rather than acting on coherent beliefs about the world, using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game. +Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it's that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives rather than acting on coherent beliefs about the world—using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game. -When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate had privately clarified to Ben that the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified." +When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate had privately clarified to Ben that the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified." This seemed to me like the sort of thing where a particularly principled (naive?) person might say, "That's _lying for political reasons!_ That's _contrary to the moral law!_" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works, and things could not possibly be otherwise." @@ -595,7 +603,9 @@ I thought explaining the Blight to an ordinary grown-up was going to need _eithe The schism introduced new pressures on my social life. On 20 April, I told Michael that I still wanted to be friends with people on both sides of the factional schism (in the frame where recent events were construed as a factional schism), even though I was on this side. 
Michael said that we should unambiguously regard Anna and Eliezer as criminals or enemy combatants (!!), who could claim no rights in regard to me or him.

-I don't think I "got" the framing at this time. War metaphors sounded Scary and Mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soliders on the other side of a war _aren't_ particularly morally blameworthy as individuals: their actions are just being controlled by the Power they're embedded in.
+I don't think I "got" the framing at this time. War metaphors sounded Scary and Mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soldiers on the other side of a war _aren't_ particularly morally blameworthy as individuals:[^soldiers] their actions are being directed by the Power they're embedded in.
+
+[^soldiers]: At least, not blameworthy _in the same way_ as someone who committed the same violence as an individual.

I wrote to Anna:

@@ -813,15 +823,70 @@ is make this simple thing established "rationalist" knowledge:

>
> This is **literally _modus ponens_**. I don't understand how you expect people to trust you to save the world with a research community that _literally cannot perform modus ponens._
>
-> [redacted ...] See, I thought you were playing on the chessboard of _being correct about rationality_. Such that, if you accidentally mislead people about your own philosophy of language, you could just ... issue a clarification? I and Michael and Ben and Sarah and [redacted] _and Jessica_ wrote to you about this and explained the problem in _painstaking_ detail [... redacted ...] Why? **Why is this so hard?!**
+> [redacted ...] See, I thought you were playing on the chessboard of _being correct about rationality_. Such that, if you accidentally mislead people about your own philosophy of language, you could just ... issue a clarification? I and Michael and Ben and Sarah and [redacted] _and Jessica_ wrote to you about this and explained the problem in _painstaking_ detail, **and you stonewalled us.** Why? **Why is this so hard?!**
>
> [redacted]
>
> No. The thing that's been driving me nuts for twenty-one months is that I expected Eliezer Yudkowsky to tell the truth. I remain,
>
> Your heartbroken student,
+> [...]
+
+I followed it up with another email after I woke up the next morning:
+
+> To: Eliezer Yudkowsky <[redacted]>
+> Cc: Anna Salamon <[redacted]>
+> Date: 13 September 2020 11:02 _a.m._
+> Subject: Re: out of patience
+>
+> [... redacted] The sinful and corrupted part wasn't the _initial_ Tweets; the sinful and corrupted part is this **bullshit stonewalling** when your Twitter followers and me and Michael and Ben and Sarah and [redacted] and Jessica tried to point out the problem. I've _never_ been arguing against your private universe [... redacted]; the thing I'm arguing against in ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) (and **my [unfinished draft sequel](https://github.com/zackmdavis/Category_War/blob/cefa98c3abe/unnatural_categories_are_optimized_for_deception.md)**, although that's more focused on what Scott wrote) is the **_actual text_ you _actually published_, not your private universe.**
+>
+> [... 
redacted] you could just **publicly clarify your position on the philosophy of language** the way an intellectually-honest person would do if they wanted their followers to have correct beliefs about the philosophy of language?!
+>
+> You wrote:
+>
+>> [Using language in a way](https://twitter.com/ESYudkowsky/status/1067291243728650243) _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying.
+>
+>> [Now, maybe as a matter of policy](https://twitter.com/ESYudkowsky/status/1067294823000887297), you want to make a case for language being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to.
+>
+> The problem with "it's a policy debate about how to use language" is that it completely elides the issue that some ways of using language _perform better_ at communicating information, such that **attempts to define new words or new senses of _existing_ words should come with a justification for why the new sense is _useful for conveying information_, and that _is_ a matter of Truth.** Without such a justification, it's hard to see why you would _want_ to redefine a word _except_ to mislead people with strategic equivocation.
+>
+> It is _literally true_ that Eliezer Yudkowsky is a white supremacist (if I'm allowed to define "white supremacist" to include "someone who [once linked to the 'Race and intelligence' _Wikipedia_ page](https://www.lesswrong.com/posts/faHbrHuPziFH7Ef7p/why-are-individual-iq-differences-ok) in a context that implied that it's an empirical question").
+>
+> It is _literally true_ that 2 + 2 = 6 (if I'm allowed to define '2' as •••-many).
+>
+> You wrote:
+>
+>> [The more technology advances, the further](https://twitter.com/ESYudkowsky/status/1067490362225156096) we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts.
+>
+> That's kind of like defining Solomonoff induction, and then saying, "Having said this, we've built AGI." No, you haven't said all the facts! Configuration space is _very high-dimensional_; we don't have _access_ to the individual points. Trying to specify the individual points ("say all the facts") would be like what you wrote about in ["Empty Labels"](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)—"not just that I can vary the label, but that I can get along just fine without any label at all." Since that's not possible, we need to group points in the space together so that we can use observations from the coordinates that we _have_ observed to make probabilistic inferences about the coordinates we haven't. But there are _mathematical laws_ governing how well different groupings perform, and those laws _are_ a matter of Truth, not a mere policy debate.
+>
+> [... redacted ...]
+>
+> But if behavior at equilibrium isn't deceptive, there's just _no such thing as deception_; I wrote about this on Less Wrong in ["Maybe Lying Can't Exist?!"](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) (drawing on the academic literature about sender–receiver games). I don't think you actually want to bite that bullet? 
+> +> **In terms of information transfer, there is an isomorphism between saying "I reserve the right to lie 5% of the time about whether something is a member of category C" and adopting a new definition of C that misclassifies 5% of instances with respect to the old definition.** +> +> Like, I get that you're ostensibly supposed to be saving the world and you don't want randos yelling at you in your email about philosophy. But **I thought the idea was that we were going to save the world [_by means of_ doing unusually clear thinking?**](https://arbital.greaterwrong.com/p/executable_philosophy) +> +> [Scott wrote](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (with an irrelevant object-level example redacted): "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life." (Okay, he added a clarification after I spent Christmas yelling at him; but I think he's still substantially confused in ways that I address in my forthcoming draft post.) +> +> [You wrote](https://twitter.com/ESYudkowsky/status/1067198993485058048): "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning." +> +> I think I've argued pretty extensively this is wrong! **I'm eager to hear counterarguments if you think I'm getting the philosophy wrong.** But ... **"people live in different private universes" is _not a counterargument_.** +> +> **It makes sense that you don't want to get involved in gender politics. That's why I wrote "... Boundaries?" using examples about dolphins and job titles, and why my forthcoming post has examples about bleggs and artificial meat.** This shouldn't be _expensive_ to clear up?! This should take like, five minutes? (I've spent twenty-one months of my life on this.) Just one little _ex cathedra_ comment on Less Wrong or _somewhere_ (**it doesn't have to be my post, if it's too long or I don't deserve credit or whatever**; I just think the right answer needs to be public) affirming that you haven't changed your mind about 37 Ways Words Can Be Wrong? Unless you _have_ changed your mind, of course? +> +> I can imagine someone observing this conversation objecting, "[...] why are you being so greedy? We all know the _real_ reason you want to clear up this philosophy thing in public is because it impinges on your gender agenda, but Eliezer _already_ threw you a bone with the ['there's probably more than one type of dypshoria' thing.](https://twitter.com/ESYudkowsky/status/1108277090577600512) That was already a huge political concession to you! That makes you _more_ than even; you should stop being greedy and leave Eliezer alone." +> +> But as [I explained in my reply](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) criticizing why I think that argument is _wrong_, the whole mindset of public-arguments-as-political-favors is _crazy_. **The fact that we're having this backroom email conversation at all (instead of just being correct about the philosophy of language on Twitter) is _corrupt_!** I don't want to strike a deal in a political negotiation; I want _shared maps that reflect the territory_. I thought that's what this "rationalist community" thing was supposed to do? Is that not a thing anymore? 
If we can't do the shared-maps thing when there's any hint of political context (such that now you _can't_ clarify the categories thing, even as an abstract philosophy issue about bleggs, because someone would construe that as taking a side on whether trans people are Good or Bad), that seems really bad for our collective sanity?! (Where collective sanity is potentially useful for saving the world, but is at least a quality-of-life improver if we're just doomed to die in 15 years no matter what.) +> +> **I really used to look up to you.** In my previous interactions with you, I've been tightly [cognitively constrained](http://www.hpmor.com/chapter/57) by hero-worship. I was already so starstruck that _Eliezer Yudkowsky knows who I am_, that the possibility that _Eliezer Yudkowsky might disapprove of me_, was too terrifying to bear. I really need to get over that, because it's bad for me, and [it's _really_ bad for you](https://www.lesswrong.com/posts/cgrvvp9QzjiFuYwLi/high-status-and-stupidity-why). I remain, +> +> Your heartbroken student, +> [...] + -[TODO: also excerpt out-of-patience followup email?] [TODO: Sep 2020 categories clarification from EY—victory?! https://www.facebook.com/yudkowsky/posts/10158853851009228 @@ -1159,7 +1224,7 @@ But I don't, think that everybody knows. And I'm not, giving up that easily. Not Yudkowsky [defends his behavior](https://twitter.com/ESYudkowsky/status/1356812143849394176): -> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either. +> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either. [TODO: first of all, "A Rational Arugment" is very explicit about "not have been as willing to Tweet a truth helping the side" meaning you've crossed the line; second of all, it's if anything more plausible that trans women will matter to AGI, as I pointed out in my email] @@ -1313,11 +1378,32 @@ I don't doubt Yudkowsky could come up with some clever casuistry why, _technical [TODO: elaborate on how 2007!Yudkowsky and 2021!Xu are saying the opposite things if you just take a plain-language reading and consider, not whether individual sentences can be interpreted as "true", but what kind of _optimization_ the text is doing to the behavior of receptive readers] -[TODO: body odor anecdote] +On the offhand chance that Eliezer Yudkowsky happens to be reading this—if someone _he_ trusts (MIRI employees?) 
genuinely thinks it would be good for the lightcone to bring this paragraph to his attention—he should know that if he _wanted_ to win back _some_ of the trust and respect he's lost from me and everyone I can influence—not _all_ of it, but _some_ of it[^some-of-it]—I think it would be really easy. All he would have to do is come clean about the things he's _already_ misled people about.
+
+[^some-of-it]: Coming clean _after_ someone writes a 70,000-word memoir explaining how dishonest you've been, engenders less trust than coming clean spontaneously of your own accord.
+
+I don't, actually, expect people to spontaneously blurt out everything they believe to be true, that Stalin would find offensive. "No comment" would be fine. Even selective argumentation that's _clearly labeled as such_ would be fine. (There's no shame in being an honest specialist who says, "I've mostly thought about these issues through the lens of ideology _X_, and therefore can't claim to be comprehensive; if you want other perspectives, you'll have to read other authors and think it through for yourself.")
+
+What's _not_ fine is selective argumentation while claiming "confidence in [your] own ability to independently invent everything important that would be on the other side of the filter and check it [yourself] before speaking" when you _very obviously have done no such thing_. That's _not_ "no comment"! Having _already_ chosen to comment, he can't reasonably expect any self-respecting rationalist to take his "epistemic hero" bluster seriously if he's not going to reply to the obvious objections that have been made with standing and warrant. To wit—
+
+ * Yudkowsky is _on the record_ [claiming that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix", he is "over 50% probability at this point that at least 20% of the ones with penises are actually women". What ... does that mean? What is the _truth condition_ of the word 'woman' in that sentence? This can't be the claim that 20% of males would benefit from a gender transition, and in that sense _become_ "transsexual women"; the claim stated in the post is that members of this group are _already_ "actually women", "female-minds-in-male-bodies". How does Yudkowsky reconcile this claim with the preponderance of male-typical rather than female-typical behavior in this group (_e.g._, in gynephilic sexual orientation, or [in vocational interests](/2020/Nov/survey-data-on-cis-and-trans-women-among-haskell-programmers/))? On the other hand, if Yudkowsky changed his mind and no longer believes that 20% of Bay Area males of European descent have female brains, can he state that for the public record? _Reply!_
+
+ * Yudkowsky is _on the record_ [claiming that](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421986539228&reply_comment_id=10159423713134228) he "do[es] not know what it feels like from the inside to feel like a pronoun is attached to something in your head much more firmly than 'doesn't look like an Oliver' is attached to something in your head." As I explained in "Challenges to Yudkowsky's Pronoun Reform Proposal", [quoting examples from Yudkowsky's published writing in which he treated sex and pronouns as synonymous just as one would expect a native American English speaker born in 1979 to do](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/#look-like-an-oliver), this self-report is not plausible. 
The claim may not have been a "lie" _in the sense_ of Yudkowsky consciously harboring deliberative intent to deceive at the time he typed that sentence, but it _is_ a "lie" in the sense that the claim is _false_ and Yudkowsky _knows_ it's false (although its falsehood may not have been salient in the moment of typing the sentence). If Yudkowsky expects people to believe that he never lies, perhaps he could correct this accidental lie after it's been pointed out? _Reply!_
+
+ * In a comment on his February 2021 Facebook post on pronoun reform, Yudkowsky [claims that](https://www.facebook.com/yudkowsky/posts/pfbid0331sBqRLBrDBM2Se5sf94JurGRTCjhbmrYnKcR4zHSSgghFALLKCdsG6aFbVF9dy9l?comment_id=10159421833274228&reply_comment_id=10159421901809228) "in a half-Kolmogorov-Option environment where [...] you can get away with attaching explicit disclaimers like this one, it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment". Some interesting potential counterevidence to this "not community-harmful" claim comes in the form of [a highly-upvoted (110 karma at press time) comment by _Less Wrong_ administrator Oliver Habryka](https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY) on the _Less Wrong_ mirror of my rebuttal. Habryka writes:

> [...] basically everything in this post strikes me as "obviously true" and I had a very similar reaction to what the OP says now, when I first encountered the Eliezer Facebook post that this post is responding to.
>
> And I do think that response mattered for my relationship to the rationality community. I did really feel like at the time that Eliezer was trying to make my map of the world worse, and it shifted my epistemic risk assessment of being part of the community from "I feel pretty confident in trusting my community leadership to maintain epistemic coherence in the presence of adversarial epistemic forces" to "well, I sure have to at least do a lot of straussian reading if I want to understand what people actually believe, and should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false in order to not have to deal with complicated political issues".
>
> I do think that was the right update to make, and was overdetermined for many different reasons, though it still deeply saddens me.

Again, that's the administrator of Yudkowsky's _own website_ saying that he's deeply saddened that he now expects Yudkowsky to _make up sophisticated stories for why pretty obviously true things are false_ (!!). Is that ... _not_ a form of harm to the community? If that's not community-harmful in Yudkowsky's view, then what would be an example of something that _would_ be? _Reply, motherfucker!_

... but I'm not, holding my breath. If Yudkowsky _wants_ to reply—if he _wants_ to try to win back some of the trust and respect he's lost from me—he's totally _welcome_ to. (_I_ don't censor my comment sections of people whom it "looks like it would be unhedonic to spend time interacting with".) 
-[TODO: if he's reading this, win back respect— reply, motherfucker] -[TODO: I've given up talking to the guy (nearest unblocked strategy sniping in Eliezerfic doesn't count)] +[TODO: I've given up talking to the guy (nearest unblocked strategy sniping in Eliezerfic doesn't count), my last email, giving up on hero-worship I don't want to waste any more of his time. I owe him that much.] [TODO: the Death With Dignity era diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index 1e966db..61c5b1d 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1,7 +1,17 @@ +on deck— + +_ quote second out of patience email +_ giving up on him, don't want to waste any more time, last email + + With internet available— _ me remarking to "Wilhelm" that I think I met Vanessa at Solstice once, commented on Greg Egan _ debate with Benquo and Jessicata +_ more Yudkowsky Facebook comment screenshots +_ "20% of the ones with penises" someone in the comments saying, "It is a woman's body", and Yudkowsky saying "duly corrected" +_ that neuroscience paper backing the _ Prudentbot § from "Robust Cooperation" paper +_ JDP on warrant and standing _ compile Categories references from the Dolphin War Twitter thread _ tussle with Ruby on "Causal vs. Social Reality" _ when did I ask Leon about getting easier tasks? @@ -13,6 +23,7 @@ _ dath ilan conspiracy references far editing tier— +_ make sure to quote Yudkowsky's LW moderation policy before calling back to it _ tie off Anna's plot arc? _ quote one more "Hill of Meaning" Tweet emphasizing fact/policy distinction _ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off rhetoric" came from) @@ -1423,9 +1434,9 @@ Kathryn Paige Harden, _The Genetic Lottery: Why DNA Matters for Social Equality_ ("Fake Optimization Critiera") -John Snygg, _A New Approach to Differential Geometry Using Clifford's Geometric Algebra_ recounts the Arabic mathematician al-Biruni (973–1048). +John Snygg, _A New Approach to Differential Geometry Using Clifford's Geometric Algebra_ (§4.7.3) recounts the Arabic mathematician al-Biruni (973–1048). -> More is known about al-Briruni than most Islamic mathematicians because he included bits of autobiographical writings in some of his academic publications. In one of these, _Shadows_, he relates an encounter with a hard-line orthodox cleric. THe cleric admonished al-Biruni because he had used an astronomical instrument with Byzantine months engraved on it to determine the time of prayers. Al-Briuni replied: +> More is known about al-Briruni than most Islamic mathematicians because he included bits of autobiographical writings in some of his academic publications. In one of these, _Shadows_, he relates an encounter with a hard-line orthodox cleric. The cleric admonished al-Biruni because he had used an astronomical instrument with Byzantine months engraved on it to determine the time of prayers. Al-Briuni replied: >> "The Byzantines also eat food. Then do not imitate them in this!" (Reversed Stupidity Is Not Intelligence)