From: M. Taylor Saotome-Westlake Date: Sat, 3 Sep 2022 01:12:38 +0000 (-0700) Subject: memoir: the making of "... Boundaries?" behind the scenes X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=037e02623b9ec6e5e09668eb3a249d5b8995fd34;p=Ultimately_Untrue_Thought.git memoir: the making of "... Boundaries?" behind the scenes --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index c17e398..f1d4512 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -408,7 +408,7 @@ But ... if there's some _other_ reason you suspect there might be multiple speci I asked the posse if this analysis was worth sending to Yudkowsky. Michael said it wasn't worth the digression. He asked if I was comfortable generalizing from Scott's behavior, and what others had said about fear of speaking openly, to assuming that something similar was going on with Eliezer? If so, then now that we had common knowledge, we needed to confront the actual crisis, which was that dread was tearing apart old friendships and causing fanatics to betray everything that they ever stood for while its existence was still being denied. -Another thing that happened that week was that former MIRI researcher Jessica Taylor joined our posse (being at an in-person meeting with Ben and Sarah and another friend on the seventeenth, and getting tagged in subsequent emails). Significantly for political purposes, Jessica is trans. We didn't have to agree on all gender issues for her to see the epistemology problem with "... Not Man for the Categories". (On the seventeenth, when I lamented the state of a world that incentivized us to be political enemies, her response was, "Well, we could talk about it first.") Michael said that me and Jess together had more moral authority than either of us alone. +Another thing that happened that week was that former MIRI researcher Jessica Taylor joined our posse (being at an in-person meeting with Ben and Sarah and another friend on the seventeenth, and getting tagged in subsequent emails). Significantly for political purposes, Jessica is trans. We didn't have to agree up front on all gender issues for her to see the epistemology problem with "... Not Man for the Categories" and to say that maintaining a narcissistic fantasy by controlling category boundaries wasn't what _she_ wanted, as a trans person. (On the seventeenth, when I lamented the state of a world that incentivized us to be political enemies, her response was, "Well, we could talk about it first.") Michael said that me and Jessica together had more moral authority than either of us alone. As it happened, I ran into Scott on the train that Friday, the twenty-second. He said that he wasn't sure why the oft-repeated moral of "A Human's Guide to Words" had been "You can't define a word any way you want" rather than "You _can_ define a word any way you want, but then you have to deal with the consequences." @@ -420,11 +420,13 @@ On Discord in January, Kelsey Piper had told me that everyone else experienced t I [didn't want to bring it up at the time because](https://twitter.com/zackmdavis/status/1088459797962215429) I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but ... Scott _did_ seem to be pretty explicit that his position was about happiness rather than usability? 
If Kelsey _thought_ she agreed with Scott, but actually didn't, that was kind of bad for our collective sanity, wasn't it?

-As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkior is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their general ability to do arithmetic. We weren't not talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences, don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice!
+As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkor is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their _general_ ability to do arithmetic. We weren't talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice!

-I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Yudkowsky would _have_ to understand even if Scott didn't. But after consultation with the posse, I concluded that further email prosecution was not useful at this time; the philosophy argument would work better as a public _Less Wrong_ post. So my revised Category War to-do list was:
+I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Eliezer would _have_ to understand even if Scott didn't. (Scott had claimed that he could use gerrymandered categories and still be just as good at making predictions—but that's not true if we're talking about the _internal_ use of categories as a [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms), rather than mere verbal behavior: it's always easy to _say_ "_X_ is a _Y_" for arbitrary _X_ and _Y_ if the stakes demand it.)

- * Send the brief wrapping-up/end-of-conversation email to Scott (with the Discord anecdote and commentary on the orc story).
+But after consultation with the posse, I concluded that further email prosecution was not useful at this time; the philosophy argument would work better as a public _Less Wrong_ post. So my revised Category War to-do list was:
+
+ * Send the brief wrapping-up/end-of-conversation email to Scott (with the Discord anecdote with Kelsey and commentary on the orc story).
 * Mentally write off Scott, Eliezer, and the so-called "rationalist" community as a loss so that I wouldn't be in horrible emotional pain from cognitive dissonance all the time.
 * Write up the mathy version of the categories argument for _Less Wrong_ (which I thought might take a few months—I had a dayjob, and I write slowly, and might need to learn some new math, which I'm also slow at).
 * _Then_ email the link to Scott and Eliezer asking for a signal-boost and/or court ruling.
@@ -437,7 +439,7 @@ Ben had previously written (in the context of the effective altruism movement) a

He was obviously correct that this was a distortionary force relative to what ideal Bayesian agents would do, but I was worried that when we're talking about criticism of _people_ rather than ideas, the removal of the distortionary force would just result in an ugly war (and not more truth). Criticism of institutions and social systems _should_ be filed under "ideas" rather than "people", but the smaller-scale you get, the harder this distinction is to maintain: criticizing, say, "the Center for Effective Altruism", somehow feels more like criticizing Will MacAskill personally than criticizing "the United States" does, even though neither CEA nor the U.S. is a person.

-This is why I felt like I couldn't give up faith that [honest discourse _eventually_ wins](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/). Under my current strategy and consensus social norms, I could criticize Scott or Kelsey or Ozy's _ideas_ without my social life dissolving into a war of all against all, whereas if I were to give in to the temptation to flip a table and say, "Okay, now I _know_ you guys are just fucking with me," then I didn't see how that led anywhere good, even if they really _are_ just fucking with me. 
+This is why I felt like I couldn't give up faith that [honest discourse _eventually_ wins](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/). Under my current strategy and consensus social norms, I could criticize Scott or Kelsey or Ozy's _ideas_ without my social life dissolving into a war of all against all, whereas if I were to give in to the temptation to flip a table and say, "Okay, now I _know_ you guys are just fucking with me," then I didn't see how that led anywhere good, even if they really _were_ just fucking with me. Jessica explained what she saw as the problem with this. What Ben was proposing was _creating clarity about behavioral patterns_. I was saying that I was afraid that creating such clarity is an attack on someone. But if so, then my blog was an attack on trans people. What was going on here? @@ -445,48 +447,33 @@ Socially, creating clarity about behavioral patterns _is_ construed as an attack But _selectively_ creating clarity down but not up power gradients just reinforces existing power relations—just like how selectively criticizing arguments with politically unfavorable conclusions only reinforces your current political beliefs. I shouldn't be able to get away with claiming that [calling non-exclusively-androphilic trans women delusional perverts](/2017/Mar/smart/) is okay on the grounds that that which can be destroyed by the truth should be, but that calling out Alexander and Yudkowsky would be unjustified on the grounds of starting a war or whatever. If I was being cowardly or otherwise unprincipled, I should own that instead of generating spurious justifications. Jessica was on board with a project to tear down narcissistic fantasies in general, but not on board with a project that starts by tearing down trans people's narcissistic fantasies, but then emits spurious excuses for not following that effort where it leads. -Somewhat apologetically, I replied that the distinction between truthfully, publicly criticizing group identities and _named individuals_ still seemed very significant to me? I would be way more comfortable writing [a scathing blog post about the behavior of "rationalists"](/2017/Jan/im-sick-of-being-lied-to/), than about a specific person not adhering to good discourse norms in an email conversation that they had good reason to expect to be private. I thought I was consistent about this: contrast my writing to the way that some anti-trans writers name-and-shame particular individuals. (The closest I had come was [mentioning Danielle Muscato as someone who doesn't pass](/2018/Dec/untitled-metablogging-26-december-2018/#photo-of-danielle-muscato)—and even there, I admitted it was "unclassy" and done in desperation of other ways to make the point having failed.) I had to acknowledge that criticism of non-exclusively-androphilic trans women in general _implied_ criticism of Jessica, and criticism of "rationalists" in general _implied_ criticism of Yudkowsky and Alexander and me, but the extra inferential step and "fog of probability" seemed useful for making the speech act less of an attack? Was I wrong? +Somewhat apologetically, I replied that the distinction between truthfully, publicly criticizing group identities and _named individuals_ still seemed very significant to me?—and that avoiding leaking info from private conversations seemed like an important obligation, too. 
I would be way more comfortable writing [a scathing blog post about the behavior of "rationalists"](/2017/Jan/im-sick-of-being-lied-to/), than about a specific person not adhering to good discourse norms in an email conversation that they had good reason to expect to be private. I thought I was consistent about this: contrast my writing to the way that some anti-trans writers name-and-shame particular individuals. (The closest I had come was [mentioning Danielle Muscato as someone who doesn't pass](/2018/Dec/untitled-metablogging-26-december-2018/#photo-of-danielle-muscato)—and even there, I admitted it was "unclassy" and done in desperation of other ways to make the point having failed.) I had to acknowledge that criticism of non-exclusively-androphilic trans women in general _implied_ criticism of Jessica, and criticism of "rationalists" in general _implied_ criticism of Yudkowsky and Alexander and me, but the extra inferential step and "fog of probability" seemed useful for making the speech act less of an attack? Was I wrong?

-Michael said this was importantly backwards: less precise targeting is more violent. If someone said, "Michael Vassar is a terrible person", he would try to be curious, but if they don't have an argument, he would tend to worry more "for" them and less "about" them, whereas if someone said, "The Jews are terrible people", he saw that more serious threat to his safety. (And rationalists and trans women are exactly the sort of people that get targeted by the same people who target Jews.)
+Michael said this was importantly backwards: less precise targeting is more violent. If someone said, "Michael Vassar is a terrible person", he would try to be curious, but if they didn't have an argument, he would tend to worry more "for" them and less "about" them, whereas if someone said, "The Jews are terrible people", he saw that as a more serious threat to his safety. (And rationalists and trans women are exactly the sort of people that get targeted by the same people who target Jews.)

----

Polishing the advanced categories argument from earlier email drafts into a solid _Less Wrong_ post didn't take that long: by 6 April, I had an almost-complete draft of the new post, ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), that I was pretty happy with.

-The title (note: "boundaries", plural) was a play off of ["Where to the Draw the Boundary?"](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) (note: "boundary", singular), a post from Yudkowsky's original Sequence on the ways in which words can be wrong.
-
-Notably, in "... Boundary?", Yudkowsky asserts (without argument, as something that all educated people already know) that dolphins don't form a natural category with fish ("Once upon a time it was thought that the word 'fish' included dolphins [...] you could stop playing nitwit games and admit that dolphins don't belong on the fish list"). But Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) directly contradicts this, asserting that there's nothing wrong with with biblical word _dagim_ encompassing both fish and cetaceans (dolphins and whales). So who's right, Yudkowsky (2008) or Alexander (2014)? Is there a problem with dolphins being "fish", or not?
-
-In "... Boundaries?", I unify the two positions and explain how both Yudkowsky and Alexander have a point. 
-
-
-

+The title (note: "boundaries", plural) was a play off of ["Where to Draw the Boundary?"](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) (note: "boundary", singular), a post from Yudkowsky's original Sequence on the ways in which words can be wrong. In "... Boundary?", Yudkowsky asserts (without argument, as something that all educated people already know) that dolphins don't form a natural category with fish ("Once upon a time it was thought that the word 'fish' included dolphins [...] you could stop playing nitwit games and admit that dolphins don't belong on the fish list"). But Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) directly contradicts this, asserting that there's nothing wrong with the biblical Hebrew word _dagim_ encompassing both fish and cetaceans (dolphins and whales). So who's right, Yudkowsky (2008) or Alexander (2014)? Is there a problem with dolphins being "fish", or not?

+In "... Boundaries?", I unify the two positions and explain how both Yudkowsky and Alexander have a point: in high-dimensional configuration space, there's a cluster of finned water-dwelling animals in the subspace of the dimensions along which finned water-dwelling animals are similar to each other, and a cluster of mammals in the subspace of the dimensions along which mammals are similar to each other, and dolphins belong to _both_ of them. _Which_ subspace you pay attention to can legitimately depend on your values: if you don't care about predicting or controlling some particular variable, you have no reason to look for clusters along that dimension.

+But _given_ a subspace of interest, the _technical_ criterion of drawing category boundaries around [regions of high density in configuration space](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) still applies. There is Law governing which uses of communication signals transmit which information, and the Law can't be brushed off with, "whatever, it's a pragmatic choice, just be nice." I demonstrate the Law with a couple of simple mathematical examples: if you redefine a codeword that originally pointed to one cluster, to also include another, that changes the quantitative predictions you make about an unobserved coordinate given the codeword; if an employer starts giving the title "Vice President" to line workers, that decreases the mutual information between the job title and properties of the job.

(Jessica and Ben's [discussion of the job title example in relation to the _Wikipedia_ summary of Jean Baudrillard's _Simulacra and Simulation_ ended up getting published separately](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/), and ended up taking on a life of its own in [future posts](http://benjaminrosshoffman.com/simulacra-subjectivity/), [including](https://thezvi.wordpress.com/2020/06/15/simulacra-and-covid-19/) by [other authors](https://thezvi.wordpress.com/2020/08/03/unifying-the-simulacra-definitions/).)

Sarah asked if the math wasn't a bit overkill: was it really necessary to make the point that good definitions should be about classifying the world, rather than about what's pleasant or politically expedient to say? I thought the math was _really important_ as an appeal to principle—and [as intimidation](https://slatestarcodex.com/2014/08/10/getting-eulered/). (As it is written, [_the tenth virtue is precision!_](http://yudkowsky.net/rational/virtues/) Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.)
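
(To give the flavor of the two mathematical examples, here is a toy calculation of my own. The numbers and helper functions are made up for illustration, not taken from the post: suppose ten executives draw a salary of 200 and ninety line workers draw a salary of 50, and ask what the codeword "VP" lets a listener predict about the unobserved salary coordinate before and after the employer redraws the title's boundary to cover everyone.)

```python
from collections import Counter
from math import log2

# Toy world (illustrative numbers only): ten executives at salary 200,
# ninety line workers at salary 50.
jobs = [("executive", 200)] * 10 + [("line", 50)] * 90

def title_of(job, inflated):
    # Original code: "VP" points only at the executive cluster.
    # Inflated code: the boundary is redrawn so that "VP" covers everyone.
    return "VP" if (job == "executive" or inflated) else "worker"

def expected_salary_given_vp(inflated):
    # The quantitative prediction the codeword licenses about the
    # unobserved salary coordinate.
    salaries = [s for job, s in jobs if title_of(job, inflated) == "VP"]
    return sum(salaries) / len(salaries)

def mutual_information(pairs):
    # I(T;S) in bits, from the empirical joint distribution of
    # (title, salary) pairs.
    n = len(pairs)
    joint = Counter(pairs)
    p_title = Counter(t for t, _ in pairs)
    p_salary = Counter(s for _, s in pairs)
    return sum((c / n) * log2((c / n) / (p_title[t] / n * p_salary[s] / n))
               for (t, s), c in joint.items())

for inflated in (False, True):
    pairs = [(title_of(job, inflated), s) for job, s in jobs]
    print(inflated, expected_salary_given_vp(inflated), mutual_information(pairs))
# False 200.0 0.4689...  ("VP" picks out the high-salary cluster)
# True 65.0 0.0          ("VP" now only predicts the base rate)
```

(Redefining the codeword doesn't change anyone's actual job, but it changes what the signal can do for a listener: the prediction licensed by "VP" degrades from an exact salary to the population base rate, and the mutual information between title and salary drops from about 0.469 bits to zero.)
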
"... Boundaries?" explains all this in the form of discourse with a hypothetical interlocutor arguing for the I-can-define-a-word-any-way-I-want position. In the hypothetical interlocutor's parts, I wove in verbatim quotes (without attribution) from Alexander ("an alternative categorization system is not an error, and borders are not objectively true or false") and Yudkowsky ("You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", "Using language in a way _you_ dislike is not lying. The propositions you claim false [...] is not what the [...] is meant to convey, and this is known to everyone involved; it is not a secret"), and Bensinger ("doesn't unambiguously refer to the thing you're trying to point at").

My thinking here was that the posse's previous email campaigns had been doomed to failure by being too closely linked to the politically contentious object-level topic which reputable people had strong incentives not to touch with a ten-foot pole. So if I wrote this post _just_ explaining what was wrong with the claims Yudkowsky and Alexander had made about the philosophy of language, with perfectly innocent examples about dolphins and job titles, that would remove the political barrier and [leave a line of retreat](https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat) for Yudkowsky to correct the philosophy of language error. And then if someone with a threatening social-justicey aura were to say, "Wait, doesn't this contradict what you said about trans people earlier?", stonewall them. (Stonewall _them_ and not _me_!)

I could see a case that it was unfair of me to include subtext and then expect people to engage with the text, but if we weren't going to get into full-on gender politics on _Less Wrong_ (which seemed like a bad idea) and gender politics _was_ nonetheless motivating an epistemology error, I wasn't sure what else I was supposed to do! I was pretty constrained here!

(I did regret having accidentally "poisoned the well" the previous month by impulsively sharing last year's ["Blegg Mode"](/2018/Feb/blegg-mode/) [as a _Less Wrong_ linkpost](https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode). "Blegg Mode" had originally been drafted as part of "... To Make Predictions" before getting spun off as a separate post. 
Frustrated in March at our failing email campaign, I thought it was politically "clean" enough to belatedly share, but it proved to be insufficiently deniably allegorical. It's plausible that some portion of the _Less Wrong_ audience would have been more receptive to "... Boundaries?" as not-politically-threatening philosophy, if they hadn't been alerted to the political context by the trainwreck in the comments on the "Blegg Mode" linkpost.)

-
-
-
-["...Boundaries?" quotes from SA and EY—
-> an alternative categorization system is not an error, and borders are not objectively true or false.
-
-> You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning]
-
-[from Rob]
-> doesn't unambiguously refer to the thing you're trying to point at
-
-> Using language in a way _you_ dislike is not lying. The propositions you claim false—about new job tasks, increased pay and authority—is not what the title is meant to convey, and this is known to everyone involved; it is not a secret.
-
-[earlier: cover Scott's claim that he can make just as accurate predictions only makes sense as being about verbal behavior, not cognitive algorithms, the post explains this]
-
-
-
-[TODO: after some bouncing off the posse, what was originally an email draft became a public _Less Wrong_ post, "Where to Draw the Boundaries?" (note, plural)
- * Wasn't the math overkill?
- * math is important for appeal to principle—and as intimidation https://slatestarcodex.com/2014/08/10/getting-eulered/
- * four simulacra levels got kicked off here
- * I could see that I'm including subtext and expecting people to only engage with the text, but if we're not going to get into full-on gender-politics on Less Wrong, but gender politics is motivating an epistemology error, I'm not sure what else I'm supposed to do! I'm pretty constrained here!
- * I had already poisoned the well with "Blegg Mode" the other month, bad decision
- ]
+-----

[TODO: Jessica on corruption—
> I am reminded of someone who I talked with about Zack writing to you and Scott to request that you clarify the category boundary thing. This person had an emotional reaction described as a sense that "Zack should have known that wouldn't work" (because of the politics involved, not because Zack wasn't right). Those who are savvy in high-corruption equilibria maintain the delusion that high corruption is common knowledge, to justify expropriating those who naively don't play along, by narratizing them as already knowing and therefore intentionally attacking people, rather than being lied to and confused.

@@ -499,7 +486,9 @@ One reason someone might be reluctant to correct mistakes when pointed out, is t

I wondered if maybe, in Scott or Eliezer's mental universe, I was a blameworthy (or pitiably mentally ill) nitpicker for flipping out over a blog post from 2014 (!) and some Tweets (!!) from November. Like, really? I, too, had probably said things that were wrong _five years ago_.

-But, well, I thought I had made a pretty convincing that a lot of people are making a correctable and important rationality mistake, such that the cost of a correction (about the philosophy of language specifically, not any possible implications for gender politics) would actually be justified here. If someone had put _this much_ effort into pointing out an error I had made four months or five years ago and making careful arguments for why it was important to get the right answer, I think I _would_ put some serious thought into it rather than brushing them off.
+But, well, I thought I had made a pretty convincing case that a lot of people are making a correctable and important rationality mistake, such that the cost of a correction (about the philosophy of language specifically, not any possible implications for gender politics) would actually be justified here. If someone had put _this much_ effort into pointing out an error _I_ had made four months or five years ago and making careful arguments for why it was important to get the right answer, I think I _would_ put some serious thought into it. 
+ + ] diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index 81e0aac..a145375 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1,14 +1,18 @@ with internet available— +_ link simulacrum posts: Zvi (he has a category), Elizabeth, at least one more from Ben _ Discord logs before Austin retreat _ screenshot Rob's Facebook comment which I link _ 13th century word meanings _ compile Categories references from the Dolphin War Twitter thread +_ weirdly hostile comments on "... Boundaries?" +_ report comment count "Blegg Mode" trainwreck far editing tier— _ clarify why Michael thought Scott was "gaslighting" me, include "beeseech bowels of Christ" _ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off rhetoric" came from) _ address the "maybe it's good to be called names" point from "Hill" thread +_ explain "court ruling" earlier _ 2019 Discord discourse with Alicorner _ edit discussion of "anti-trans" side given that I later emphasize that "sides" shouldn't be a thing _ the right way to explain how I'm respecting Yudkowsky's privacy @@ -29,8 +33,7 @@ _ when to use first _vs. last names _ explain why I'm not being charitable in 2018 thread analysis, that at the time, I thought it had to be a mistake _ January 2019 meeting with Ziz and Gwen _ better summary of Littman - - +_ explain Rob people to consult before publishing, for feedback or right of objection—