From 97886411968a7bf448c4271ad777cbada2af9eff Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Fri, 17 Nov 2023 18:47:16 -0800 Subject: [PATCH] memoir: apply pt. 3 pro edits --- .../if-clarity-seems-like-death-to-them.md | 160 +++++++----------- 1 file changed, 57 insertions(+), 103 deletions(-) diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md index c65483d..7824346 100644 --- a/content/drafts/if-clarity-seems-like-death-to-them.md +++ b/content/drafts/if-clarity-seems-like-death-to-them.md @@ -301,21 +301,21 @@ This is arguably one of my more religious traits. Michael and Kelsey are domain ------- -I wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained, bound by internal silencing-chains. So instead, I mostly turned to a combination of writing [bitter](https://www.greaterwrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare/comment/Nhv9KPte7d5jbtLBv) and [insulting](https://www.greaterwrong.com/posts/tkuknrjYCbaDoZEh5/could-we-solve-this-email-mess-if-we-all-moved-to-paid/comment/ZkreTspP599RBKsi7) [comments](https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE) whenever I saw someone praise the "rationalists" collectively, and—more philosophy blogging! +I wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained. So instead, I mostly turned to a combination of writing [bitter](https://www.greaterwrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare/comment/Nhv9KPte7d5jbtLBv) and [insulting](https://www.greaterwrong.com/posts/tkuknrjYCbaDoZEh5/could-we-solve-this-email-mess-if-we-all-moved-to-paid/comment/ZkreTspP599RBKsi7) [comments](https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE) whenever I saw someone praise the "rationalists" collectively, and—more philosophy blogging! -In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passion mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occured to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).) +In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).) 
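A toy sketch of the tradeoff (mine, not the post's; the "maturity" trait and the noise parameters are invented for illustration):

```python
import random

random.seed(0)

N = 100_000
agree = correct_private = correct_public = 0

for _ in range(N):
    age = random.uniform(10, 26)
    maturity = age + random.gauss(0, 3)  # latent trait, loosely tracked by age
    truth = maturity >= 18

    # Private rule: each agent thresholds its own noisy measurement of the trait.
    a = maturity + random.gauss(0, 1) >= 18
    b = maturity + random.gauss(0, 1) >= 18

    # Schelling rule: everyone applies the same simple public membership test.
    public = age >= 18

    agree += a == b
    correct_private += a == truth
    correct_public += public == truth

print(f"private rule: accuracy {correct_private/N:.2f}, agreement between agents {agree/N:.2f}")
print(f"public test:  accuracy {correct_public/N:.2f}, agreement between agents 1.00")
```

The private rule fits the underlying trait better, but the two agents disagree near the boundary; the simple public test is a statistically worse classifier that everyone can apply identically, which is what makes it worth coordinating on.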
-In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft. (Jessica said that it was obvious even to 10-year-olds that partisan politics distorts impressions by filtering evidence. "[D]o you think we could get a ten-year-old to explain it to Eliezer Yudkowsky?" I asked.)

+In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft. (Jessica said that it was obvious even to ten-year-olds that partisan politics distorts impressions by filtering evidence. "[D]o you think we could get a ten-year-old to explain it to Eliezer Yudkowsky?" I asked.)

-In October 2019's ["Algorithms of Deception!"](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception), I exhibited some toy Python code modeling different kinds of deception. A function that faithfully passes observations it sees as input to another function, lets the second function constructing a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function comes up with a worse probability distribution.

+In October 2019's ["Algorithms of Deception!"](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception), I exhibited some toy Python code modeling different kinds of deception. If a function faithfully passes its observations as input to another function, the second function can construct a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function comes up with a worse probability distribution.
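The post's actual code is at the link; a minimal reconstruction of the idea (my sketch for this memoir, with made-up "red"/"blue" observations and arbitrary omission, fabrication, and relabeling rates) looks like this:

```python
import random

random.seed(0)

def world(n, p_red=0.3):
    return ["red" if random.random() < p_red else "blue" for _ in range(n)]

def honest(obs):
    return obs  # pass everything along

def fabricating(obs):
    return obs + ["red"] * (len(obs) // 4)  # invent extra evidence

def omitting(obs):
    # drop half of the inconvenient evidence
    return [o for o in obs if o == "red" or random.random() < 0.5]

def gerrymandering(obs):
    # stretch the "red" category to swallow some observations that aren't
    return ["red" if o == "blue" and random.random() < 0.2 else o for o in obs]

def receivers_estimate(report):
    # the second function naively treats the report as a fair sample
    return sum(o == "red" for o in report) / len(report)

observations = world(100_000)
for reporter in (honest, fabricating, omitting, gerrymandering):
    estimate = receivers_estimate(reporter(observations))
    print(f"{reporter.__name__:>14}: P(red) = {estimate:.3f}")
```

Only the honest reporter lets the receiver recover the true frequency of 0.3; the receiver's code is the same in every case, and the damage is done upstream by the reporter's algorithm.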
Also in October 2019, in ["Maybe Lying Doesn't Exist"](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist), I replied to Scott Alexander's ["Against Lie Inflation"](https://slatestarcodex.com/2019/07/16/against-lie-inflation/), which was itself a generalized rebuke of Jessica's "The AI Timelines Scam". Scott thought Jessica was wrong to use language like "lie", "scam", _&c._ to describe someone being (purportedly) motivatedly wrong, but not necessarily consciously lying.

-I was _furious_ when "Against Lie Inflation" came out. (Furious at what I perceived as hypocrisy, not because I particularly cared about defending Jessica's usage.) Oh, so _now_ Scott agreed that making language less useful is a problem?! But on further consideration, I realized Alexander actually was being consistent in admitting appeals-to-consequences as legitimate. In objecting to the expanded definition of "lying", Alexander was counting "everyone is angrier" (because of more frequent lying-accusations) as a cost. Whereas on my philosophy, that wasn't a legitimate cost. (If everyone _is_ lying, maybe people _should_ be angry!)

+I was _furious_ when "Against Lie Inflation" came out. (Furious at what I perceived as hypocrisy, not because I particularly cared about defending Jessica's usage.) Oh, so _now_ Scott agreed that making language less useful is a problem?! But on further consideration, I realized he was actually being consistent in admitting appeals to consequences as legitimate. In objecting to the expanded definition of "lying", Alexander was counting "everyone is angrier" (because of more frequent accusations of lying) as a cost. In my philosophy, that wasn't a legitimate cost. (If everyone _is_ lying, maybe people _should_ be angry!)

----

-While visiting "Arcadia" on 7 August 2019, Mike and "Meredith"'s son (age 2¾ years) asked me, "Why are you a boy?"

+While visiting "Arcadia" on 7 August 2019, "Meredith" and Mike's son (age 2¾ years) asked me, "Why are you a boy?"

After a long pause, I said, "Yes," as if I had misheard the question as "Are you a boy?" I think it was a motivated mishearing: it was only after I answered that I consciously realized that's not what the kid asked.

@@ -329,37 +329,37 @@ I continued to note signs of contemporary Yudkowsky not being the same author wh

[I argued that](https://twitter.com/zackmdavis/status/1164259164819845120) the people who smear him as a right-wing Bad Guy do so in order to extract these kinds of statements of political alignment as concessions; his own timeless decision theory would seem to recommend ignoring them rather than paying even this small [Danegeld](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/).

-When I emailed the posse about it begging for Likes (Subject: "can't leave well enough alone"), Jessica said she didn't get my point. If people are falsely accusing you of something (in this case, of being a right-wing Bad Guy), isn't it helpful to point out that the accusation is actually false? It seemed like I was advocating for self-censorship on the grounds that speaking up helps the false accusers. But it also helps bystanders (by correcting the misapprehension), and hurts the false accusers (by demonstrating to bystanders that the accusers are making things up). By [linking to](https://twitter.com/zackmdavis/status/1164259289575251968) ["Kolmogorov Complicity and the Parable of Lightning"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) in my replies, I seemed to be insinuating that Yudkowsky was under some sort of duress, but this wasn't spelled out: if Yudkowsky would face social punishment for advancing right-wing opinions, did that mean he was under such duress that saying anything at all would be helping the oppressors?

+When I emailed the posse about it begging for Likes (Subject: "can't leave well enough alone"), Jessica said she didn't get my point. If people are falsely accusing you of something (in this case, of being a right-wing Bad Guy), isn't it helpful to point out that the accusation is false? It seemed like I was advocating for self-censorship on the grounds that speaking up helps the false accusers.
But it also helps bystanders (by correcting the misapprehension) and hurts the false accusers (by demonstrating to bystanders that the accusers are making things up). By [linking to](https://twitter.com/zackmdavis/status/1164259289575251968) ["Kolmogorov Complicity and the Parable of Lightning"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) in my replies, I seemed to be insinuating that Yudkowsky was under some sort of duress, but this wasn't spelled out: if Yudkowsky would face social punishment for advancing right-wing opinions, did that mean he was under such duress that saying anything at all would be helping the oppressors? The paragraph from "Kolmogorov Complicity" that I was thinking of was (bolding mine): > Some other beliefs will be found to correlate heavily with lightning-heresy. Maybe atheists are more often lightning-heretics; maybe believers in global warming are too. The enemies of these groups will have a new cudgel to beat them with, "If you believers in global warming are so smart and scientific, how come so many of you believe in lightning, huh?" **Even the savvy Kolmogorovs within the global warming community will be forced to admit that their theory just seems to attract uniquely crappy people. It won't be very convincing.** Any position correlated with being truth-seeking and intelligent will be always on the retreat, having to forever apologize that so many members of their movement screw up the lightning question so badly. -I perceived a pattern where people who are in trouble with the orthodoxy feel an incentive to buy their own safety by denouncing other heretics: not just disagreeing with the other heretics because those other heresies are in fact mistaken, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld. +I perceived a pattern where people who are in trouble with the orthodoxy buy their own safety by denouncing other heretics: not just disagreeing with the other heretics because they are mistaken, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld. -Suppose there are five true heresies, but anyone who's on the record believing more than one gets burned as a witch. Then it's [impossible to have a unified rationalist community](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy of categorization right in full generality, even though his writings revealed an implicit understanding of the correct way,[^implicit-understanding] and he and I had a common enemy in the social-justice egregore. He couldn't afford to. He'd already spent his Overton budget [on anti-feminism](https://slatestarcodex.com/2015/01/01/untitled/). +Suppose there are five true heresies, but anyone who's on the record as believing more than one gets burned as a witch. Then it's [impossible to have a unified rationalist community](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. 
That's why Scott Alexander couldn't get the philosophy of categorization right in full generality, even though his writings revealed an implicit understanding of the correct way,[^implicit-understanding] and he and I had a common enemy in the social-justice egregore. He couldn't afford to. He'd already spent his Overton budget [on anti-feminism](https://slatestarcodex.com/2015/01/01/untitled/).

[^implicit-understanding]: As I had [explained to him earlier](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#noncentral-fallacy), Alexander's famous [post on the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world) condemned the same shenanigans he praised in the context of gender identity: Alexander's examples of the noncentral fallacy had largely been arguable edge-cases of a negative-valence category being inappropriately framed as typical (abortion is murder, taxation is theft), but "trans women are women" was the same maneuver with a positive-valence category.

In ["Does the Glasgow Coma Scale exist? Do comas?"](https://slatestarcodex.com/2014/08/11/does-the-glasgow-coma-scale-exist-do-comas/) (published just three months before "... Not Man for the Categories"), Alexander defends the concepts of "comas" and "intelligence" in terms of their predictive usefulness. (The post uses the terms "predict", "prediction", "predictive power", _&c._ 16 times.) He doesn't say that the Glasgow Coma Scale is justified because it makes people happy for comas to be defined that way, because that would be absurd.

-Alexander (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech, and it was tempting to try to jump there. (It would probably be better if there were a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the falling back to the one-heresy-per-thinker equilibrium.)

+Alexander (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech, and it was tempting to try to jump there. (It would probably be better if there were a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.)

-Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad. I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had learned stuff from reading far-right authors [(most notably Mencius Moldbug)](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#unqualified-reservations) and from talking with "Thomas".
I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#less-precise-is-more-violent) (when I was talking about criticizing "rationalists").

+Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad. I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had learned stuff from reading far-right authors [(most notably Mencius Moldbug)](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#unqualified-reservations) and from talking with "Thomas". I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#less-precise-is-more-violent) when I was talking about criticizing "rationalists".

Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky [had said years ago](https://www.lesswrong.com/posts/6qPextf9KyWLFJ53j/why-is-mencius-moldbug-so-popular-on-less-wrong-answer-he-s?commentId=TcLhiMk8BTp4vN3Zs) that Moldbug was low quality.)

-I did believe that Yudkowsky believed that neoreaction was not worth engaging with. I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this-and-such author." I wasn't accusing him of being insincere.

+I did believe that Yudkowsky believed that neoreaction was not worth engaging with. I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this author." I wasn't accusing him of being insincere.

-What I did think was that the need to keep up appearances of not-being-a-right-wing-Bad-Guy was a serious distortion on people's beliefs, because there are at least a few questions of fact where believing the correct answer can, in today's political environment, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to _notice that this is a rationality problem_, and to _not actively make the problem worse_, and I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that the extent to which I think I've learned valuable things from Moldbug, made me less welcome in Yudkowsky's fiefdom.

+What I did think was that the need to keep up appearances of not being a right-wing Bad Guy was a serious distortion of people's beliefs, because there are at least a few questions of fact where believing the correct answer can, in today's political environment, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to notice that this is a rationality problem and to not actively make the problem worse. I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that if I thought I'd learned valuable things from Moldbug, that made me less welcome in Yudkowsky's fiefdom.

-Yudkowsky certainly wouldn't endorse "Even learning things from these people makes you unwelcome" _as stated_, but "I do not welcome support from those quarters" still seemed like a pointlessly partisan silencing/shunning attempt, when one could just as easily say, "I'm not a neoreactionary, and if some people who read me are, that's _obviously not my fault_."
+Yudkowsky certainly wouldn't endorse "Even learning things from these people makes you unwelcome" _as stated_, but "I do not welcome support from those quarters" still seemed like a pointlessly partisan silencing/shunning attempt, when one could just as easily say, "I'm not a neoreactionary, and if some people who read me are, that's obviously not my fault."

Jessica asked if Yudkowsky denouncing neoreaction and the alt-right would still seem harmful, if he were also to acknowledge, _e.g._, racial IQ differences?

-I agreed that it would be helpful, but realistically, I didn't see why Yudkowsky should want to poke the race-differences hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, you either become an [evidence-filtering clever arguer](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence), or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)

+I agreed that it would be helpful, but realistically, I didn't see why Yudkowsky should want to poke the race-differences hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, either you become an [evidence-filtering clever arguer](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence), or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)

-It was as if there was a "Say Everything" attractor, and a "Say Nothing" attractor, and my incentives were pushing me towards the "Say Everything" attractor—but that was only because I had [Something to Protect](/2019/Jul/the-source-of-our-power/) in the forbidden zone and I was a decent programmer (who could therefore expect to be employable somewhere, just as [James Damore eventually found another job](https://twitter.com/JamesADamore/status/1034623633174478849)). Anyone in less extreme circumstances would find themselves being pushed to the "Say Nothing" attractor.

+It was as if there was a "Say Everything" attractor and a "Say Nothing" attractor, and my incentives were pushing me towards the "Say Everything" attractor—but that was only because I had [Something to Protect](/2019/Jul/the-source-of-our-power/) in the forbidden zone and I was a decent programmer (who could therefore expect to be employable somewhere, just as [James Damore eventually found another job](https://twitter.com/JamesADamore/status/1034623633174478849)). Anyone in less extreme circumstances would find themselves pushed toward the "Say Nothing" attractor.

It was instructive to compare Yudkowsky's new disavowal of neoreaction with one from 2013, in response to a _TechCrunch_ article citing former MIRI employee Michael Anissimov's neoreactionary blog _More Right_:[^linkrot]

> "More Right" is not any kind of acknowledged offspring of Less Wrong nor is it so much as linked to by the Less Wrong site. We are not part of a neoreactionary conspiracy. We are and have been explicitly pro-Enlightenment, as such, under that name. Should it be the case that any neoreactionary is citing me as a supporter of their ideas, I was never asked and never gave my consent. [...]
>
> Also to be clear: I try not to dismiss ideas out of hand due to fear of public unpopularity. However I found Scott Alexander's takedown of neoreaction convincing and thus I shrugged and didn't bother to investigate further.

-My "negotiating with terrorists" criticism did not apply to the 2013 statement.
"More Right" _was_ brand encroachment on Anissimov's part that Yudkowsky had a legitimate interest in policing, and the "I try not to dismiss ideas out of hand" disclaimer importantly avoided legitimizing [the McCarthyist persecution](https://www.unqualified-reservations.org/2013/09/technology-communism-and-brown-scare/). +My criticism regarding "negotiating with terrorists" did not apply to the 2013 disavowal. _More Right_ was brand encroachment on Anissimov's part that Yudkowsky had a legitimate interest in policing, and the "I try not to dismiss ideas out of hand" disclaimer importantly avoided legitimizing [McCarthyist persecution](https://www.unqualified-reservations.org/2013/09/technology-communism-and-brown-scare/). The question was, what had specifically happened in the last six years to shift Yudkowsky's opinion on neoreaction from (paraphrased) "Scott says it's wrong, so I stopped reading" to (verbatim) "actively hostile"? Note especially the inversion from (both paraphrased) "I don't support neoreaction" (fine, of course) to "I don't even want _them_ supporting _me_" ([which was bizarre](https://twitter.com/zackmdavis/status/1164329446314135552); humans with very different views on politics nevertheless have a common interest in not being transformed into paperclips). -Did Yudkowsky get new information about neoreaction's hidden Badness parameter sometime between 2013 and 2019, or did moral coercion on him from the left intensify (because Trump and [because Berkeley](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/))? My bet was on the latter. +Did Yudkowsky get new information about neoreaction's hidden Badness parameter sometime between 2013 and 2019, or did moral coercion from the left intensify (because Trump and [because Berkeley](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/))? My bet was on the latter. ------ -However it happened, it didn't seem like the brain damage was limited to "political" topics, either. In November 2019, we saw another example of Yudkowsky destroying language for the sake of politeness, this time the non-Culture-War context of him [trying to wirehead his fiction subreddit by suppressing criticism-in-general](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/). +However it happened, it didn't seem like the brain damage was limited to "political" topics, either. In November 2019, we saw another example of Yudkowsky destroying language for the sake of politeness, this time the context of him [trying to wirehead his fiction subreddit by suppressing criticism-in-general](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/). -That's _my_ characterization, of course: the post itself talks about "reducing negativity". [In a followup comment, Yudkowsky wrote](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/f7fs88l/) (bolding mine): +That's my characterization, of course: the post itself talks about "reducing negativity". [In a followup comment, Yudkowsky wrote](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/f7fs88l/) (bolding mine): > On discussion threads for a work's particular chapter, people may debate the well-executedness of some particular feature of that work's particular chapter. Comments saying that nobody should enjoy this whole work are still verboten. 
**Replies here should still follow the etiquette of saying "Mileage varied: I thought character X seemed stupid to me" rather than saying "No, character X was actually quite stupid."** @@ -387,11 +387,11 @@ But ... "I thought X seemed Y to me"[^pleonasm] and "X is Y" do not mean the sam [^pleonasm]: The pleonasm here ("to me" being redundant with "I thought") is especially galling coming from someone who's usually a good writer! -It might seem like a little thing of no significance—requiring ["I" statements](https://en.wikipedia.org/wiki/I-message) is commonplace in therapy groups and corporate sensitivity training—but this little thing coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space made a pattern click. If everyone is forced to only make claims about their map ("_I_ think", "_I_ feel"), and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because [disagreement is disrespect](https://www.overcomingbias.com/p/disagreement-ishtml)), that's great for reducing social conflict, but it's not great for the kind of collective information processing that accomplishes cognitive work,[^i-statements] like good literary criticism. A rationalist space needs to be able to talk about the territory. +It might seem like a little thing of no significance—requiring ["I" statements](https://en.wikipedia.org/wiki/I-message) is commonplace in therapy groups and corporate sensitivity training—but this little thing coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space made a pattern click. If everyone is forced to only make claims about their map ("_I_ think", "_I_ feel"), and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because [disagreement is disrespect](https://www.overcomingbias.com/p/disagreement-ishtml)), that's great for reducing social conflict, but not for the kind of collective information processing that accomplishes cognitive work,[^i-statements] like good literary criticism. A rationalist space needs to be able to talk about the territory. [^i-statements]: At best, "I" statements make sense in a context where everyone's speech is considered part of the "official record". Wrapping controversial claims in "I think" removes the need for opponents to immediately object for fear that the claim will be accepted onto the shared map. -To be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" is one of the major considerations to be balanced. But I think (_I_ think) it's also fair to note that (as we had seen on _Less Wrong_ earlier that year), lip service is cheap. It's easy to say, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did. +To be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" as one of the major considerations to be balanced. But I think (_I_ think) it's also fair to note that (as we had seen on _Less Wrong_ earlier that year), lip service is cheap. It's easy to say, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did. 
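The next paragraph describes a toy selection model from the post; as a quick check on it, here's a simulation of my own (the normal distributions for the error and threshold terms are arbitrary choices, not anything specified in the post):

```python
import random

random.seed(0)

M = 0.0  # the work's true level of mistakenness
posted, everyone = [], []

for _ in range(100_000):
    E_i = random.gauss(0, 1)  # commenter i's estimation error
    T_i = random.gauss(1, 1)  # commenter i's threshold for bothering to comment
    everyone.append(E_i)
    if M + E_i > T_i:  # the commenter leaves a negative comment
        posted.append(E_i)

mean = lambda xs: sum(xs) / len(xs)
print(f"mean error among all commenters: {mean(everyone):+.2f}")
print(f"mean error among posted comments: {mean(posted):+.2f}")
```

The posted comments' average error comes out well above zero, even though the population's errors average to zero: adverse selection for high _Ei_ and low _Ti_, exactly as claimed.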
"Broadcast criticism is adversely selected for critic errors," Yudkowsky wrote in the post on reducing negativity, correctly pointing out that if a work's true level of mistakenness is _M_, the _i_-th commenter's estimate of mistakenness has an error term of _Ei_, and commenters leave a negative comment when their estimate _M_ + _Ei_ is greater than their threshold for commenting _Ti_, then the comments that get posted will have been selected for erroneous criticism (high _Ei_) and commenter chattiness (low _Ti_). @@ -407,7 +407,7 @@ Yudkowsky claims that criticism should be given in private because then the targ [^communism-analogy]: That is, there's an analogy between economically valuable labor, and intellectually productive criticism: if you accept the necessity of paying workers money in order to get good labor out of them, you should understand the necessity of awarding commenters status in order to get good criticism out of them. -There's a striking contrast between the Yudkowsky of 2019 who wrote the "Reducing Negativity" post, and an earlier Yudkowsky (from even before the Sequences) who maintained [a page on Crocker's rules](http://sl4.org/crocker.html): if you declare that you operate under Crocker's rules, you're consenting to other people optimizing their speech for conveying information rather than being nice to you. If someone calls you an idiot, that's not an "insult"; they're just informing you about the fact that you're an idiot, and you should probably thank them for the tip. (If you _were_ an idiot, wouldn't you be better off knowing rather than not-knowing?) +There's a striking contrast between the Yudkowsky of 2019 who wrote the "Reducing Negativity" post, and an earlier Yudkowsky (from even before the Sequences) who maintained [a page on Crocker's rules](http://sl4.org/crocker.html): if you declare that you operate under Crocker's rules, you're consenting to other people optimizing their speech for conveying information rather than being nice to you. If someone calls you an idiot, that's not an "insult"; they're just informing you about the fact that you're an idiot, and you should probably thank them for the tip. (If you _were_ an idiot, wouldn't you be better off knowing that?) It's of course important to stress that Crocker's rules are opt in on the part of the receiver; it's not a license to unilaterally be rude to other people. Adopting Crocker's rules as a community-level norm on an open web forum does not seem like it would end well. @@ -417,7 +417,7 @@ Appreciation of this obvious normative ideal seems strikingly absent from Yudkow The "Reducing Negativity" post also warns against the failure mode of attempted "author telepathy": attributing bad motives to authors and treating those attributions as fact without accounting for uncertainty or distinguishing observations from inferences. I should be explicit, then: when I say negative things about Yudkowsky's state of mind, like it's "as if he's given up on the idea that reasoning in public is useful or possible", that's a probabilistic inference, not a certain observation. -But I think making probabilistic inferences is ... fine? 
The sentence "Credibly helpful unsolicited criticism should be delivered in private" sure does look to me like text that's likely to have been generated by a state of mind that doesn't believe that reasoning in public is useful or possible.[^criticism-inference] I think that someone who did believe in public reason would have noticed that criticism has information content whose public benefits might outweigh its potential to harm an author's reputation or feelings.[^unhedonic] If you think I'm getting this inference wrong, feel free to let me _and other readers_ know why in the comments.

+But I think making probabilistic inferences is ... fine? The sentence "Credibly helpful unsolicited criticism should be delivered in private" sure does look to me like text generated by a state of mind that doesn't believe that reasoning in public is useful or possible.[^criticism-inference] I think that someone who did believe in public reason would have noticed that criticism has information content whose public benefits might outweigh its potential to harm an author's reputation or feelings.[^unhedonic] If you think I'm getting this inference wrong, feel free to let me _and other readers_ know why in the comments.

[^criticism-inference]: More formally, I'm claiming that the [likelihood ratio](https://arbital.com/p/likelihood_ratio/) P(wrote that sentence|doesn't believe in public reason)/P(wrote that sentence|does believe in public reason) is greater than one.

@@ -425,9 +425,9 @@

-----

-On 3 November 2019, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.

+On 3 November 2019, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, prey animals need to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.
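The standard decision-theoretic formulation of that classic example (my numbers, chosen only to make the asymmetry vivid):

```python
COST_EATEN = 1000.0  # cost of staying put when the rustling was a predator
COST_JUMP = 1.0      # cost of a wasted jump when it was only the wind

def should_jump(p_predator):
    # Jump whenever the expected cost of staying exceeds the cost of jumping.
    return p_predator * COST_EATEN > COST_JUMP

print(should_jump(1 / 500))   # True: even a 0.2% chance justifies jumping
print(should_jump(1 / 5000))  # False: at some point the wind is just wind
```

Note that the hair-trigger lives in the decision rule, not in the probability: a well-calibrated estimate plus an asymmetric cost function is enough to produce the jumpy behavior.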
-I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for what variables you paid attention to, to be motivated by consequences. But given the subspace that's relevant to your interests, you want to run an "epistemically legitimate" clustering algorithm on the data you see there, which depends on the data, not your values. The only reason value-dependent gerrymandered category boundaries seem like a good idea if you're not careful about philosophy is because it's _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences.

+I had thought of the "false positives are better than false negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for consequences to motivate what variables you paid attention to. But given the subspace that's relevant to your interests, you want to run an "epistemically legitimate" clustering algorithm on the data you see there, which depends on the data, not your values. Ideal probabilistic beliefs shouldn't depend on consequences.

Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is because probabilistic reasoning is broadly useful: epistemology can be derived from instrumental concerns. He agreed that severe wireheading issues potentially arise if you allow consequentialist concerns to affect your epistemics.

I didn't immediately have an answer for Abram, but I was grateful for the engagement.

------

-Also in November 2019, I wrote to Ben about how I was still stuck on writing the grief-memoir. My plan had been that it should have been possible to tell the story of the Category War while Glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.

+Also in November 2019, I wrote to Ben about how I was still stuck on writing the grief-memoir. My plan had been to tell the story of the Category War while Glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water).

-The reason it _should_ have been safe to write was because it's good to explain things. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully tell the true story about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."

+The reason it _should_ have been safe to write was because it's good to explain things.
It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to tell the true story about why I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."

-So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was "the nuclear option" in a way that I couldn't credibly cancel with "But I'm just telling a true story about a thing that was important to me that actually happened" disclaimers. If you're slowly-but-surely gaining territory in a conventional war, suddenly escalating to nukes seems pointlessly destructive. This metaphor was horribly non-normative ([arguing is not a punishment](https://srconstantin.github.io/2018/12/15/argue-politics-with-your-best-friends.html); carefully telling a true story _about_ an argument is not a nuke), but I didn't know how to make it stably go away.

+So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was the nuclear option. If you're slowly but surely gaining territory in a conventional war, suddenly escalating to nukes seems pointlessly destructive. This metaphor was horribly non-normative ([arguing is not a punishment](https://srconstantin.github.io/2018/12/15/argue-politics-with-your-best-friends.html); carefully telling a true story _about_ an argument is not a nuke), but I didn't know how to make it stably go away.

-A more motivationally-stable compromise would be to split off whatever generalizable insights that would have been part of the story into their own posts that didn't make it personal. ["Heads I Win, Tails?—Never Heard of Her"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) had been a huge success as far as I was concerned, and I could do more of that kind of thing, analyzing the social stuff I was worried about without making it personal, even if, secretly ("secretly"), it was personal.

+A more motivationally-stable compromise would be to split off whatever generalizable insights would have been part of the story into their own posts. ["Heads I Win, Tails?—Never Heard of Her"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) had been a huge success as far as I was concerned, and I could do more of that kind of thing, analyzing the social stuff without making it personal, even if, secretly ("secretly"), it was personal.

Ben replied that it didn't seem like it was clear to me that I was a victim of systemic abuse, and that I was trying to figure out whether I was being fair to my abuser. He thought if I could internalize that, I would be able to forgive myself a lot of messiness, which would reduce the perceived complexity of the problem.

I said I would bite that bullet: yes! Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right!
"Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have [extenuating circumstances](https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances)" would be a lame excuse. -This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false representations optimized to move resources, I was still sympathetic to the Caliphate-defender's reply that this usage was [motte-and-baileying](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) between different senses of _fraud_. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.) I wanted to do more work to formulate a more precise theory of the psychology of deception to describe exactly how things were messed up a way that wouldn't be susceptible to the motte-and-bailey charge. +This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false representations optimized to move resources, I was still sympathetic to the Caliphate-defender's perspective that this usage of "fraud" was [motte-and-baileying](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) between different senses of the word. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.) I wanted to do more work to formulate a more precise theory of the psychology of deception to describe exactly how things were messed up a way that wouldn't be susceptible to the motte-and-bailey charge. ------- @@ -457,37 +457,19 @@ On 12 and 13 November 2019, Ziz [published](https://archive.ph/GQOeg) [several]( I was tempted to email links to the blog posts to the Santa Rosa _Press-Democrat_ reporter covering the incident (as part of my information-sharing-is-good virtue ethics), but decided to refrain because I predicted that Anna would prefer I didn't. -The main relevance of this incident to my Whole Dumb Story is that Ziz's memoir–manifesto posts included [a 5500 word section about me](https://archive.ph/jChxP#selection-1325.0-1325.4). Ziz portrays me as a slave to social reality, throwing trans women under the bus to appease the forces of cissexism. (I don't think that's what's going on with me, but I can see why the theory was appealing.) I was flattered that someone had so much to say about me, even if I was being portrayed negatively. +The main relevance of this incident to my Whole Dumb Story is that Ziz's memoir–manifesto posts included [a 5500 word section about me](https://archive.ph/jChxP#selection-1325.0-1325.4). Ziz portrays me as a slave to social reality, throwing trans women under the bus to appease the forces of cissexism. I don't think that's what's going on with me, but I can see why the theory was appealing. -------- -I had an interesting interaction with [Somni](https://somnilogical.tumblr.com/), one of the "Meeker Four"—presumably out on bail at this time?—on Discord on 12 December 2019. 
+On 12 December 2019 I had an interesting interaction with [Somni](https://somnilogical.tumblr.com/), one of the "Meeker Four"—presumably out on bail at this time?—on Discord. -I told her, from a certain perspective, it's surprising that she spent so much time complaining about CfAR, Anna Salamon, Kelsey Piper, _&c._, but _I_ seemed to get along fine with her—because naïvely, one would think that my views were so much worse. Was I getting a pity pass because she thought false consciousness was causing me to act against my own transfem class interests? Or what? +I told her it was surprising that she spent so much time complaining about CfAR, Anna Salamon, Kelsey Piper, _&c._, but _I_ seemed to get along fine with her—because naïvely, one would think that my views were so much worse. Was I getting a pity pass because she thought false consciousness was causing me to act against my own transfem class interests? Or what? -In order to be absolutely clear about my terrible views, I said that I was privately modeling a lot of transmisogyny complaints as something like—a certain neurotype-cluster of non-dominant male is latching onto locally-ascendant social-justice ideology in which claims to victimhood can be leveraged into claims to power. Traditionally, men are moral agents, but not patients; women are moral patients, but not agents. If weird non-dominant men aren't respected if identified as such (because low-ranking males aren't valuable allies, and don't have intrinsic moral patiency of women), but _can_ get victimhood/moral-patiency points for identifying as oppressed transfems, that creates an incentive gradient for them to do so, and no one was allowed to notice this except me, because everybody [who's anybody](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) prefers to stay on the good side of social-justice ideology unless they have Something to Protect that requires defying it. +In order to be absolutely clear about my terrible views, I said that I was privately modeling a lot of transmisogyny complaints as something like—a certain neurotype-cluster of non-dominant male is latching onto locally ascendant social-justice ideology in which claims to victimhood can be leveraged into claims to power. Traditionally, men are moral agents, but not patients; women are moral patients, but not agents. If weird non-dominant men aren't respected if identified as such (because low-ranking males aren't valuable allies, and don't have the intrinsic moral patiency of women), but _can_ get victimhood/moral-patiency points for identifying as oppressed transfems, that creates an incentive gradient for them to do so. No one was allowed to notice this except me, because everybody [who's anybody](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) prefers to stay on the good side of social-justice ideology unless they have Something to Protect that requires defying it. -Somni said that it was because I was being victimized by the same forces of gaslighting and that I wasn't lying about my agenda. Maybe she _should_ be complaining about me?—but I seemed to be following a somewhat earnest epistemic process, whereas Kelsey, Scott, and Anna were not. If I were to start going, "Here's my rationality org; rule #1: no transfems (except me); rule #2, no telling people about rule #1", then she would talk about it. +Somni said we got along because I was being victimized by the same forces of gaslighting and wasn't lying about my agenda. 
Maybe she _should_ be complaining about me?—but I seemed to be following a somewhat earnest epistemic process, whereas Kelsey, Scott, and Anna were not. If I were to start going, "Here's my rationality org; rule #1: no transfems (except me); rule #2, no telling people about rule #1", then she would talk about it. -I would later remark to Anna that Somni and Ziz saw themselves as being oppressed by people's hypocritical and manipulative social perceptions and behavior. Merely using the appropriate language ("Somni ... she", _&c._) protected her against threats from the Political Correctness police, but it actually didn't protect against threats from _them_. It was as if the mere fact that I wasn't optimizing for PR (lying about my agenda, as Somni said) was what made me not a direct enemy (although still a collaborator) in their eyes. - --------- - -I had a phone call with Michael in which he took issue with Anna having described Ziz as having threatened to kill Gwen, when that wasn't a fair paraphrase of what Ziz's account actually said.[^ziz-gwen-account] In Michael's view, this was tantamount to indirect attempted murder using the State as a weapon to off her organization's critics: Anna casting Ziz as a Scary Bad Guy in [the improv scene of social reality](https://www.lesswrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web) is the kind of maneuver that contributes to the legal system ruining weird people's lives with spurious charges, because weird gets [cast as villains in the act](https://unstableontology.com/2018/11/17/act-of-charity/). - -[^ziz-gwen-account]: The relevant passage from [one of Ziz's memoir posts](https://archive.ph/an5rp#selection-419.0-419.442) is: - - > I said if they were going to defend a right to be attacking me on some level, and treat fighting back as new aggression and cause to escalate, I would not at any point back down, and if our conflicting definitions of the ground state where no further retaliation was necessary meant we were consigned to a runaway positive feedback loop of revenge, so be it. And if that was true, we might as well try to kill each other right then and there. - - Talking about murder hypothetically as the logical game-theoretic consequence of a revenge spiral isn't the same thing as directly threatening to kill someone. (In context, it's calling a bluff: Ziz is saying that if Gwen was asserting a right to mooch off Ziz, then they might as well kill each other; by _modus tollens_, if they don't kill each other, then Gwen's assertion wasn't serious.) I wasn't sure what exact words Anna had used in her alleged paraphrase; Michael didn't remember the context when I asked him later. - -I told Michael that this made me think I might need to soul-search about having been complicit with injustice, but I couldn't clearly articulate why. - -I figured it out later (Subject: "complicity and friendship"). I think part of my emotional reaction to finding out about Ziz's legal trouble was the hope that it would lead to less pressure on Anna. I had been nagging Anna a lot on the theme of "rationality actually requires free speech", and she would sometimes defend her policy of guardedness on the grounds of (my paraphrase:), "Hey, give me some credit, oftentimes I do take a calculated risk of telling people things. Or I did, but then ... Ziz." - -I think at some level, I was imagining being able to tell Anna, "See, you were so afraid that telling people things would make enemies, and you used Ziz as evidence that you weren't cautious enough. 
But look, Ziz _isn't going to be a problem for you anymore_. Your fear of making enemies actually happened, and you're fine! This is evidence in favor of my view that you were far too cautious, rather than not being cautious enough!" - -But that was complicit with injustice, because the _reason_ I felt that Ziz wasn't going to be a problem for Anna anymore was because Ziz's protest ran afoul of the cops, which didn't have anything to do with the merits of Ziz's claims against Anna. I still wanted Anna to feel safer to speak, but I now realized that more specifically, I wanted Anna to feel safe _because_ Speech can actually win. Feeling safe because one's enemies can be crushed by the state wasn't the same thing. +I would later remark to Anna that Somni and Ziz saw themselves as being oppressed by people's hypocritical and manipulative social perceptions and behavior. Merely using the appropriate language ("Somni ... she", _&c._) protected her against threats from the Political Correctness police, but it actually didn't protect against threats from the Zizians. The mere fact that I wasn't optimizing for PR (lying about my agenda, as Somni said) was what made me not a direct enemy (although still a collaborator) in their eyes. -------- @@ -499,51 +481,25 @@ I also polished and pulled the trigger on ["On the Argumentative Form 'Super-Pro On _Less Wrong_, the mods had just announced [a new end-of-year Review event](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review), in which the best post from the year before would be reviewed and voted on, to see which had stood the test of time and deserved to be part of our canon of cumulative knowledge. (That is, this Review period starting in late 2019 would cover posts published in _2018_.) -This provided me with [an affordance](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review?commentId=d4RrEizzH85BdCPhE) to write some "defensive"[^defensive] posts, critiquing posts that had been nominated for the Best-of-2018 collection that I didn't think deserved such glory. In response to ["Decoupling _vs._ Contextualizing Norms"](https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms) (which had been [cited in a way that I thought obfuscatory during the "Yes Implies the Possibility of No" trainwreck](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/wejvnw6QnWrvbjgns)), I wrote ["Relevance Norms; Or, Grecian Implicature Queers the Decoupling/Contextualizing Binary"](https://www.lesswrong.com/posts/GSz8SrKFfW7fJK2wN/relevance-norms-or-gricean-implicature-queers-the-decoupling), appealing to our [academically standard theory of how context affects meaning](https://plato.stanford.edu/entries/implicature/) to explain why "decoupling _vs._ contextualizing norms" is a false dichotomy. +This provided me with [an affordance](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review?commentId=d4RrEizzH85BdCPhE) to write some posts critiquing posts that had been nominated for the Best-of-2018 collection that I didn't think deserved such glory. 
In response to ["Decoupling _vs._ Contextualizing Norms"](https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms) (which had been [cited in a way that I thought obfuscatory during the "Yes Implies the Possibility of No" trainwreck](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/wejvnw6QnWrvbjgns)), I wrote ["Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary"](https://www.lesswrong.com/posts/GSz8SrKFfW7fJK2wN/relevance-norms-or-gricean-implicature-queers-the-decoupling), appealing to our [academically standard theory of how context affects meaning](https://plato.stanford.edu/entries/implicature/) to explain why "decoupling _vs._ contextualizing norms" is a false dichotomy.

-[^defensive]: Criticism is "defensive" in the sense of trying to _prevent_ new beliefs from being added to our shared map; a critic of an idea "wins" when the idea is not accepted (such that the set of accepted beliefs remains at the _status quo ante_).
+More significantly, in reaction to Yudkowsky's ["Meta-Honesty: Firming Up Honesty Around Its Edge Cases"](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases), I published ["Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think"](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly),[^not-lying-title] explaining why I thought "Meta-Honesty" was relying on an unproductively narrow sense of "honesty", because the ambiguity of natural language makes it easy to deceive people without technically lying.

-More significantly, in reaction to Yudkowsky's ["Meta-Honesty: Firming Up Honesty Around Its Edge Cases"](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases), I published ["Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think"](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly),[^not-lying-title] explaining why I thought "Meta-Honesty" was relying on an unproductively narrow sense of "honesty", because the ambiguity of natural language makes it easy to deceive people in practice without technically lying.
+[^not-lying-title]: The ungainly title was softened from an earlier draft following feedback from the posse; I had originally written "... Surprisingly Useless".

-[^not-lying-title]: The ungainly title was "softened" from an earlier draft following feedback from the posse; I had originally written "... Surprisingly Useless".
+I thought this one cut to the heart of the shocking behavior that we had seen from Yudkowsky lately. The "hill of meaning in defense of validity" affair had been driven by Yudkowsky's obsession with not technically lying, on two levels: he had proclaimed that asking for new pronouns "Is. Not. Lying." (as if _that_ were the matter that anyone cared about—as if conservatives and gender-critical feminists should just pack up and go home after it had been demonstrated that trans people aren't _lying_), and he had seen no interest in clarifying his position on the philosophy of language, because he wasn't lying when he said that preferred pronouns weren't lies (as if _that_ were the matter my posse cared about—as if I should keep honoring him as my caliph after it had been demonstrated that he hadn't _lied_).
But his Sequences had [articulated a higher standard](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument) than merely not-lying. If he didn't remember, I could at least hope to remind everyone else. -I thought this one cut to the heart of the shocking behavior that we had seen from Yudkowsky lately. (Less shocking as the months rolled on, and I told myself to let the story end.) The "hill of meaning in defense of validity" affair had been been driven by Yudkowsky's pathological obsession with not-technically-lying, on two levels: he had proclaimed that asking for new pronouns "Is. Not. Lying." (as if _that_ were the matter that anyone cared about—as if conservatives and gender-critical feminists should just pack up and go home after it had been demonstrated that trans people aren't _lying_), and he had seen no interest in clarifying his position on the philosophy of language, because he wasn't lying when he said that preferred pronouns weren't lies (as if _that_ were the matter that my posse cared about—as if I should keep honoring him as my caliph after it had been demonstrated that he hadn't lied). But his Sequences had [articulated a higher standard](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument) than merely not-lying. If he didn't remember, I could at least hope to remind everyone else. - -I also wrote a little post, ["Free Speech and Triskadekaphobic Calculators"](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to), arguing that it should be easier to have a rationality/alignment community that just does systematically correct reasoning, rather than a politically-savvy community that does systematically correct reasoning except when that would taint AI safety with political drama, analogously to how it's easier to build a calculator that just does correct arithmetic, than a calculator that does correct arithmetic except that it never displays the result 13. In order to build a "[triskadekaphobic](https://en.wikipedia.org/wiki/Triskaidekaphobia) calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute `6 + 7`, but also the infinite family of calculations that included 13 as an intermediate result: if you can't count on `(6 + 7) + 1` being the same as `6 + (7 + 1)`, you lose the associativity of addition. - ------ - -During a phone call around early December 2019, Michael had pointed out that since [MIRI's 2019 fundraiser](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) was going on, and we had information about how present-day MIRI differed from its marketing story, there was a time-sensitive opportunity to reach out to a perennial major donor, whom I'll call "Ethan", and fill him in on what we thought we knew about the Blight. - -On 14 December 2019, I wrote to Jessica and Jack Gallagher, asking how we should organize this. (Jessica and Jack had relevant testimony about working at MIRI, which would be of more central interest to "Ethan" than my story about how the "rationalists" had lost their way.) Michael also mentioned "Tabitha", a lawyer who had been in the MIRI orbit for a long time, as another person to talk to. 
- -About a week later, I apologized, saying that I wanted to postpone setting up the meeting, partially because I was on a roll with my productive blogging spree, and partially for a psychological reason: I was feeling subjective pressure to appease Michael by doing the thing that he explicitly suggested because of my loyalty to him, but that would be wrong, because Michael's ideology said that people should follow their sense of opportunity rather than obeying orders. I might feel motivated to reach out to "Ethan" and "Tabitha" in January. - -Michael said that implied that my sense of opportunity was driven by politics, and that I believed that simple honesty couldn't work; he only wanted me to acknowledge that. I was not inclined to affirm that characterization; it seemed like any conversation with "Ethan" and "Tabitha" would be partially optimized to move money, which I thought was politics. - -Jessica pointed out that "it moves money, so it's political" was erasing the non-zero-sum details of the situation. If people can make better decisions (including monetary ones) with more information, then informing them was pro-social. If there wasn't any better decisionmaking from information to be had, and all speech was just a matter of exerting social pressure in favor of one donation target over another, then that would be politics. - -I agreed that my initial "it moves money so it's political" intuition was wrong. But I didn't think I knew how to inform people about giving decisions in an honest and timely way, because the arguments [written above the bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) were an entire traumatic worldview shift. You couldn't just say "CfAR is fraudulent, don't give to them" without explaining things like ["bad faith is a disposition, not a feeling"](http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/) as prerequisites. I felt more comfortable trying to share the worldview update in January even if it meant the December decision would be wrong, because I didn't know how to affect the December decision in a way that didn't require someone to trust my judgment. - -Michael wrote: - -> That all makes sense to me, but I think that it reduces to "political processes are largely processes of spontaneous coordination to make it impossible to 'just be honest' and thus to force people to engage in politics themselves. In such a situation one is forced to do politics in order to 'just be honest', even if you would greatly prefer not to". -> -> This is surely not the first time that you have heard about situations like that. - -I ended up running into "Ethan" at the grocery store in early 2020, and told him that I had been planning to get in touch with him. (I might have mentioned the general topic, but I didn't want to get into a long discussion at the grocery store.) - -COVID hit shortly thereafter. I never got around to following up. 
+I also wrote a little post, ["Free Speech and Triskaidekaphobic Calculators"](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to), arguing that it should be easier to have a rationality/alignment community that just does systematically correct reasoning than a politically savvy community that does systematically correct reasoning except when that would taint AI safety with political drama, analogous to how it's easier to build a calculator that just does correct arithmetic than a calculator that does correct arithmetic except that it never displays the result 13. In order to build a "[triskaidekaphobic](https://en.wikipedia.org/wiki/Triskaidekaphobia) calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute `6 + 7` but also in the infinite family of calculations that include 13 as an intermediate result: if you can't count on `(6 + 7) + 1` being the same as `6 + (7 + 1)`, you lose the associativity of addition. (A toy sketch of such a calculator appears below.)

------

-On 20 December 2019, Scott Alexander messaged me on Discord—that I shouldn't answer if it would be unpleasant, but that he was thinking about asking about autogynephilia on the next _Slate Star Codex_ survey, and wanted to know if I had any suggestions about question design, or if I could suggest any "intelligent and friendly opponents" to consult for designing the question. After reassuring him that he shouldn't worry about answering being painful for me ("I am actively at war with the socio-psychological forces that make people erroneously think that talking is painful!"), I referred him to my friend [Tailcalled](https://surveyanon.wordpress.com/), who I thought was more qualified than me on both counts. (Tailcalled had a lot of experience running surveys, and ran a "Hobbyist Sexologists" Discord server, which seemed likely to have some friendly opponents.)
+On 20 December 2019, Scott Alexander messaged me on Discord—that I shouldn't answer if it would be unpleasant, but that he was thinking about asking about autogynephilia on the next _Slate Star Codex_ survey, and wanted to know if I had any suggestions about question design, or if I could suggest any "intelligent and friendly opponents" to consult. After reassuring him that he shouldn't worry about answering being unpleasant ("I am actively at war with the socio-psychological forces that make people erroneously think that talking is painful!"), I referred him to my friend [Tailcalled](https://surveyanon.wordpress.com/), who had a lot of experience conducting surveys and ran a "Hobbyist Sexologists" Discord server, which seemed likely to have some friendly opponents.

The next day (I assume while I happened to be on his mind), Scott also [commented on](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=LJp2PYh3XvmoCgS6E) "Maybe Lying Doesn't Exist", my post from back in October replying to his "Against Lie Inflation." I was frustrated with his reply, which I felt was not taking into account points that I had already covered in detail.
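As promised above, here is a minimal Python sketch of the triskaidekaphobic calculator. This is my own illustration, not code from the linked post (the names `add` and `ThirteenError` are invented for the example); the point is just that censoring 13 costs you the associativity of addition, because the same sum succeeds or fails depending on how it happens to be parenthesized.

```python
# Toy "triskaidekaphobic calculator": correct addition, except that it
# refuses to acknowledge any result of 13. (Illustrative sketch only.)

class ThirteenError(Exception):
    """Raised when a computation would have to display 13."""

def add(a, b):
    result = a + b
    if result == 13:
        raise ThirteenError("this calculator does not display 13")
    return result

print(add(6, add(7, 1)))      # 6 + (7 + 1) = 14: computes fine
try:
    print(add(add(6, 7), 1))  # (6 + 7) + 1: fails on the intermediate 13,
except ThirteenError as err:  # even though the final answer would also be 14
    print("error:", err)
```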
A few days later, on the twenty-fourth, I [succumbed to](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=xEan6oCQFDzWKApt7) [the temptation](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=wFRtLj2e7epEjhWDH) [to blow up at him](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=8DKi7eAuMt7PBYcwF) in the comments. -After commenting, I noticed that maybe Christmas Eve wasn't the best time to blow up at someone like that, and added a few more messages to our Discord chat— +After commenting, I noticed what day it was and added a few more messages to our Discord chat— > okay, maybe speech is sometimes painful > the _Less Wrong_ comment I just left you is really mean @@ -558,47 +514,45 @@ And then, as an afterthought— > oh, I guess we're Jewish > that attenuates the "is a hugely inappropriately socially-aggressive blog comment going to ruin someone's Christmas" fear somewhat -Scott messaged back at 11:08 _a.m._ the next morning, Christmas Day. He explained that the thought process behind his comment was that he still wasn't sure where we disagreed, and didn't know how to proceed except to dump his understanding of the philosophy (which would include things I already knew) and hope that I could point to the step I didn't like. He didn't know how to convincingly-to-me demonstrate his sincerity and rebut my accusations of him motivatedly playing dumb (which he was inclined to attribute to the malign influence of Michael Vassar's gang). +Scott messaged back at 11:08 the next morning, Christmas Day. He explained that the thought process behind his comment was that he still wasn't sure where we disagreed, and didn't know how to proceed except to dump his understanding of the philosophy (which would include things I already knew) and hope that I could point to the step I didn't like. He didn't know how to convince me of his sincerity and rebut my accusations of him motivatedly playing dumb (which he was inclined to attribute to the malign influence of Michael Vassar's gang). -I explained that the reason I accused him of being motivatedly dumb was that I _knew_ he knew about strategic equivocation, because he taught everyone else about it (as in his famous posts about [the motte-and-bailey doctrine](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/), or [the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world)). And so when he acted like he didn't get it when I pointed out that this also applied to "trans women are women", that just seemed _implausible_. +I explained that the reason for those accusations was that I _knew_ he knew about strategic equivocation, because he taught everyone else about it (as in his famous posts about [the motte-and-bailey doctrine](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) and [the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world)). And so when he acted like he didn't get it when I pointed out that this also applied to "trans women are women", that just seemed _implausible_. He asked for a specific example. ("Trans women are women, therefore trans women have uteruses" being a bad example, because no one was claiming that.) 
I quoted [an article from _The Nation_](https://web.archive.org/web/20191223235051/https://www.thenation.com/article/trans-runner-daily-caller-terry-miller-andraya-yearwood-martina-navratilova/): "There is another argument against allowing trans athletes to compete with cis-gender athletes that suggests that their presence hurts cis-women and cis-girls. But this line of thought doesn't acknowledge that trans women _are in fact women_." Scott agreed that this was stupid and wrong and a natural consequence of letting people use language the way he was suggesting (!).

-I didn't think it was fair to ordinary people to expect them to go as deep into the philosophy-of-language weeds as I could before being allowed to object to this kind of chicanery. I thought "pragmatic" reasons to not just use the natural clustering that you would get by impartially running a clustering algorithm on the subspace of configuration space relevant to your goals, basically amounted to "wireheading" (optimizing someone's map for looking good rather than reflecting the territory) or "war" (optimizing someone's map to not reflect the territory in order to manipulate them). If I were to transition today and didn't pass as well as Jessica, and everyone felt obligated to call me a woman, they would be wireheading me: making me think my transition was successful, even though it actually wasn't. That's not actually a nice thing to do to a rationalist.
-
-Scott thought that trans people had some weird thing going on in their brains such that being referred to as their natal sex was intrinsically painful, like an electric shock. The thing wasn't an agent, so the [injunction to refuse to give in to extortion](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) didn't apply. Having to use a word other than the one you would normally use in order to not subject someone to painful electric shocks was worth it.
+I didn't think it was fair to ordinary people to expect them to go as deep into the philosophy-of-language weeds as I could before being allowed to object to this kind of chicanery. I thought "pragmatic" reasons to not just use the natural clustering that you would get by impartially running a clustering algorithm on the subspace of configuration space relevant to your goals basically amounted to "wireheading" (optimizing someone's map for looking good rather than reflecting the territory) or "war" (optimizing someone's map to not reflect the territory in order to manipulate them). If I were to transition today and didn't pass as well as Jessica, and everyone felt obligated to call me a woman, they would be wireheading me: making me think my transition was successful, even though it wasn't. That's not a nice thing to do to a rationalist.

-I thought I knew things about the etiology of transness such that I didn't think the electric shock was inevitable, but I didn't want the conversation to go there if it didn't have to, because I didn't have to ragequit the so-called "rationalist" community over a complicated empirical question; I only had to ragequit over bad philosophy.
+Scott thought that trans people had some weird thing going on in their brains such that being referred to as their natal sex was intrinsically painful, like an electric shock. The thing wasn't an agent, so the [injunction to refuse to give in to extortion](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) didn't apply.
Having to use a word other than the one you would normally use in order to avoid subjecting someone to painful electric shocks was worth it.

-Scott said he might agree with me if he thought the world-model-clarity _vs._ utilitarian benefit tradeoff was unfavorable—or if he thought it had the chance of snowballing like in his "Kolmogorov Complicity and the Parable of Lighting".
+I thought I knew things about the etiology of transness such that I didn't think the electric shock was inevitable, but I didn't want the conversation to go there if it didn't have to. I didn't have to ragequit the so-called "rationalist" community over a complicated empirical question, only over bad philosophy. Scott said he might agree with me if he thought the tradeoff between clarity and utilitarian benefit were unfavorable—or if he thought it had the chance of snowballing like in his "Kolmogorov Complicity and the Parable of Lightning".

I pointed out that what sex people are is more relevant to human social life than whether lightning comes before thunder. He said that the problem in his parable was that people were being made ignorant of things, whereas in the transgender case, no one was being kept ignorant; their thoughts were just following a longer path.

I was skeptical of the claim that no one was "really" being kept ignorant. If you're sufficiently clever and careful and you remember how language worked when Airstrip One was still Britain, then you can still think, internally, and express yourself as best you can in Newspeak. But a culture in which Newspeak is mandatory, and all of Oceania's best philosophers have clever arguments for why Newspeak doesn't distort people's beliefs, doesn't seem like a culture that could solve AI alignment.

-I linked to Zvi Mowshowitz's post about how [the claim that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) gets used an excuse to silence people trying to point out the thing: "'Everybody knows' our kind of trans women are sampled from (part of) the male multivariate trait distribution rather than the female multivariate trait distribution, why are you being a jerk and pointing this out?" But I didn't think that everyone knew.[^survey-whether-everyone-knows] I thought the people who sort-of knew were being intimidated into doublethinking around it.
+I linked to Zvi Mowshowitz's post about how [the claim that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) gets used to silence people trying to point out the thing: "'Everybody knows' our kind of trans women are sampled from (part of) the male multivariate trait distribution rather than the female multivariate trait distribution, why are you being a jerk and pointing this out?" But I didn't think that everyone knew.[^survey-whether-everyone-knows] I thought the people who sort-of knew were being intimidated into doublethinking around it.

[^survey-whether-everyone-knows]: On this point, it may be instructive to note that a 2023 survey [found that only 60% of the UK public knew that "trans women" were born male](https://www.telegraph.co.uk/news/2023/08/06/third-of-britons-dont-know-trans-women-born-male/).

-At this point, it was almost 2 _p.m._ (the paragraphs above summarize a larger volume of typing), and Scott mentioned that he wanted to go to the Event Horizon Christmas party, and asked if I wanted to come and continue the discussion there.
I assented, and thanked him for his time; it would be really exciting if we could avoid a rationalist civil war. (I thought my "you need accurate models before you can do utilitarianism" philosophy was also near the root of Ben's objections to the EA movement.) +At this point, it was almost 2 _p.m._ (the paragraphs above summarize a larger volume of typing), and Scott mentioned that he wanted to go to the Event Horizon Christmas party, and asked if I wanted to come and continue the discussion there. I assented, and thanked him for his time; it would be really exciting if we could avoid a rationalist civil war. -When I arrived at the party, people were doing a reading of [the "Hero Licensing" dialogue epilogue](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing) to [_Inadequate Equilibria_](https://equilibriabook.com/toc/), with Yudkowsky himself playing the Mysterious Stranger. At some point, Scott and I retreated upstairs to continue our discussion. By the end of it, I was at least feeling more assured of Scott's sincerity, if not his competence. Scott said he would edit in a disclaimer note at the end of "... Not Man for the Categories". +When I arrived at the party, people were doing a reading of [the "Hero Licensing" dialogue epilogue](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing) to [_Inadequate Equilibria_](https://equilibriabook.com/toc/), with Yudkowsky himself playing the Mysterious Stranger. At some point, Scott and I retreated upstairs to continue our discussion. By the end of it, I was feeling more assured of Scott's sincerity, if not his competence. Scott said he would edit in a disclaimer note at the end of "... Not Man for the Categories". -It would have been interesting if I also got the chance to talk to Yudkowsky for a few minutes, but if I did, I wouldn't be allowed to recount any details of that here due to the privacy norms I'm following in this post. +It would have been interesting if I also got the chance to talk to Yudkowsky for a few minutes, but if I did, I wouldn't be allowed to recount any details of that here due to the privacy norms I'm following. The rest of the party was nice. People were reading funny GPT-2 quotes from their phones. At one point, conversation happened to zag in a way that let me show off the probability fact I had learned during Math and Wellness Month. A MIRI researcher sympathetically told me that it would be sad if I had to leave the Bay Area, which I thought was nice. There was nothing about the immediate conversational context to suggest that I might have to leave the Bay, but I guess by this point, my existence had become a context. -All in all, I was feeling less ragequitty about the rationalists[^no-scare-quotes] after the party—as if by credibly threatening to ragequit, the elephant in my brain had managed to extort more bandwidth from our leadership. The note Scott added to the end of "... Not Man for the Categories" still betrayed some philosophical confusion, but I now felt hopeful about addressing that in a future blog post explaining my thesis that unnatural category boundaries were for "wireheading" or "war", rather than assuming that anyone who didn't get the point from "... Boundaries?" was lying or retarded. +All in all, I was feeling less ragequitty about the rationalists[^no-scare-quotes] after the party—as if by credibly threatening to ragequit, the elephant in my brain had managed to extort more bandwidth from our leadership. The note Scott added to the end of "... 
Not Man for the Categories" still betrayed some philosophical confusion, but I now felt hopeful about addressing that in a future blog post explaining my thesis that unnatural category boundaries were for "wireheading" or "war".

[^no-scare-quotes]: Enough to not even scare-quote the term here.

-It was around this time that someone told me that I wasn't adequately taking into account that Yudkowsky was "playing on a different chessboard" than me. (A public figure focused on reducing existential risk from artificial general intelligence, is going to sense different trade-offs around Kolmogorov complicity strategies, than an ordinary programmer or mere worm focused on _things that don't matter_.) No doubt. But at the same time, I thought Yudkowsky wasn't adequately taking into account the extent to which some of his longtime supporters (like Michael or Jessica) were, or had been, counting on him to uphold certain standards of discourse (rather than chess)?
+It was around this time that someone told me that I wasn't adequately taking into account that Yudkowsky was "playing on a different chessboard" than me. (A public figure focused on reducing existential risk from artificial general intelligence is going to sense different trade-offs around Kolmogorov complicity strategies than an ordinary programmer or mere worm focused on _things that don't matter_.) No doubt. But at the same time, I thought Yudkowsky wasn't adequately taking into account the extent to which some of his longtime supporters (like Michael or Jessica) were, or had been, counting on him to uphold certain standards of discourse (rather than chess)?

Another effect of my feeling better after the party was that my motivation to keep working on my memoir of the Category War vanished—as if I was still putting weight on a [zero-sum frame](https://unstableontology.com/2019/09/10/truth-telling-is-aggression-in-zero-sum-frames/) in which the memoir was a nuke that I only wanted to use as an absolute last resort. Ben wrote (Subject: "Re: state of Church leadership"):

> It seems to [me] that according to Zack's own account, even writing the memoir _privately_ feels like an act of war that he'd rather avoid, not just using his own territory as he sees fit to create _internal_ clarity around a thing.
>
> I think this has to mean _either_
> (a) that Zack isn't on the side of clarity except pragmatically where that helps him get his particular story around gender and rationalism validated
@@ -609,7 +563,7 @@ Or, I pointed out, (c) I had ceded the territory of the interior of my own mind

"Riley" reassured me that finishing the memoir privately would be clarifying and cathartic _for me_. If people in the Caliphate came to their senses, I could either not publish it, or give it a happy ending where everyone comes to their senses.

-(It does not, actually, have a happy ending where everyone comes to their senses.)
+(It does not have a happy ending where everyone comes to their senses.)

------
-- 
2.17.1