From: M. Taylor Saotome-Westlake
Date: Mon, 31 Oct 2022 02:04:12 +0000 (-0700)
Subject: memoir: reducing negativity (to § end)
X-Git-Url: http://unremediatedgender.space/source?p=Ultimately_Untrue_Thought.git;a=commitdiff_plain;h=c9b6c312d12646085b168082562fa075c7956b5e

memoir: reducing negativity (to § end)

I was having trouble getting started today, but I think I finally got some flow and did some real thinking?!
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 868b6e8..7d8e5f2 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -853,7 +853,7 @@ Did Yudkowsky get _new information_ about neoreaction's hidden Badness parameter

However it happened, it didn't seem like the brain damage was limited to "political" topics, either. In November, we saw another example of Yudkowsky destroying language for the sake of politeness, this time in the non-Culture-War context of him [_trying to wirehead his fiction subreddit by suppressing criticism-in-general_](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/).

-That's _my_ characterization, of course: the post itself is about "reducing negativity". [In a comment, Yudkowsky wrote](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/f7fs88l/) (bolding mine):
+That's _my_ characterization, of course: the post itself talks about "reducing negativity". [In a followup comment, Yudkowsky wrote](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/f7fs88l/) (bolding mine):

> On discussion threads for a work's particular chapter, people may debate the well-executedness of some particular feature of that work's particular chapter. Comments saying that nobody should enjoy this whole work are still verboten. **Replies here should still follow the etiquette of saying "Mileage varied: I thought character X seemed stupid to me" rather than saying "No, character X was actually quite stupid."**

@@ -863,28 +863,33 @@ But ... "I thought X seemed Y to me"[^pleonasm] and "X is Y" _do not mean the sa

It might seem like a little thing of no significance—requiring "I" statements is commonplace in therapy groups and corporate sensitivity training—but this little thing _coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space_ made a pattern click. If everyone is forced to only make narcissistic claims about their map ("_I_ think", "_I_ feel"), and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because [disagreement is disrespect](http://www.overcomingbias.com/2008/09/disagreement-is.html)), that's great for reducing social conflict, but it's not great for the kind of collective information processing that actually accomplishes cognitive work, like good literary criticism. A rationalist space _needs to be able to talk about the territory_.

-I understand that Yudkowsky wouldn't agree with that characterization: to be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" is one of the major considerations to be balanced. But I think (_I_ think) it's also fair to note that (as we had seen on _Less Wrong_ earlier that year), lip service is cheap. It's easy to _say_, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did.
+I understand that Yudkowsky wouldn't agree with that characterization, and to be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" as one of the major considerations to be balanced. But I think (_I_ think) it's also fair to note that (as we had seen on _Less Wrong_ earlier that year), lip service is cheap. It's easy to _say_, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did.

-[TODO—
+"Broadcast criticism is adversely selected for critic errors," Yudkowsky wrote in the post on reducing negativity, correctly pointing out that if a work's true level of mistakenness is _M_, the _i_-th commenter's estimate of mistakenness has an error term of _Ei_, and commenters leave a negative comment when their estimate _M_ + _Ei_ is greater than their threshold for commenting _Ti_, then the comments that get posted will have been selected for erroneous criticism (high _Ei_) and commenter chattiness (low _Ti_).

-"Broadcast criticism is adversely selected for critic errors", Yudkowsky says in the post on reducing negativity, correctly pointing out that if a work's true level of [finish math]
+I can imagine some young person who really liked _Harry Potter and the Methods_ being intimidated by the math notation, and uncritically accepting this wisdom from the great Eliezer Yudkowsky as a reason to be less critical, specifically. But a somewhat less young person who isn't intimidated by math should notice that the math here is just [regression to the mean](https://en.wikipedia.org/wiki/Regression_toward_the_mean). The same argument applies to praise! (The toy simulation below checks both claims.)

- * I can imagine some young person who really liked _Harry Potter and the Methods_ being intimidated by the math notation,
- * But a somewhat less young person
- * I would expect a real rationality teach to teach the general lesson, "model selection effects"
+What I would hope for from a rationality teacher and a rationality community would be efforts to instill the _general_ skill of modeling things like regression to the mean and selection effects, as part of the general project of having a discourse that does collective information-processing.

-"Credibly helpful unsolicited criticism should be delivered in private", says Yudkowsky.
+And from the way Yudkowsky writes these days, it looks like he's ... not interested in collective information-processing? Or like he doesn't actually believe that's a real thing? "Credibly helpful unsolicited criticism should be delivered in private," he writes! I agree that the positive purpose of public criticism isn't solely to help the author. (If it were, there would be no reason for anyone but the author to read it.) But readers _do_ benefit from insightful critical commentary. (If they didn't, why would they read the comments section?) When I read a story, and am interested in reading the comments _about_ a story, it's because _I want to know what other readers were actually thinking about the work_. I don't _want_ other people to self-censor comments on any plot holes or [Fridge Logic](https://tvtropes.org/pmwiki/pmwiki.php/Main/FridgeLogic) they noticed for fear of dampening someone else's enjoyment or hurting the author's feelings.
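+
+To make the selection effect concrete, here's a toy simulation of the model (a sketch: the specific numbers, the normal distributions, and the praise threshold are all arbitrary choices of mine, not anything from Yudkowsky's post): each prospective commenter draws an error term, and posts criticism when their estimate _M_ + _Ei_ clears their commenting threshold, or praise when it falls below a praise threshold.
+
+```python
+import random
+import statistics
+
+random.seed(2019)
+
+M = 0.4      # the work's true level of mistakenness (made up)
+T_NEG = 0.7  # mean threshold for bothering to post criticism (made up)
+T_POS = 0.1  # mean threshold for bothering to post praise (made up)
+
+criticism_errors, praise_errors = [], []
+for _ in range(100_000):  # potential commenters
+    e_i = random.gauss(0, 0.2)    # this commenter's error term E_i
+    estimate = M + e_i            # their estimate of the work's mistakenness
+    if estimate > random.gauss(T_NEG, 0.1):    # posts a negative comment
+        criticism_errors.append(e_i)
+    elif estimate < random.gauss(T_POS, 0.1):  # posts praise
+        praise_errors.append(e_i)
+
+# The criticism that gets posted overestimates M (mean error comes out
+# positive), and the praise that gets posted underestimates it (mean
+# error negative): the same selection mechanism, pointed both ways.
+print(f"mean error of posted criticism: {statistics.mean(criticism_errors):+.2f}")
+print(f"mean error of posted praise:    {statistics.mean(praise_errors):+.2f}")
+```
+
+Selection on _M_ + _Ei_ cuts both ways: the broadcast criticism _and_ the broadcast praise are both adversely selected for critic errors, which is why the general lesson to teach is "model selection effects", not "discount criticism, specifically".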
- * I agree that public criticism isn't meant to solely help the author (because if it were, there would be no reason for anyone but the author to read it)
- * But other readers also benefit!
- * And if you're going to talk about incentives, you _want_ people to be rewarded for making good criticism
+Yudkowsky claims that criticism should be given in private because then the target "may find it much more credible that you meant only to help them, and weren't trying to gain status by pushing them down in public." I'll buy this as a reason why credibly _altruistic_ unsolicited criticism should be delivered in private. Indeed, meaning _only_ to help the target just doesn't seem like a plausible critic motivation in most cases. But the fact that critics typically have non-altruistic motives doesn't mean criticism isn't helpful. In order to incentivize good criticism, you _want_ people to be rewarded with status for making good criticisms! You'd have to be some sort of communist to disagree with this.

-Crocker's rules
+There's a striking contrast between the Yudkowsky of 2019 who wrote the "Reducing Negativity" post and an earlier Yudkowsky (from even before the Sequences) who maintained [a page on Crocker's rules](http://sl4.org/crocker.html): if you declare that you operate under Crocker's rules, you're consenting to other people optimizing their speech for conveying information rather than being nice to you. If someone calls you an idiot, that's not an "insult"; they're just informing you about the fact that you're an idiot, and you should plausibly thank them for the tip. (If you _were_ an idiot, wouldn't you be better off knowing rather than not-knowing?)

- * it's true and important that Crocker's rules were meant to be declared by the speaker; it's not a license to be mean to other people who might not want that
- * But there's still something special about a culture that has "Crocker's rules" as an available concept, that's completely absent from modern Yudkowsky
+It's of course important to stress that Crocker's rules are _opt-in_ on the part of the _receiver_; it's not a license to unilaterally be rude to other people. Adopting Crocker's rules as a community-level norm on an open web forum does not seem like it would end well.

-]
+Still, there's something precious about a culture where people appreciate the _obvious normative ideal_ underlying Crocker's rules, even if social animals can't reliably live up to it. Speech is for conveying information. People can say things—even things about me or my work—not as a command, or as a reward or punishment, but just to establish a correspondence between words and the world: a map that reflects a territory.
+
+Appreciation of this obvious normative ideal seems almost entirely absent from Yudkowsky's modern work—as if he's given up on the idea that reasoning in public is useful or possible.
+
+The "Reducing Negativity" post also warns against the failure mode of attempted "author telepathy": _attributing_ bad motives to authors and treating those attributions as fact without accounting for uncertainty or distinguishing observations from inferences. I should be explicit, then: when I say negative things about Yudkowsky's state of mind (that it's "as if he's given up on the idea that reasoning in public is useful or possible"), that's definitely an inference, not an observation. I definitely don't think Yudkowsky _thinks of himself_ as having given up on Speech _in those words_.
+
+Why attribute motives to people that they don't attribute to themselves, then? Because I need to, in order to make sense of the world. Words aren't imbued with intrinsic "meaning"; just to _interpret_ text entails building some model of the mind on the other side.
+
+The text that Yudkowsky emitted in 2007–2009 made me who I am. The text that Yudkowsky has emitted since at least March 2016 _looks like_ it's being generated by a different and _much less trustworthy_ process. According to the methods I was taught in 2007–2009, I have a _duty_ to notice the difference, and try to make sense of the change—even if I'm not a superhuman neuroscience AI and have no hope of getting it right in detail. And I have a right to try to describe the change I'm seeing to you.
+
+_Good_ criticism is hard. _Accurately_ inferring authorial ["intent"](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) is much harder. There is certainly no shortage of bullies in the world eager to make _bad_ criticism or _inaccurately_ infer authorial intent in order to achieve their social goals. But I don't think that's a good reason to give up on _trying_ to do good criticism and accurate intent-attribution. If there's any hope for humans to think together and work together, it has to go through distinguishing good criticism from bad criticism, and treating them differently. Suppressing criticism-in-general is intellectual suicide.

-----

diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index c89d969..c968487 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -38,6 +38,10 @@ _ more examples of Yudkowsky's arrogance

far editing tier—
+_ mention Said rigor check somewhere, nervousness about Michael's gang being a mini-egregore
+_ at some point, speculate on causes of brain damage
+_ the "reducing negativity" post does obliquely hint at the regression point being general ("Let's say that the true level of negativity"), does that seriously undermine my thesis, or does it only merit a footnote?
+_ worth footnoting the "some sort of communist" joke?
_ pull "agreeing with Stalin" quote earlier in ms. to argue that Yudkowsky apparently doesn't disagree with my "deliberately ambiguous"
_ elaborate on why I'm not leaking sensitive bits, by explaining what can be inferred by what was and wasn't said in public
_ footnote on "no one would even consider"
@@ -1212,6 +1216,7 @@ If you _don't_ have intent-to-inform, but make sure to never, ever say false thi

bitter comments about rationalists—
https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE
+https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare?commentId=Nhv9KPte7d5jbtLBv
https://www.greaterwrong.com/posts/tkuknrjYCbaDoZEh5/could-we-solve-this-email-mess-if-we-all-moved-to-paid/comment/ZkreTspP599RBKsi7

------

@@ -1482,8 +1487,6 @@ Alicorn writes (re Kelsey's anorexia): "man it must be so weird to have a delusi

what's really weird is having a delusion, knowing it's a delusion, and _everyone else_ insists your delusion is true ... and I'm not allowed to say that without drawing on my diplomacy budget, which puts a permanent distance between me and the group

-you can't imagine contemporary Yudkowsky adhering to Crocker's rules (http://sl4.org/crocker.html)
-
(If you are silent about your pain, _they'll kill you and say you enjoyed it_.)
4 levels of intellectual conversation https://rationalconspiracy.com/2017/01/03/four-layers-of-intellectual-conversation/