diff --git a/notes/i-tell-myself-sections.md b/notes/i-tell-myself-sections.md
index 7fbd40b..cd02ad1 100644
--- a/notes/i-tell-myself-sections.md
+++ b/notes/i-tell-myself-sections.md
@@ -99,6 +99,10 @@ Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly
 But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.
 
+If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not directly depend on being able to think clearly about trans issues. In the same way, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine).
+
+The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2019 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialist algorithm that counts unpopularity as a cost) also can't get heliocentrism right in 1632 _for the same reason_—and I really doubt it can get AI alignment theory right in 2039.
+
 If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where the community doesn't exist at all.
 
 -----
@@ -184,3 +188,6 @@ I don't doubt Serano's report of her own _experiences_. But "it became obvious t
 -----
 
 [You "can't" define a word any way you want, or you "can"—what actually matters is the math]
+
+-----
+