The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041.
Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about (like, say, the role of Bayesian reasoning in the philosophy of language). Yudkowsky had written about the [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) and [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing, it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work.
So, a striking thing about my series of increasingly frustrating private conversations and subsequent public Facebook meltdown (the stress from which soon landed me in psychiatric jail, but that's [another](/2017/Mar/fresh-princess/) [story](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/)) was the tendency for some threads of conversation to get _derailed_ on some variation of, "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), a 2014 blog post by Scott Alexander, the _second_ most prominent writer in our robot cult.
This _really_ wasn't what I was trying to talk about; _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory in psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words. Psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—not just as a formal profession of modesty, but _actually_ wrong in the real world.
At first I addressed the philosophy-of-language issue in the object-level context of gender on this blog, in ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) and ["Reply to The Unit of Caring on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/).
Later, after [Eliezer Yudkowsky joined in the mind games on Twitter in November 2018](https://twitter.com/ESYudkowsky/status/1067183500216811521) [(archived)](https://archive.is/ChqYX), I _flipped the fuck out_, and ended up doing more [strictly abstract philosophy-of-language work](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [on](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) [the](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception) [robot](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution)-[cult](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) [blog](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception).
An important thing to appreciate is that the philosophical point I was trying to make has _absolutely nothing to do with gender_. In 2008, Yudkowsky had explained that _for all_ nouns _N_, you can't define _N_ any way you want, because _useful_ definitions need to "carve reality at the joints."

It [_follows logically_](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization) that, in particular, if _N_ := "woman", you can't define the word _woman_ any way you want. Maybe trans women _are_ women! But if so—that is, if you want people to agree to that word usage—you need to be able to _argue_ for why it makes sense on the empirical merits; you can't just _define_ it to be true, and this is a _general_ principle of how language works, not something I made up on the spot in order to stigmatize trans people.
In 2008, the general philosophy-of-language lesson was _not politically controversial_. If, in 2018–present, it _is_ politically controversial (specifically because of the fear that someone will try to apply it with _N_ := "woman"), that's a _problem_ for our whole systematically-correct-reasoning project! What counts as good philosophy—or even good philosophy _pedagogy_—shouldn't depend on the current year!
There is a _sense in which_ one might say that you "can" define a word any way you want. That is: words don't have intrinsic ontologically-basic meanings. We can imagine an alternative world where people spoke a language that was _like_ the English of our world, except that they used the word "tree" to refer to members of the empirical entity-cluster that we call "dogs" and _vice versa_, and it's hard to think of a meaningful sense in which one convention is "right" and the other is "wrong".

But there's also an important _sense in which_ we want to say that you "can't" define a word any way you want. That is: some ways of using words work better for transmitting information from one place to another. It would be harder to explain your observations from a trip to the local park in a language that used the word "tree" to refer to members of _either_ of the empirical entity-clusters that we call "dogs" and "trees", because grouping together things that aren't relevantly similar like that makes it harder to describe differences between the wagging-animal-trees and the leafy-plant-trees.
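
To make the information-transmission point concrete, here's a minimal sketch (a toy example of my own, not anything from the linked posts), assuming a make-believe park populated only by tail-wagging dogs and leafy oaks: it estimates how much information a category label carries about an observable feature under the two naming conventions.

```python
# Toy illustration only: the dataset and numbers are made up for this sketch.
# We measure the mutual information between a category label and an observable
# feature under two labeling conventions.

from collections import Counter
from math import log2

# A tiny world: half the things in the park are dogs, half are oaks.
# Each entity is (kind, wags_tail).
world = [("dog", True)] * 50 + [("oak", False)] * 50

def mutual_information(pairs):
    """I(label; feature) in bits, estimated from empirical frequencies."""
    n = len(pairs)
    p_joint = Counter(pairs)
    p_label = Counter(label for label, _ in pairs)
    p_feat = Counter(feat for _, feat in pairs)
    return sum(
        (c / n) * log2((c / n) / ((p_label[l] / n) * (p_feat[f] / n)))
        for (l, f), c in p_joint.items()
    )

# Convention A (carves at the joints): dogs are "dog", oaks are "tree".
convention_a = [("dog" if kind == "dog" else "tree", wags) for kind, wags in world]

# Convention B (gerrymandered): the word "tree" covers dogs *and* oaks.
convention_b = [("tree", wags) for kind, wags in world]

print(mutual_information(convention_a))  # 1.0 bit: the label predicts tail-wagging perfectly
print(mutual_information(convention_b))  # 0.0 bits: the label tells you nothing
```

The exact numbers don't matter; the point is that in this toy world, the gerrymandered word does no predictive work at all.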

If you want to teach people about the philosophy of language, you want to convey _both_ of these lessons: against naïve essentialism, and against naïve anti-essentialism.

If the people who are widely recognized and trusted as the leaders of the systematically-correct-reasoning community opportunistically choose which of those two lessons to emphasize depending on which is politically convenient at the moment, that's not systematically correct reasoning.

["Firming Up Not Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think"](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly)

_Was_ it a "political" act for me to write about the cognitive function of categorization on the robot-cult blog with non-gender examples, when gender was secretly ("secretly") my _motivating_ example? In some sense, maybe? But the thing you have to realize is—

_Everyone else shot first_. The timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My robot-cult philosophy-of-language blogging (April 2019–January 2021) was a (stealthy) _response to_ Yudkowsky's November 2018 Twitter statements.

When I started trying to talk about autogynephilia with all my robot-cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade.

[TODO:

I got a little bit of pushback due to the perception
_I need the right answer in order to decide whether or not to cut my dick off_—if I were dumb enough to believe Yudkowsky's insinuation that pronouns don't have truth conditions, I might have made a worse decision
Reading the things I do, and talking to the people I do, I see this pattern _over and over and over_ again, where non-exclusively-androphilic trans women will, in the right context, describe experiences that _sound_ a lot like mine—having this beautiful pure sacred self-identity thing about the idea of being female, but also, separately, this erotic thing on the same theme—but then _somehow_ manage to interpret the beautiful pure sacred self-identity thing as an inner "gender" and presumed brain-intersex condition, which I just—can't take seriously. (Even before contrasting to the early-onset type, which is what a brain-intersex condition _actually_ looks like.)
]

All I've been trying to say is that, _in particular_, the word "woman" is such a noun.