From 05fdbe3776ba25f3c8ae93aeff0c82d04e0b43a3 Mon Sep 17 00:00:00 2001 From: "M. Taylor Saotome-Westlake" Date: Sun, 1 Sep 2019 11:50:38 -0700 Subject: [PATCH] check in --- ...fect-sizes-of-cognitive-sex-differences.md | 2 +- .../i-tell-myself-to-let-the-story-end.md | 131 +------------- notes/i-tell-myself-notes.txt | 162 ++++++++++++++++++ notes/post_ideas.txt | 29 ++-- notes/tweet_pad.txt | 2 + 5 files changed, 185 insertions(+), 141 deletions(-) create mode 100644 notes/i-tell-myself-notes.txt diff --git a/content/drafts/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences.md b/content/drafts/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences.md index 2e34f0b..979425c 100644 --- a/content/drafts/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences.md +++ b/content/drafts/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences.md @@ -52,7 +52,7 @@ print(naïve_d) # 0.8953395386313235 But doesn't a similar argument hold for non-error sources of variance that are "orthogonal" to the group difference? (Sorry, I know this is vague; I'm writing to the list in case any Actual Scientists can spare a moment to help me make my intuition more precise.) Like, suppose performance on some particular cognitive task can be modeled as the sum of the general intelligence factor (zero or negligible sex difference[^jensen]), and a special ability factor that does show sex differences. Then, even with zero _measurement_ error, _d_ would underestimate the difference between women and men _of the same general intelligence_. -[^jensen]: Arthur Jensen, _The g Factor_, Chapter 13 +[^jensen]: Arthur Jensen, _The g Factor_, Chapter 13: "Although no evidence was found for sex differences in the mean level of _g_ or in the variability of _g_, there is clear evidence of marked sex differences in group factors and in test specificity.
Males, on average, excel on some factors; females on others. [...] But the best available evidence fails to show a sex difference in _g_." ```python def performance(g, σ_g, s, n): diff --git a/content/drafts/i-tell-myself-to-let-the-story-end.md b/content/drafts/i-tell-myself-to-let-the-story-end.md index 33e9c2c..85dc84d 100644 --- a/content/drafts/i-tell-myself-to-let-the-story-end.md +++ b/content/drafts/i-tell-myself-to-let-the-story-end.md @@ -1,7 +1,7 @@ Title: "I Tell Myself to Let the Story End"; Or, A Hill of Validity in Defense of Meaning Date: 2020-01-01 -Category: commentary -Tags: personal +Category: other +Tags: personal, my robot cult Status: draft > _And I tell myself to let the story end > > —Sara Bareilles, ["Gonna Get Over You"](https://genius.com/Sara-bareilles-gonna-get-over-you-lyrics) -I mostly haven't been doing so well for the past eight months or so. I've been reluctant to write about it in too much detail for poorly-understood psychological reasons. Maybe it feels too much like attacking my friends? Maybe I'm not sure how much I can say without leaking too much information from private conversations? But I need to write _something_—not to attack anyone or spill anyone's secrets, but just to _tell the truth_ about why I've been wasting stretches of days in _constant emotional pain_ all year. For my own healing, for my own sanity. - -So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _absolute truthseeking_. Sure, anyone will _say_ that their beliefs are true, but you can tell most people aren't being very serious about it.
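Backing up to the effect-sizes draft above: the deflation argument there — that a task score which is the sum of a sex-neutral _g_ factor and a sex-differentiated special factor yields a Cohen's _d_ smaller than the difference between men and women of equal _g_ — can be checked with a quick simulation. The draft's own `performance(g, σ_g, s, n)` helper is truncated in this patch, so the sketch below is a stand-in with hypothetical parameter values (`n`, `delta`, `sigma_s` are assumptions, not the draft's actual numbers):

```python
import numpy as np

rng = np.random.default_rng(2019)

# Hypothetical parameters (assumptions for illustration only):
n = 100_000     # simulated sample size per group
delta = 1.0     # mean sex difference on the special ability factor s
sigma_s = 0.5   # standard deviation of s

# General intelligence g: identical distribution in both groups.
g_f = rng.normal(0.0, 1.0, n)
g_m = rng.normal(0.0, 1.0, n)

# Special ability factor s: the only source of the group difference.
s_f = rng.normal(-delta / 2, sigma_s, n)
s_m = rng.normal(+delta / 2, sigma_s, n)

# Observed task performance, with zero measurement error.
perf_f = g_f + s_f
perf_m = g_m + s_m

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled_sd

naive_d = cohens_d(perf_f, perf_m)   # d computed on raw performance
within_g_d = cohens_d(s_f, s_m)      # d holding g fixed

print(naive_d, within_g_d)
```

Under these assumed parameters the naïve _d_ comes out near `delta / np.sqrt(1 + sigma_s**2)` ≈ 0.89, while the _g_-conditional _d_ is about `delta / sigma_s` = 2.0: the shared _g_ variance inflates the pooled standard deviation and so deflates the standardized group difference, exactly as the draft conjectures.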
_We_ were going to be serious: starting with the shared canon of knowledge of cognitive biases, reflectivity, and Bayesian probability theory bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/), to forge and refine a new mental martial art of _systematically correct reasoning_ that we were going to use to optimize ourselves and the world. - -(Oh, and there was also this part about how the uniquely best thing for non-math-geniuses to do with their lives was to earn lots of money and donate it to our founder's nonprofit dedicated to building a recursively self-improving artificial superintelligence to take over the world in order to save our entire future light cone from the coming robot apocalypse. That part's complicated.) - -I guess I feel pretty naïve now, but—I _actually believed our own propaganda_. I _actually thought_ we were doing something new and special of historical and possibly even _cosmological_ significance. - -And so when I moved to "Portland" (which is actually Berkeley) in 2016, met a lot of trans women in real life for the first time, and did some more reading that convinced me of the at-least-approximate-correctness of the homosexual/autogynephilic two-types theory of MtF transgenderism that I had previously assumed was false (while being privately grateful that [there was a _word_ for my thing](/2017/Feb/a-beacon-through-the-darkness-or-getting-it-right-the-first-time/)) because everyone _said_ it was false - - - -We're all about, like, science and rationality and stuff, right?
And so if there's a theory that's been sitting in the psychology literature for twenty years, that looks _correct_ (or at least, ah, [less wrong](https://tvtropes.org/pmwiki/pmwiki.php/Main/TitleDrop) than the mainstream view), that's _directly_ relevant to a _lot_ of people around here, that seems like the sort of thing - -https://www.lesswrong.com/posts/9KvefburLia7ptEE3/the-correct-contrarian-cluster - -I confess that I _may have [overreacted](/2017/Mar/fresh-princess/) [somewhat](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/)_ when people weren't converging (or [even engaging](/2017/Jan/im-sick-of-being-lied-to/)) with me on the two-types/autogynephilia thing. Psychology is a genuinely difficult empirical science - -I would _never_ write someone off for disagreeing with me about a complicated empirical thing, because complicated empirical things are complicated enough that I have to [take the Outside View seriously](https://www.overcomingbias.com/2007/07/beware-the-insi.html): no matter how "obvious" I think my view is, I might still be wrong for real in real life. So, while I was pretty upset for my own idiosyncratic personal reasons, it wasn't cause to _give up entirely on the dream of a rationality community_. - -A.T. 
and R.B.'s Facebook comments - -emphasize that the categories-are-relative thing is an important grain of truth, but that I expect _us_ to see deeper into the Bayes-structure - -this is _really basic shit_ - - - - - -The way this is supposed to work is that you just make your arguments and trust that good arguments will outcompete bad ones; emailing people begging for a clarification is kind of rude and I want to acknowledge the frame in which I'm the bad guy (or pitiably mentally ill)—but I was taught that arguing with people when they're doing something wrong is actually doing them a _favor_—I was taught that it's virtuous to make an extraordinary effort - -bad-faith nitpicker—I would be annoyed if someone repeatedly begged me to correct a mistake I made in a blog post from five years ago or a Tweet from November - -Losing faith in guided-by-the-beauty-of-our-weapons - -https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science -http://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/ - -"I ought to accept ... within the conceptual boundaries" is a betrayal of everything we stand for - -(I don't consider "if it'll save someone's life" to be a compelling consideration here, for the same reason that "What if Omega punishes all agents who don't choose the alphabetically first option?" doesn't seem like a compelling argument against timeless decision theory. Specifying punishments for agents that don't follow a particular ritual of cognition doesn't help us understand the laws that make intelligence work.)
- -when _yet another_ (higher-profile, but more careful, this time only committing the error by Gricean implicature rather than overtly—if you're being careful to disclaim most obvious misinterpretations) person committed the fallacy, I _flipped out_ - -The sheer number of hours we invested in this operation is _nuts_: desperate all-out effort, arguing over email with two ppl who were higher-status than me and fighting an entire Discord server three times, $1000, three postcards - -what about _my_ mental health? - -Men who wish they were women do not particularly resemble actual women? We just—don't? This seems kind of obvious, really? - -my friend thinks I'm naive to have expected such a community—she was recommending "What You Can't Say" in 2009—but in 2009, we did not expect that _whether or not I should cut my dick off_ would _become_ a politicized issue, which is new evidence about the wisdom of the original vision - -but if my expectations (about the community) were wrong, that's a problem with my model; reality doesn't have to care - -it's naive to think you can win against an egregore 1000 times bigger than you - -MASSIVE cognitive dissonance, "What? What???" - -the Church - -won't you be embarrassed to leave if we create utopia - -invent a fake epistemology lesson - -we live in a world where reason doesn't work - -_not_ gaslight me about the most important thing in my life? - -I don't think I'm setting my price for joining particularly high here? - -if you're doing systematically correct reasoning, you should be able to get the right answer even on things that don't matter - -There could be similarly egregious errors that I'm not as sensitive to - -I don't think you can build an aligned superintelligence from a culture this crazy - -it's not fair to expect ordinary people to understand the univariate fallacy before they're allowed to say "men aren't women" - -maybe S.
was trying to talk about "legal fiction" categories, but I'm trying to talk about epistemology, and that's a probable reading when you say "categories" - -Hansonian mental illness that people should be less accommodating of - -there has to be a way to apply a finite amount of effort to _correct_ errors, and possibly even profit from correcting errors - -(I'm not making this up! I _couldn't_ make this up!) - -the frame in which I'm - -outside view hobby-horse - -standards - -cognitive dissonance - -smart people clearly know - -free-speech norms - -I'll be alright. Just not tonight. - -maybe 50,000 years from now we'll all be galactic superminds and laugh about all this - -(probably tell the story with no external links, only this-blog links) - -I'll be alright. Just not tonight. But someday. - -Avoiding politically-expensive topics is fine! Fabricating bad epistemology to support a politically-convenient position and then not retracting it after someone puts a lot of effort into explaining the problem is not OK. - -I guess this issue is that the mob thinks that arguments are soldiers and doesn't understand local validity, and if you're trying to appease the mob, you're not even allowed to do the local-validity "This is a bad argument, but the conclusion might be true for other reasons" thing? - - -I think a non-stupid reason is that the way I talk has actually been trained really hard on this subculture for ten years: most of my emails during this whole campaign have contained multiple Sequences or Slate Star Codex links that I can just expect people to have read. I can spontaneously use the phrase "Absolute Denial Macro" in conversation and expect to be understood. That's a massive "home field advantage."
If I just give up on "rationalists" being as sane as we were in 2009 (when we knew that men can't become women by means of saying so), and go out in the world to make intellectual friends elsewhere (by making friends with Quillette readers or arbitrary University of Chicago graduates), then I lose all that accumulated capital. The language I speak is mostly educated American English, but I rely on subculture dialect for a lot. My sister has a chemistry doctorate from MIT (so speaks the language of STEM intellectuals generally), and when I showed her "... To Make Predictions", she reported finding it somewhat hard to read, probably because I casually use phrases like "thus, an excellent motte", and expect to be understood without the reader taking 10 minutes to read the link. This essay, which was me writing from the heart in the words that came most naturally to me, could not be published in Quillette. The links and phraseology are just too context-bound. - -Berkeley "rationalists" are very good at free speech norms and deserve credit for that! But it still feels like a liberal church where you can say "I believe in evolution" without getting socially punished. Like, it's good that you can do that. But I had a sense that more is possible: a place where you can not just not-get-punished for being an evolutionist, but a place where you can say, "Wait! Given all this evidence for natural selection as the origin of design in the biological world, we don't need this 'God' hypothesis anymore. And now that we know that, we can work out whatever psychological needs we were trying to fulfil with this 'church' organization, and use that knowledge to design something that does an even better job at fulfilling those needs!" and have everyone just get it, at least on the meta level. - -I can accept a church community that disagrees on whether evolution is true. (Er, on the terms of this allegory.) 
I can accept a church community that disagrees on what the implications are conditional on the hypothesis that evolution is true. I cannot accept a church in which the canonical response to "Evolution is true! God isn't real!" is "Well, it depends on how you choose to draw the 'God' category boundary." I mean, I agree that words can be used in many ways, and that the answer to questions about God does depend on how the asker and answerer are choosing to draw the category boundary corresponding to the English language word 'God'. That observation can legitimately be part of the counterargument to "God isn't real!" But if the entire counterargument is just, "Well, it depends on how you define the word 'God', and a lot of people would be very sad if we defined 'God' in a way such that it turned out to not exist" ... unacceptable! Absolutely unacceptable! If this is the peak of publicly acceptable intellectual discourse in Berkeley, CA, and our AI alignment research group is based out of Berkeley (where they will inevitably be shaped by the local culture), and we can't even notice that there is a problem, then we're dead! We're just fucking dead! Right? Right?? I can't be the only one who sees this, am I? What is Toronto?????? - -_everyone else shot first_, and I'm _correct on the merits_ - -competence forcing conclusions: http://www.sl4.org/archive/0602/13903.html - -language as an AI capability - -https://www.lesswrong.com/posts/NnohDYHNnKDtbiMyp/fake-utility-functions - -"the love of a man for a woman, and the love of a woman for a man, have not been cognitively derived from each other or from any other value. [...] There are many such shards of desire, all different values." \ No newline at end of file diff --git a/notes/i-tell-myself-notes.txt b/notes/i-tell-myself-notes.txt new file mode 100644 index 0000000..85a16b4 --- /dev/null +++ b/notes/i-tell-myself-notes.txt @@ -0,0 +1,162 @@ +I mostly haven't been doing so well for the past eight months or so. 
I've been reluctant to write about it in too much detail for poorly-understood psychological reasons. Maybe it feels too much like attacking my friends? Maybe I'm not sure how much I can say without leaking too much information from private conversations? But I need to write _something_—not to attack anyone or spill anyone's secrets, but just to _tell the truth_ about why I've been wasting stretches of days in _constant emotional pain_ all year. For my own healing, for my own sanity. + +So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _absolute truthseeking_. Sure, anyone will _say_ that their beliefs are true, but you can tell most people aren't being very serious about it. _We_ were going to be serious: starting with the shared canon of knowledge of cognitive biases, reflectivity, and Bayesian probability theory bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/), to forge and refine a new mental martial art of _systematically correct reasoning_ that we were going to use to optimize ourselves and the world. + +(Oh, and there was also this part about how the uniquely best thing for non-math-geniuses to do with their lives was to earn lots of money and donate it to our founder's nonprofit dedicated to building a recursively self-improving artificial superintelligence to take over the world in order to save our entire future light cone from the coming robot apocalypse. That part's complicated.) + +I guess I feel pretty naïve now, but—I _actually believed our own propaganda_.
I _actually thought_ we were doing something new and special of historical and possibly even _cosmological_ significance. + +And so when I moved to "Portland" (which is actually Berkeley) in 2016, met a lot of trans women in real life for the first time, and did some more reading that convinced me of the at-least-approximate-correctness of the homosexual/autogynephilic two-types theory of MtF transgenderism that I had previously assumed was false (while being privately grateful that [there was a _word_ for my thing](/2017/Feb/a-beacon-through-the-darkness-or-getting-it-right-the-first-time/)) because everyone _said_ it was false + + +rebrand—or, failing that, disband—or, failing that, be destroyed. + + +We're all about, like, science and rationality and stuff, right? And so if there's a theory that's been sitting in the psychology literature for twenty years, that looks _correct_ (or at least, ah, [less wrong](https://tvtropes.org/pmwiki/pmwiki.php/Main/TitleDrop) than the mainstream view), that's _directly_ relevant to a _lot_ of people around here, that seems like the sort of thing + +https://www.lesswrong.com/posts/9KvefburLia7ptEE3/the-correct-contrarian-cluster + +I confess that I _may have [overreacted](/2017/Mar/fresh-princess/) [somewhat](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/)_ when people weren't converging (or [even engaging](/2017/Jan/im-sick-of-being-lied-to/)) with me on the two-types/autogynephilia thing. Psychology is a genuinely difficult empirical science + +I would _never_ write someone off for disagreeing with me about a complicated empirical thing, because complicated empirical things are complicated enough that I have to [take the Outside View seriously](https://www.overcomingbias.com/2007/07/beware-the-insi.html): no matter how "obvious" I think my view is, I might still be wrong for real in real life. 
So, while I was pretty upset for my own idiosyncratic personal reasons, it wasn't cause to _give up entirely on the dream of a rationality community_. + +A.T. and R.B.'s Facebook comments + +emphasize that the categories-are-relative thing is an important grain of truth, but that I expect _us_ to see deeper into the Bayes-structure + +this is _really basic shit_ + + + + + +The way this is supposed to work is that you just make your arguments and trust that good arguments will outcompete bad ones; emailing people begging for a clarification is kind of rude and I want to acknowledge the frame in which I'm the bad guy (or pitiably mentally ill)—but I was taught that arguing with people when they're doing something wrong is actually doing them a _favor_—I was taught that it's virtuous to make an extraordinary effort + +bad-faith nitpicker—I would be annoyed if someone repeatedly begged me to correct a mistake I made in a blog post from five years ago or a Tweet from November + +Losing faith in guided-by-the-beauty-of-our-weapons + +https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science +http://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/ + +"I ought to accept ... within the conceptual boundaries" is a betrayal of everything we stand for + +(I don't consider "if it'll save someone's life" to be a compelling consideration here, for the same reason that "What if Omega punishes all agents who don't choose the alphabetically first option?" doesn't seem like a compelling argument against timeless decision theory. Specifying punishments for agents that don't follow a particular ritual of cognition doesn't help us understand the laws that make intelligence work.)
+ +when _yet another_ (higher-profile, but more careful, this time only committing the error by Gricean implicature rather than overtly—if you're being careful to disclaim most obvious misinterpretations) person committed the fallacy, I _flipped out_ + +The sheer number of hours we invested in this operation is _nuts_: desperate all-out effort, arguing over email with two ppl who were higher-status than me and fighting an entire Discord server three times, $1000, three postcards + +what about _my_ mental health? + +Men who wish they were women do not particularly resemble actual women? We just—don't? This seems kind of obvious, really? + +my friend thinks I'm naive to have expected such a community—she was recommending "What You Can't Say" in 2009—but in 2009, we did not expect that _whether or not I should cut my dick off_ would _become_ a politicized issue, which is new evidence about the wisdom of the original vision + +but if my expectations (about the community) were wrong, that's a problem with my model; reality doesn't have to care + +it's naive to think you can win against an egregore 1000 times bigger than you + +MASSIVE cognitive dissonance, "What? What???" + +the Church + +won't you be embarrassed to leave if we create utopia + +invent a fake epistemology lesson + +we live in a world where reason doesn't work + +_not_ gaslight me about the most important thing in my life? + +I don't think I'm setting my price for joining particularly high here? + +if you're doing systematically correct reasoning, you should be able to get the right answer even on things that don't matter + +There could be similarly egregious errors that I'm not as sensitive to + +I don't think you can build an aligned superintelligence from a culture this crazy + +it's not fair to expect ordinary people to understand the univariate fallacy before they're allowed to say "men aren't women" + +maybe S.
was trying to talk about "legal fiction" categories, but I'm trying to talk about epistemology, and that's a probable reading when you say "categories" + +Hansonian mental illness that people should be less accommodating of + +there has to be a way to apply a finite amount of effort to _correct_ errors, and possibly even profit from correcting errors + +(I'm not making this up! I _couldn't_ make this up!) + +the frame in which I'm + +outside view hobby-horse + +standards + +cognitive dissonance + +smart people clearly know + +free-speech norms + +I'll be alright. Just not tonight. + +maybe 50,000 years from now we'll all be galactic superminds and laugh about all this + +(probably tell the story with no external links, only this-blog links) + +I'll be alright. Just not tonight. But someday. + +Avoiding politically-expensive topics is fine! Fabricating bad epistemology to support a politically-convenient position and then not retracting it after someone puts a lot of effort into explaining the problem is not OK. + +I guess this issue is that the mob thinks that arguments are soldiers and doesn't understand local validity, and if you're trying to appease the mob, you're not even allowed to do the local-validity "This is a bad argument, but the conclusion might be true for other reasons" thing? + + +I think a non-stupid reason is that the way I talk has actually been trained really hard on this subculture for ten years: most of my emails during this whole campaign have contained multiple Sequences or Slate Star Codex links that I can just expect people to have read. I can spontaneously use the phrase "Absolute Denial Macro" in conversation and expect to be understood. That's a massive "home field advantage."
If I just give up on "rationalists" being as sane as we were in 2009 (when we knew that men can't become women by means of saying so), and go out in the world to make intellectual friends elsewhere (by making friends with Quillette readers or arbitrary University of Chicago graduates), then I lose all that accumulated capital. The language I speak is mostly educated American English, but I rely on subculture dialect for a lot. My sister has a chemistry doctorate from MIT (so speaks the language of STEM intellectuals generally), and when I showed her "... To Make Predictions", she reported finding it somewhat hard to read, probably because I casually use phrases like "thus, an excellent motte", and expect to be understood without the reader taking 10 minutes to read the link. This essay, which was me writing from the heart in the words that came most naturally to me, could not be published in Quillette. The links and phraseology are just too context-bound. + +Berkeley "rationalists" are very good at free speech norms and deserve credit for that! But it still feels like a liberal church where you can say "I believe in evolution" without getting socially punished. Like, it's good that you can do that. But I had a sense that more is possible: a place where you can not just not-get-punished for being an evolutionist, but a place where you can say, "Wait! Given all this evidence for natural selection as the origin of design in the biological world, we don't need this 'God' hypothesis anymore. And now that we know that, we can work out whatever psychological needs we were trying to fulfil with this 'church' organization, and use that knowledge to design something that does an even better job at fulfilling those needs!" and have everyone just get it, at least on the meta level. + +I can accept a church community that disagrees on whether evolution is true. (Er, on the terms of this allegory.) 
I can accept a church community that disagrees on what the implications are conditional on the hypothesis that evolution is true. I cannot accept a church in which the canonical response to "Evolution is true! God isn't real!" is "Well, it depends on how you choose to draw the 'God' category boundary." I mean, I agree that words can be used in many ways, and that the answer to questions about God does depend on how the asker and answerer are choosing to draw the category boundary corresponding to the English language word 'God'. That observation can legitimately be part of the counterargument to "God isn't real!" But if the entire counterargument is just, "Well, it depends on how you define the word 'God', and a lot of people would be very sad if we defined 'God' in a way such that it turned out to not exist" ... unacceptable! Absolutely unacceptable! If this is the peak of publicly acceptable intellectual discourse in Berkeley, CA, and our AI alignment research group is based out of Berkeley (where they will inevitably be shaped by the local culture), and we can't even notice that there is a problem, then we're dead! We're just fucking dead! Right? Right?? I can't be the only one who sees this, am I? What is Toronto?????? + +_everyone else shot first_, and I'm _correct on the merits_ + +competence forcing conclusions: http://www.sl4.org/archive/0602/13903.html + +language as an AI capability + +https://www.lesswrong.com/posts/NnohDYHNnKDtbiMyp/fake-utility-functions + +"the love of a man for a woman, and the love of a woman for a man, have not been cognitively derived from each other or from any other value. [...] There are many such shards of desire, all different values." 
+ +I wouldn't hold anyone to standards I wouldn't myself—for whatever that's worth http://zackmdavis.net/blog/2018/07/object-vs-meta-golden-rule/ + + + + +https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong + + +["It is a common misconception that you can define a word any way you like."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences) + +["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) + +["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) + +[There's an idea, which you may have noticed I hate, that "you can define a word any way you like".](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels) + +[And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying "I can define a word any way I like".](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression) + +["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences) + +["And people are lazy. 
They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations) + +[I say all this, because the idea that "You can X any way you like" is a huge obstacle to learning how to X wisely. "It's a free country; I have a right to my own opinion" obstructs the art of finding truth. "I can define a word any way I like" obstructs the art of carving reality at its joints. And even the sensible-sounding "The labels we attach to words are arbitrary" obstructs awareness of compactness.](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes) + + +"One may even consider the act of defining a word as a promise to this effect. Telling someone, "I define the word 'wiggin' to mean a person with green eyes and black hair", by Gricean implication, asserts that the word "wiggin" will somehow help you make inferences / shorten your messages. + +If green-eyes and black hair have no greater than default probability to be found together, nor does any other property occur at greater than default probability along with them, then the word "wiggin" is a lie: The word claims that certain people are worth distinguishing as a group, but they're not. + +In this case the word "wiggin" does not help describe reality more compactly—it is not defined by someone sending the shortest message—it has no role in the simplest explanation. Equivalently, the word "wiggin" will be of no help to you in doing any Bayesian inference. Even if you do not call the word a lie, it is surely an error." (https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) + +[And this suggests another—yes, yet another—reason to be suspicious of the claim that "you can define a word any way you like". 
When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power.](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words) diff --git a/notes/post_ideas.txt b/notes/post_ideas.txt index abe59fa..c20ec43 100644 --- a/notes/post_ideas.txt +++ b/notes/post_ideas.txt @@ -1,20 +1,26 @@ +Self-Identity Is a Schelling Point +Does General Intelligence Deflate Standardized Effect Sizes ...? +On the Argumentative Form "Super-proton Things Tend to Come in Varieties" +Terminology Gap: "Biological vs. Natal" (solution: "Developmental") +Reply to Ozymandias on Fully Consensual Gender +"I Tell Myself to Let the Story End"; Or, A Hill of Validity in Defense of Meaning -"A Love That Is Out of Anyone's Control" -Link: "Schelling Categories, and Simple Membership Tests" -_ Reply to Ozymandias on Fully Consensual Gender -"I Tell Myself to Let the Story End" -Link: Schelling Categories -Link: "The Univariate Fallacy" (lit-search strategy: go through the del Guidice "Multivariate Misgivings" article and its citations; the Archer paper also mentioned this) +Instrumental Categories, and War +A Science Fiction Story Idea I'm Not Skilled Enough to Write + +(lit-search strategy: go through the del Giudice "Multivariate Misgivings" article and its citations; the Archer paper also mentioned this) Phenotypic Identity and Memetic Capture +Link: "The Univariate Fallacy" + +Commentary on "Blegg Mode" -_ Mon: Commentary on "Blegg Mode" (https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode) -_ Tue: On the Argumentative Form "Super-proton Things Tend to Come in Varieties" -_ Terminology Gap: "Biological vs.
Natal" (solution: "Developmental") +Faster Than Science (Transgender Edition) +(https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode) karaoke, celebrity, cognitive load; I don't want people to do that for me @@ -29,6 +35,7 @@ _ I Mean, Yes, I Agree That Man Should Allocate Some More Categories, But https://www.lesswrong.com/posts/4ZvJab25tDebB8FGE/you-have-about-five-words : well, that explains the TWAW mantra +The Parable of the Good Witch First-Offender Models and the Unilateralist's Blessing The Parable of the Faithful Man and the Cowardly Priest @@ -82,7 +89,7 @@ Laser 13 a big essay about Batesian mimickry -_ High-Dimensional Social Science and the Conjunction of Small Effect Sizes +_ "More Than We Can Say": High-Dimensional Social Science and ... _ Codes of Convergence; Or, Smile More _ "But I'm Not Quite Sure What That Means": Costs of Nonbinary Gender as a Social Technology _ "I Will Fight [...]": LGBT Patriotism and the Moral Fine-Tuning Objection @@ -119,7 +126,7 @@ Q The Gender Czar's Compromise "Love Like You" Blindspot Product Review: FaceApp -Faster Than Science (Transgender Edition) + Q Book Review: Nevada Product Review: Oculus Go diff --git a/notes/tweet_pad.txt b/notes/tweet_pad.txt index ff8972a..eca0983 100644 --- a/notes/tweet_pad.txt +++ b/notes/tweet_pad.txt @@ -8,6 +8,8 @@ Can I be forgiven if, from my perspective, it looks like the problem is with Ber where this "Everything is socially constructed! Nothing correlates with anything else!"
performance is the price of being Good, I'm willing to be Bad if that means I can say that SOME things aren't socially constructed, and use language to refer to correlations less than one 5/5 +I would be less likely to listen to Bad Men saying factually correct things, if there were more non-Bad non-men saying factually correct things in the relevant areas of interest—guess I'll have to become one (except maybe not the "non-man" part because biological sex is immutable) + Then offer her a job at the salary of an 8x engineer. (Get it? It's an efficient markets joke: if companies really behaved like this, they would compete to equilibrium & there would be no pay gap in the 1st place. But real-world markets aren't efficient for many reasons 😰) You know who else believed that "biological sex" was a useful concept? Hitler! -- 2.17.1