From f2f396f01d2e5ae4dc79c1a1be4d35f5791e9646 Mon Sep 17 00:00:00 2001 From: "M. Taylor Saotome-Westlake" Date: Sat, 7 Dec 2019 07:31:33 -0800 Subject: [PATCH] last Friday night --- notes/i-tell-myself-notes.txt | 18 ++++++++---------- notes/i-tell-myself-sections.md | 26 ++++++++++++++++++++++++-- 2 files changed, 32 insertions(+), 12 deletions(-) diff --git a/notes/i-tell-myself-notes.txt b/notes/i-tell-myself-notes.txt index 73b6123..d53ddf6 100644 --- a/notes/i-tell-myself-notes.txt +++ b/notes/i-tell-myself-notes.txt @@ -368,16 +368,6 @@ http://zackmdavis.net/blog/2016/07/concerns/ http://zackmdavis.net/blog/2016/09/concerns-ii/ "you yourself admit that your model won't assign literally all of its probability mass to the exact outcome?!" -"category boundaries" were just a visual metaphor for talking about beliefs? There's a little naive Bayes model in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that I can use observations to update my beliefs about category-membership, and use my beliefs about category-membership to predict observations. The set of things I'll classify as a blegg with probability greater than p is conveniently visualized as an area with a boundary in blueness–eggness space, but the beliefs are the important thing. - -The "borders" metaphor is particularly galling if—[unlike the popular author](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/)—you actually know math. 
- -https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models - - - - - "Don't Revere the Bearer of Good Info" https://www.lesswrong.com/posts/tSgcorrgBnrCH8nL3/don-t-revere-the-bearer-of-good-info casuistry @@ -545,3 +535,11 @@ Steve said it's funny that Ziz and I are ragequitting over opposite things, and https://www.lesswrong.com/posts/KmghfjH6RgXvoKruJ/hand-vs-fingers + +I don't _care_ if the blatantly-misleading statements were carefully worded to permit a true interpretation such that they're not technically "lying." + +This situation is _fucked_. I don't care whose "fault" it is. I don't want to "blame" anyone. But as the first step to making things less fucked, I need to _write about the world I see_—and you are, still, a pretty prominent part of my mental universe. + +Katie Herzog and Jesse Singal + +Heinlein diff --git a/notes/i-tell-myself-sections.md b/notes/i-tell-myself-sections.md index b80e257..5a0ec21 100644 --- a/notes/i-tell-myself-sections.md +++ b/notes/i-tell-myself-sections.md @@ -1,4 +1,4 @@ -(I typically eschew the use of boldface in prose, but as a strategic concession to people's lack of reading comprehension, I'll be bolding key sentences that I fear people would otherwise fail to process and misunderstand my position as a result.) +**(I typically eschew the use of boldface in prose, but will be bolding key sentences in this post as a strategic concession to people's lack of reading comprehension.)** --- @@ -50,9 +50,14 @@ If it cost $200K, I would just take out a bank loan and _do it_. Not because I like my voice, but because +[trans woman one word] +[maybe it would be a good idea ten years ago] + ----- -I definitely don't want to call my friend Irene a man. That would be crazy! Because **her transition _actually worked_.** Because it actually worked _on the merits_. _Not_ because I'm _redefining concepts in order to be nice to her_. 
When I look at her, whatever algorithm my brain uses to sort people into "woman"/"man"/"not sure" buckets, returns "woman." +I definitely don't want to call (say) my friend Irene a man. That would be crazy! Because **her transition _actually worked_.** Because it actually worked _on the merits_. _Not_ because I'm _redefining concepts in order to be nice to her_. When I look at her, whatever algorithm my brain _ordinarily_ uses to sort people into "woman"/"man"/"not sure" buckets, returns "woman." + +**If it looks like a duck, and it quacks like a duck, and you can model it as a duck without making any grievous prediction errors, then it makes sense to call it a "duck" in the range of circumstances in which your model continues to be useful**, even if a pedant might point out that it's really an [Anatid](https://en.wikipedia.org/wiki/Anatidae)-[oid](https://en.wiktionary.org/wiki/-oid#Suffix) robot, or that that species is technically a goose. ----- @@ -172,11 +177,28 @@ The Popular Author "People started threatening to use my bad reputation to discredit the communities I was in and the causes I cared about most." +[lightning post assumes invincibility] + ---- +The "national borders" metaphor is particularly galling if—[unlike the popular author](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/)—you _actually know the math_. 
+ +If I have a "blegg" concept for blue egg-shaped objects—uh, this is [our](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) [standard](https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories) [example](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside), just [roll with it](http://unremediatedgender.space/2018/Feb/blegg-mode/)—what that _means_ is that (at some appropriate level of abstraction) there's a little [Bayesian network](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that if I see a black-and-white photograph of an egg-shaped object, I can use the observed eggness to update my belief about category-membership, and then use category-membership to predict that the object is probably blue, even though the photograph doesn't show me the color. + +"Category boundaries" were just a _visual metaphor_ for the math: the set of things I'll classify as a blegg with probability greater than _p_ is conveniently _visualized_ as an area with a boundary in blueness–eggness space. + [wireheading and war are the only two reasons to] ----- [psychological unity of humankind and sex] https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind + +---- + +[ppl don't click links—quick case for AGP—80% is not 100, but] + +----- + +["delusional perverts", no one understands me] -- 2.17.1
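
The "blegg" network described in the notes above (observation nodes for blueness and eggness feeding a central category-membership node) can be sketched as a toy naive Bayes computation. All probability values below are made-up numbers for illustration, not anything from the notes; the point is the direction of inference: observed shape updates category-membership, and category-membership predicts the unobserved color.

```python
# Toy naive Bayes sketch of the "blegg" category model.
# Hypothetical parameters, chosen only to make the arithmetic clean.
P_BLEGG = 0.5                            # prior P(object is a blegg)
P_EGG_GIVEN = {True: 0.9, False: 0.1}    # P(egg-shaped | blegg?), by category
P_BLUE_GIVEN = {True: 0.9, False: 0.1}   # P(blue | blegg?), by category

def posterior_blegg(egg_shaped: bool) -> float:
    """P(blegg | observed shape), by Bayes' rule on the shape observation."""
    def likelihood(is_blegg: bool) -> float:
        p = P_EGG_GIVEN[is_blegg]
        return p if egg_shaped else 1 - p
    numerator = likelihood(True) * P_BLEGG
    denominator = numerator + likelihood(False) * (1 - P_BLEGG)
    return numerator / denominator

def predict_blue(egg_shaped: bool) -> float:
    """P(blue | observed shape): the 'black-and-white photograph' inference.

    Color is never observed directly; we marginalize over category
    membership, so shape -> category -> predicted color.
    """
    p = posterior_blegg(egg_shaped)
    return p * P_BLUE_GIVEN[True] + (1 - p) * P_BLUE_GIVEN[False]

# An egg-shaped object in a black-and-white photo is probably a blegg
# (posterior 0.9 under these numbers), and therefore probably blue (0.82).
```

On this picture, the "category boundary" is just the region of blueness–eggness space where the posterior exceeds some threshold _p_; the beliefs, not the drawn border, are doing the work.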