+
+
+depression-based forecasting in conversation with Carl
+> seems more ... optimistic, Kurzweilian?... to suppose that the tech gets used correctly the way a sane person would hope it would be used
+
+I like this sentence (from "The Matrix Is a System")—
+> If someone is a force on your epistemics towards the false, robustly to initial conditions and not as a fluke, that person is hostile.
+
+An analogy between my grievance against Yudkowsky and Duncan's grievance against me: I think Yudkowsky is obligated to search for and present "anti-trans" arguments in conjunction with searching for and presenting "pro-trans" arguments. Duncan (I'm wildly guessing??) thinks I'm obligated to search for and present "pro-Duncan" arguments in addition to "anti-Duncan" arguments?? A key disanalogy: Yudkowsky is _afraid_ to post "anti-trans" content; I'm not afraid to post pro-Duncan content; I just think agreements are less interesting than disagreements. To prove the disanalogy, maybe I should write a "Things I Liked About 'Basics of Rationalist Discourse'" post as a peace offering??
+
+"Let's not talk to Eliezer." "He's sad and confusing." Commentary reference??
+
+https://equilibriabook.com/molochs-toolbox/
+
+> All of her fellow employees are vigorously maintaining to anybody outside the hospital itself, should the question arise, that Merrin has always cosplayed as a Sparashki while on duty, in fact nobody's ever seen her out of costume; sure it's a little odd, but lots of people are a little odd.
+>
+> (This is not considered a lie, in that it would be universally understood and expected that no one in this social circumstance would tell the truth.)
+
+I still had Sasha's sleep mask
+
+"Wilhelm" and Steven Kaas aren't Jewish, I think
+
+I agree that Earth is mired in random junk that caught on (like p-values), but ... so are the rats
+
+I'm https://www.lesswrong.com/posts/XvN2QQpKTuEzgkZHY/?commentId=f8Gour23gShoSyg8g at gender and categorization
+
+picking cherries from a cherry tree
+
+http://benjaminrosshoffman.com/honesty-and-perjury/#Intent_to_inform
+
+https://astralcodexten.substack.com/p/trying-again-on-fideism
+> I come back to this example less often, because it could get me in trouble, but when people do formal anonymous surveys of IQ scientists, they find that most of them believe different races have different IQs and that a substantial portion of the difference is genetic. I don’t think most New York Times readers would identify this as the scientific consensus. So either the surveys - which are pretty official and published in peer-reviewed journals - have managed to compellingly misrepresent expert consensus, or the impressions people get from the media have, or "expert consensus" is extremely variable and complicated and can’t be reflected by a single number or position.
+
+https://nickbostrom.com/astronomical/waste
+
+Michael Vassar has _also_ always been a very complicated person who's changed his emphases in ways Yudkowsky dislikes
+
+
+[TODO:
+Is this the hill _he_ wants to die on? If the world is ending either way, wouldn't it be more dignified for him to die _without_ Stalin's dick in his mouth?
+
+> The Kiritsugu shrugged. "When I have no reason left to do anything, I am someone who tells the truth."
+https://www.lesswrong.com/posts/4pov2tL6SEC23wrkq/epilogue-atonement-8-8
+
+ * Maybe not? If "dignity" is a term of art for log-odds of survival, maybe self-censoring to maintain influence over what big state-backed corporations are doing is "dignified" in that sense
+]
+
+The old vision was nine men and a brain in a box in a basement. (He didn't say _men_.)
+
+Subject: "I give up, I think" 28 January 2013
+> You know, I'm starting to suspect I should just "assume" (choose actions conditional on the hypothesis that) that our species is "already" dead, and we're "mostly" just here because Friendly AI is humanly impossible and we're living in an unFriendly AI's ancestor simulation and/or some form of the anthropic doomsday argument goes through. This, because the only other alternatives I can think of right now are (A) arbitrarily rejecting some part of the "superintelligence is plausible and human values are arbitrary" thesis even though there seem to be extremely strong arguments for it, or (B) embracing a style of thought that caused me an unsustainable amount of emotional distress the other day: specifically, I lost most of a night's sleep being mildly terrified of "near-miss attempted Friendly AIs" that pay attention to humans but aren't actually nice, wondering under what conditions it would be appropriate to commit suicide in advance of being captured by one. Of course, the mere fact that I can't contemplate a hypothesis while remaining emotionally stable shouldn't make it less likely to be true out there in the real world, but in this kind of circumstance, one really must consider the outside view, which insists: "When a human with a history of mental illness invents a seemingly plausible argument in favor of suicide, it is far more likely that they've made a disastrous mistake somewhere, than that committing suicide is actually the right thing to do."
+
+
+[TODO—
+
+The human era wasn't going to last forever. Turing saw it in 1951. ("It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. [...] At some stage therefore we should have to expect the machines to take control[.]") _George Eliot_ [saw it in _1880_](http://www.online-literature.com/george_eliot/theophrastus-such/17/). ("Am I already in the shadow of the coming race? And will the creatures who are to transcend and supersede us be steely organisms, giving off the effluvia of the laboratory and performing with infallible exactness more than everything that we have performed with a slovenly approximativeness and self-defeating inaccuracy?")
+
+ * I've believed since Kurzweil that technology will remake the world sometime in the 21st century; it's just that "the machines won't replace us, because we'll be them" doesn't seem credible
+
+list of lethalities
+
+ * I agree that it would be nice if Earth had a plan; it would be nice if people had figured out the stuff Yudkowsky did earlier.
+
+Isaac Asimov wrote about robots in his fiction, and even the problem of alignment (in the form of his Three Laws of Robotics), and yet he still portrayed a future Galactic Empire populated by humans, which seems very silly.
+
+/2017/Jan/from-what-ive-tasted-of-desire/
+
+]
+
+> Similarly, a rationalist isn't just somebody who respects the Truth.
+> All too many people respect the Truth.
+> [...]
+> A rationalist is somebody who respects the _processes of finding truth_.
+https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms
+
+> Why is school like a boner?
+> It’s long and hard unless you're Asian.
+
+Robert Heinlein
+> “What are the facts? Again and again and again – what are the facts? Shun wishful thinking, ignore divine revelation, forget what “the stars foretell,” avoid opinion, care not what the neighbors think, never mind the unguessable “verdict of history” – what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts!”
+https://www.goodreads.com/quotes/38764-what-are-the-facts-again-and-again-and-again
+
+
+ "sender_name": "Zack M. Davis",
+ "timestamp_ms":
+ "content": "at this point, I actually am just starting to hate trans women by default (the visible kind, not the androphilic early-transitioning kind); the \"indulging a mental illness that makes them want to become women\" model is waaaaay more accurate than the standard story, and the people who actually transition are incentivized/selected for self-delusion, which is really unfair to the people who aren't delusional about it",
+ "type": "Generic"
+ },
+ "sender_name":
+ "timestamp_ms": [Sat Jan 21 10:06:17 PST 2017]
+ "content": "I'm afraid to even think that in the privacy of my own head, but I agree with you that is way more reasonable",
+ "type": "Generic"
+
+"but the ideological environment is such that a Harvard biologist/psychologist is afraid to notice blatantly obvious things in the privacy of her own thoughts, that's a really scary situation to be in (insofar as we want society's decisionmakers to be able to notice things so that they can make decisions)",
+
+In October 2016,
+
+
+if [...] wrote her own 10,600-word draft Document explaining why she thought [...] is actually a girl, that would be really interesting!—but rather that no one else seemed _interested in having a theory_, as opposed to leaping to institute a social convention that, when challenged, is claimed to have no particular consequences and no particular objective truth conditions, even though it's not clear why there would be moral urgency to implement this convention if it weren't for its consequences.
+
+https://twitter.com/ESYudkowsky/status/1634338145016909824 re "malinformation"
+> If we don't have the concept of an attack performed by selectively reporting true information - or, less pleasantly, an attack on the predictable misinferences of people we think less rational than ourselves - the only socially acceptable counter is to say the info is false.
+
+Blanchard Tweets my blog in Feb and March 2017
+https://twitter.com/BlanchardPhD/status/830580552562524160
+https://twitter.com/BlanchardPhD/status/837846616937750528
+
+
+I said that I couldn't help but be reminded of a really great short story that I remembered reading back in—it must have been 'aught-nine. I thought it was called "Darkness and Light", or something like that. It was about a guy who gets transported to a fantasy world where he has a magic axe that yells at him sometimes, and he's prophesied to defeat the bad guy, and he and his allies have to defeat these ogres to reach the bad guy's lair. And when they get there, the bad guy _accuses them of murder_ for killing the ogres on the way there.
+
+(The story was actually Yudkowsky's ["The Sword of Good"](https://www.yudkowsky.net/other/fiction/the-sword-of-good), but I was still enjoying the "Robin Hanson's blog" æsthetic.)
+
+And the moral was—or at least, the moral _I_ got out of it was—there's something messed-up about the way fiction readers just naïvely accept the author's frame, instead of looking at the portrayed world with fresh eyes and applying their _own_ reason and their _own_ morality to it.
+
+need to fit this in somewhere—
+"Gee, I wonder why women-who-happen-to-be-trans are so much more likely to read Slate Star Codex, and be attracted to women, and, um, have penises, than women-who-happen-to-be-cis?"
+
+Everyone believed this in 2005! Everyone _still_ believes this!
+
+
+> Dear Totally Excellent Rationalist Friends:
+> As a transhumanist and someone with a long, long history of fantasizing about having the property, I am of course strongly in favor of there being social norms and institutions that are carefully designed to help people achieve their lifelong dream of acquiring the property, or rather, the best approximation thereof that is achievable given the marked limitations of existing technology.
+> However, it's also kind of important to notice that fantasizing about having the property without having yet sought out interventions to acquire the property, is not the same thing as somehow already literally having the property in some unspecified metaphysical sense! The process of attempting to acquire the property does not propagate backwards in time!
+> This is not an advanced rationality skill! This is the "distinguishing fantasy from reality" skill! I realize that explaining this in clear language has the potential to hurt some people's feelings! Unfortunately, as an aspiring epistemic rationalist (epistemic rationality is the only kind of rationality; "instrumental rationality" is a phrase someone made up in order to make themselves feel better about lying), I have a GODDAMNED MORAL RESPONSIBILITY to hurt that person's feelings!
+> People should get what they want. We should have social norms that are carefully designed to help people get what they want. Unfortunately, helping people get the things that they want is a hard problem, because people are complicated and the world is complicated. That's why, when renegotiating social norms to apply to a historically unprecedented situation, it's important to have a meta-norm of not socially punishing people for clearly describing a hypothesis about the nature of the problem people are trying to solve, even if the hypothesis hurts someone's feelings, and even if there would probably be genuinely bad consequences if the hypothesis were to be believed by the masses of ordinary dumb people who hate our guts anyway.
+> I'm proud of my history of fantasizing about having the property, and I'm proud of my rationalist community, and I don't want either of them taken over by CRAZY PEOPLE WHO THINK THEY CAN EDIT THE PAST.
+(170 comments)
+
+
+> So, unfortunately, I never got very far in the _Daphne Koller and the Methods of Rationality_ book (yet! growth m—splat, AUGH), but one thing I do remember is that many different Bayesian networks can represent the same probability distribution. And the reason I've been running around yelling at everyone for nine months is that I've been talking to people, and we _agree_ on the observations that need to be explained, and yet we explain them in completely different ways. And I'm like, "My network has SO MANY FEWER ARROWS than your network!" And they're like, "Huh? What's wrong with you? Your network isn't any better than the standard-issue network. Why do you care so much about this completely arbitrary property 'number of arrows'? Categories were made for the man, not man for the categories!" And I'm like, "Look, I didn't get far enough in the _Daphne Koller and the Methods of Rationality_ book to understand why, but I'm PRETTY GODDAMNED SURE that HAVING FEWER ARROWS MAKES YOU MORE POWERFUL. YOU DELUSIONAL BASTARDS! HOW CAN YOU POSSIBLY GET THIS WRONG please don't hurt me Oh God please don't hurt me I'm sorry I'm sorry."
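The "fewer arrows" boast in the quote above is checkable: every arrow deleted from a Bayesian network asserts an additional conditional-independence constraint, so the sparser network makes a stronger, more falsifiable claim about the world. A minimal sketch in Python (the function name and toy distributions are mine, not anything from the quoted post):

```python
import itertools

def fits_no_arrow_network(joint, tol=1e-9):
    """Return True if a joint distribution over two binary variables
    satisfies the independence constraint asserted by the zero-arrow
    network over {A, B}: P(a, b) == P(a) * P(b) for all values."""
    p_a = {a: joint[(a, 0)] + joint[(a, 1)] for a in (0, 1)}  # marginal of A
    p_b = {b: joint[(0, b)] + joint[(1, b)] for b in (0, 1)}  # marginal of B
    return all(abs(joint[(a, b)] - p_a[a] * p_b[b]) < tol
               for a, b in itertools.product((0, 1), repeat=2))

# By contrast, the one-arrow network A -> B can represent *any* joint
# distribution (P(a) and P(b|a) are free parameters), so it forbids nothing.
correlated = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

fits_no_arrow_network(correlated)   # False: this distribution needs the arrow
fits_no_arrow_network(independent)  # True: the sparser network suffices
```

Both networks can "explain" the independent distribution, but only the one with fewer arrows would have been refuted by the correlated one, which is the sense in which fewer arrows makes the model more powerful.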
+
+> The truthful and mean version: _The Man Who Would Be Queen_, Ch. 9
+> The truthful and nice version: "Becoming What We Love" [http://annelawrence.com/becoming_what_we_love.pdf](http://annelawrence.com/becoming_what_we_love.pdf)
+> The technically-not-lying version: [http://www.avitale.com/developmentalreview.htm](http://www.avitale.com/developmentalreview.htm)
+> The long version: [https://sillyolme.wordpress.com/](https://sillyolme.wordpress.com/)
+(180 comments)
+
+the other week, "Chaya" had put up a matchmaking thread on her Facebook wall, hoping to connect friends of hers looking for new romantic partners, and also reminding people about _reciprocity.io_, a site someone in the community had set up to match people to date or hang out with. Brent Dill had commented that _reciprocity.io_ had been useless, and I said (on 7 February) that the hang-out matching had been valuable to me, even if the romantic matching was useless for insufficiently high-status males.
+
+matchmaking thread (thread was 4 February, relevant comments were 7 February): https://www.facebook.com/Katie.Cohen821/posts/pfbid02PNKKSCBTC99ULzPsueKvZkYmpNvELrkEfGymcrAfWZPu39LRCyh2bE4a9Ht3yg3Dl
+
+
+Sat Feb 11 12:49:33 PST 2017
+just like it's possible to identify as a woman despite not having unusually many female-typical traits, it's also possible to identify as a liberal despite not having unusually many liberal-typical beliefs
+
+
+
+ "sender_name": "Zack M. Davis",
+ "timestamp_ms": 1530601286979,
+        "content": "and am continually haunted by the suspicion that the conjunction of my biological sex and my highly refined taste for bullet-biting, may not be a coincidence",
+ "type": "Generic"
+ },
+ {
+ "sender_name": "Zack M. Davis",
+ "timestamp_ms": 1530601211347,
+        "content": "I always want to fantasize that if I were a woman, I would have the strength to bite the bullet, \"Yes, we masculine-of-center women are forming the coalition to petition for better treatment by Society, while acknowleding that there are systematic evolutionary reasons why Society is this way currently\"",
+ "type": "Generic"
+ },
+ {
+ "sender_name": "Zack M. Davis",
+ "timestamp_ms": 1530601116141,
+ "content": "there was a NRx whose take [...] was so disagreeable that he got downvoted into oblivion on /r/slatestarcodex (and remember, /r/slatestarcodex is already pretty right-wing by San Francisco standards) who also had a post on the \"feminism appeals to masculine-of-center women\" hypothesis http://www.ericwulff.com/blog/?p=1861 which deserves more credit than it gets (it acknowledges within-group variation!)",
+ "type": "Generic"
+
+
+----
+
+He doesn't want to talk about pivotal acts because anything he says in public might be misinterpreted—but how does the disutility of being misinterpreted plausibly outweigh the utility of someone having something useful to say?? I feel like his modern answer is some variation of "Everyone but me is retarded", but—that's not what he thought about decision theory back in 2010, when he validated/honored Wei Dai and Vladimir Nesov's contributions! (find cite) And now, he says he wishes he hadn't talked about decision theory ...