+
[TODO—
* Jessica: scorched-earth campaign should mostly be in meatspace social reality
* my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)
* secret posse member: level of social-justice talk makes me not want to interact with this post in any way
]
-On 4 July, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.
+On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.
[TODO: "AI Timelines Scam"
* I still sympathize with the "mainstream" pushback against the scam/fraud/&c. language being used to include Elephant-in-the-Brain-like distortions
In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
-In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft.
+In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft. (Jessica said that it was obvious even to 10-year-olds that partisan politics distorts impressions by filtering evidence. "[D]o you think we could get a ten-year-old to explain it to Eliezer Yudkowsky?" I asked.)
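(Not the actual model from the post, but a minimal sketch of the general idea, under toy assumptions of my own: observers each get a noisy signal about a binary question, only signals favoring one side get published, and a reader who updates on the published record as if it were an unfiltered sample ends up confidently wrong.)

```python
# Minimal illustrative sketch, not the model from the post: the observer count,
# signal accuracy, and "Green"/"Blue" framing are made up for this example.
import math
import random

def naive_posterior(truth_is_green, n_observers=100, accuracy=0.7,
                    censor_blue=False, prior=0.5):
    """Posterior P(Green is right) computed from the *published* signals only."""
    log_odds = math.log(prior / (1 - prior))
    step = math.log(accuracy / (1 - accuracy))
    for _ in range(n_observers):
        signal_says_green = (random.random() < accuracy) == truth_is_green
        if censor_blue and not signal_says_green:
            continue  # Blue-favoring evidence never gets shared
        log_odds += step if signal_says_green else -step
    return 1 / (1 + math.exp(-log_odds))

random.seed(0)
print(naive_posterior(truth_is_green=False, censor_blue=False))  # near 0: tracks the truth
print(naive_posterior(truth_is_green=False, censor_blue=True))   # near 1: confidently wrong
```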
In October 2019's ["Algorithms of Deception!"](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception), I exhibited some toy Python code modeling different kinds of deception. A function that faithfully passes along the observations it sees as input to a second function lets that function construct a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function comes up with a worse (less accurate) probability distribution.
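(Again, this isn't the actual code from the post; it's a stand-in in the same spirit, with a made-up urn-drawing setup of my own, contrasting an honest reporter with fabricating, omitting, and gerrymandering ones.)

```python
# Illustrative sketch only: the color distribution and the particular
# manipulations are invented for this example, not taken from the post.
from collections import Counter
import random

TRUE_DISTRIBUTION = {"red": 0.6, "blue": 0.3, "green": 0.1}

def draw_observations(n=10_000):
    colors, weights = zip(*TRUE_DISTRIBUTION.items())
    return random.choices(colors, weights=weights, k=n)

def estimate(observations):
    """The second function: build a frequency distribution from what it was given."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {color: counts[color] / total for color in counts}

honest = draw_observations()
fabricated = honest + ["green"] * 5_000            # invents evidence
omitted = [c for c in honest if c != "blue"]       # selectively omits evidence
gerrymandered = ["warm" if c in ("red", "green") else "cool" for c in honest]

print(estimate(honest))         # close to the true distribution
print(estimate(fabricated))     # overstates green
print(estimate(omitted))        # blue disappears entirely
print(estimate(gerrymandered))  # categories no longer carve the data at its joints
```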
✓ New York [pt. 6]
✓ scuffle on "Yes Requires the Possibility" [pt. 4]
✓ "Unnatural Categories Are Optimized for Deception" [pt. 4]
+✓ Eliezerfic fight: will-to-Truth vs. will-to-happiness [pt. 6]
+- regrets, wasted time, conclusion [pt. 6]
- "Lesswrong.com is dead to me" [pt. 4]
+_ Eliezerfic fight: Ayn Rand and children's morals [pt. 6]
_ AI timelines scam [pt. 4]
_ secret thread with Ruby [pt. 4]
_ progress towards discussing the real thing [pt. 4]
_ epistemic defense meeting [pt. 4]
+_ Eliezerfic fight: Big Yud tests me [pt. 6]
+_ Eliezerfic fight: derail with lintamande [pt. 6]
+_ Eliezerfic fight: knives, and showing myself out [pt. 6]
_ reaction to Ziz [pt. 4]
_ confronting Olivia [pt. 2]
_ State of Steven [pt. 4]
_ Somni [pt. 4]
-_ rude maps [pt. 4]
_ culture off the rails; my warning points to Vaniver [pt. 4]
_ December 2019 winter blogging vacation [pt. 4]
_ plan to reach out to Rick [pt. 4]
_ the hill he wants to die on [pt. 6?]
_ recap of crimes, cont'd [pt. 6]
_ lead-in to Sept. 2021 Twitter altercation [pt. 6]
-_ regrets, wasted time, conclusion [pt. 6]
+
+bigger blocks—
+_ Dolphin War finish
+_ Michael Vassar and the Theory of Optimal Gossip
+_ psychiatric disaster
+_ the story of my Feb. 2017 Facebook crusade [pt. 2]
+_ the story of my Feb./Apr. 2017 recent madness [pt. 2]
not even blocked—
+_ A/a alumna consult? [pt. 2]
_ "Even our pollution is beneficial" [pt. 6]
_ Scott Aaronson on the blockchain of science [pt. 6]
_ Re: on legitimacy and the entrepreneur; or, continuing the attempt to spread my sociopathic awakening onto Scott [pt. 2 somewhere]
_ "EA" brand ate the "rationalism" brand—even visible in MIRI dialogues
_ Anna's heel–face turn
-bigger blocks—
-_ dath ilan and Eliezerfic fight
-_ Dolphin War finish
-_ Michael Vassar and the Theory of Optimal Gossip
-_ psychiatric disaster
-_ the story of my Feb. 2017 Facebook crusade [pt. 2]
-_ the story of my Feb./Apr. 2017 recent madness [pt. 2]
-
it was actually "wander onto the AGI mailing list wanting to build a really big semantic net" (https://www.lesswrong.com/posts/9HGR5qatMGoz4GhKj/above-average-ai-scientists)
With internet available—
+_ space opera TVTrope?
+_ Word of God TVTropes page
+_ March 2017 Blanchard Tweeting my blog?
+_ bug emoji
+_ what was I replying to, re: "why you actually don't want to be a happier but less accurate predictor"?
_ Meta-Honesty critique well-received: cite 2019 review guide
_ https://www.greaterwrong.com/posts/2Ses9aB8jSDZtyRnW/duncan-sabien-on-moderating-lesswrong#comment-aoqWNe6aHcDiDh8dr
_ https://www.greaterwrong.com/posts/trvFowBfiKiYi7spb/open-thread-july-2019#comment-RYhKrKAxiQxY3FcHa
+_ relevant screenshots for Eliezerfic play-by-play
_ correct italics in quoted Eliezerfic back-and-forth
_ lc on elves and Sparashki
_ Nate would later admit that this was a mistake (or ask Jessica where)
_ Yudkowsky's LW moderation policy
far editing tier—
+_ maybe current-year LW would be better if more marginal cases _had_ bounced off because of e.g. sexism
+_ footnote to explain that when I'm summarizing a long Discord conversation to taste, I might move things around into "logical" time rather than "real time"; e.g. Yudkowsky's "powerfully relevant" and "but Superman" comments were actually one right after the other; and, e.g., I'm filling in more details that didn't make it into the chat, like innate kung fu
_ re "EY is a fraud": it's a _conditional_ that he can modus tollens if he wants
_ NRx point about HBD being more than IQ, ties in with how I think the focus on IQ is distasteful, but I have political incentives to bring it up
_ "arguing for a duty to self-censorship"—contrast to my "closing thoughts" email
terms to explain on first mention—
_ Civilization (context of dath ilan)
-_ Valinor
+_ Valinor (probably don't name it, actually)
_ "Caliphate"
_ "rationalist"
_ Center for Applied Rationality
_ MIRI
_ "egregore"
+_ eliezera
people to consult before publishing, for feedback or right of objection—
_ Katie (pseudonym choice)
_ Alicorn: about privacy, and for Melkor Glowfic reference link
_ hostile prereader (April, J. Beshir, Swimmer, someone else from Alicorner #drama)
-_ Kelsey (briefly)
+_ Kelsey
_ NRx Twitter bro
_ maybe SK (briefly about his name)? (the memoir might have the opposite problem (too long) from my hostile-shorthand Twitter snipes)
_ Megan (that poem could easily be about some other entomologist named Megan) ... I'm probably going to cut that §, though
_ David Xu? (Is it OK to name him in his LW account?)
+_ afford various medical procedures
marketing—
_ Twitter
* Maybe not? If "dignity" is a term of art for log-odds of survival, maybe self-censoring to maintain influence over what big state-backed corporations are doing is "dignified" in that sense
]
-The old vision was nine men in a brain in a box in a basement. (He didn't say _men_.)
+The old vision was nine men and a brain in a box in a basement. (He didn't say _men_.)
Subject: "I give up, I think" 28 January 2013
> You know, I'm starting to suspect I should just "assume" (choose actions conditional on the hypothesis that) that our species is "already" dead, and we're "mostly" just here because Friendly AI is humanly impossible and we're living in an unFriendly AI's ancestor simulation and/or some form of the anthropic doomsday argument goes through. This, because the only other alternatives I can think of right now are (A) arbitrarily rejecting some part of the "superintelligence is plausible and human values are arbitrary" thesis even though there seem to be extremely strong arguments for it, or (B) embracing a style of thought that caused me an unsustainable amount of emotional distress the other day: specifically, I lost most of a night's sleep being mildly terrified of "near-miss attempted Friendly AIs" that pay attention to humans but aren't actually nice, wondering under what conditions it would be appropriate to commit suicide in advance of being captured by one. Of course, the mere fact that I can't contemplate a hypothesis while remaining emotionally stable shouldn't make it less likely to be true out there in the real world, but in this kind of circumstance, one really must consider the outside view, which insists: "When a human with a history of mental illness invents a seemingly plausible argument in favor of suicide, it is far more likely that they've made a disastrous mistake somewhere, than that committing suicide is actually the right thing to do."
"content": "I'm afraid to even think that in the privacy of my own head, but I agree with you that is way more reasonable",
"type": "Generic"
-"but the ideological environment is such that a Harvard biologist/psychologist is afraid to notice blatantly obvious things in the privacy of her own thoughts, that's a really scary situation to be in (insofar as we want society's decisionmakers to be able to notice things so that they can make decisions)",
\ No newline at end of file
+"but the ideological environment is such that a Harvard biologist/psychologist is afraid to notice blatantly obvious things in the privacy of her own thoughts, that's a really scary situation to be in (insofar as we want society's decisionmakers to be able to notice things so that they can make decisions)",
+
+
+
+In October 2016, I messaged an alumna of my App Academy class of November 2013 (back when App Academy was still cool and let you sleep on the floor if you wanted), effectively asking to consult her expertise on feminism. "Maybe you don't want people like me in your bathroom for the same reason you're annoyed by men's behavior on trains?"
+