X-Git-Url: http://unremediatedgender.space/source?a=blobdiff_plain;f=notes%2Fa-hill-of-validity-sections.md;h=64b1680e63e8645136588b343e4e5962f51c6140;hb=48e72f58d89b7b152bed0ab0b14d9beeb59d0547;hp=1c5ec5386916ea8fbe2badb2c34652a7aa5501f3;hpb=ae3c8205a2024411860995bee1794f25f812f334;p=Ultimately_Untrue_Thought.git

diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index 1c5ec53..64b1680 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -1,15 +1,12 @@
-on deck—
-_ Let's recap
-_ If he's reading this ...
-_ Perhaps if the world were at stake
-_ ¶ about social justice and defying threats
-_ ¶ about body odors
-_ regrets and wasted time
-_ talk about the 2019 Christmas party
-_ excerpt 2nd "out of patience" email
-
-
 with internet available—
+_ https://www.lesswrong.com/posts/QB9eXzzQWBhq9YuB8/rationalizing-and-sitting-bolt-upright-in-alarm#YQBvyWeKT8eSxPCmz
+_ Ben on "community": http://benjaminrosshoffman.com/on-purpose-alone/
+_ check date of univariate fallacy Tweet and Kelsey Facebook comment
+_ Soares "excited" example
+_ EA Has a Lying Problem
+_ when did I ask Leon about getting easier tasks?
+_ Facebook discussion with Kelsey (from starred email)?: https://www.facebook.com/nd/?julia.galef%2Fposts%2F10104430038893342&comment_id=10104430782029092&aref=1557304488202043&medium=email&mid=5885beb3d8c69G26974e56G5885c34d38f3bGb7&bcode=2.1557308093.AbzpoBOc8mafOpo2G9A&n_m=main%40zackmdavis.net
+_ "had previously written a lot about problems with Effective Altruism": link to all of Ben's posts
 _ Sara Bareilles cover links
 _ "watchful waiting"
 _ Atlantic article on "My Son Wears Dresses" https://archive.is/FJNII
@@ -30,10 +27,13 @@ _ refusing to give a probability (When Not to Use Probabilities? Shut Up and Do
 _ retrieve comment on pseudo-lies post in which he says it's OK for me to comment even though
+
 far editing tier—
+_ tie off Anna's plot arc?
 _ quote one more "Hill of Meaning" Tweet emphasizing fact/policy distinction
 _ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off" rhetoric came from)
 _ context of his claim to not be taking a stand
+_ clarify "Merlin didn't like Vassar" example about Mike's name
 _ rephrase "gamete size" discussion to make it clearer that Yudkowsky's proposal also implicitly requires people to agree about the clustering thing
 _ smoother transition between "deliberately ambiguous" and "was playing dumb"; I'm not being paranoid in attributing political motives to him, because he told us that he's doing it
 _ when I'm too close to verbatim-quoting someone's email, actually use a verbatim quote and put it in quotes
@@ -1247,11 +1247,11 @@ Yudkowsky did [quote-Tweet Colin Wright on the univariate fallacy](https://twitt
 
 "Univariate fallacy" also a concession (which I got to cite in "Schelling Categories")
 
-https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/
+
 "Yes Requires the Possibility of No" 19 May https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
 
-scuffle on LessWrong FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk
+scuffle on LessWrong FAQ 31 May
 
 "epistemic defense" meeting
@@ -1291,6 +1291,19 @@ I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making
 
 ]
 
-[TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
-15 Sep Glen Weyl apology
-]
+
+Scott said he liked "monastic rationalism _vs._ lay rationalism" as a frame for the schism Ben was proposing.
+
+(I wish I could use this line)
+I really, really want to maintain my friendship with Anna despite the fact that we're de facto political enemies now.
+(And similarly with, e.g., Kelsey, who is like a sister-in-law to me (because she's Merlin Blume's third parent, and I'm Merlin's crazy racist uncle).)
+
+
+https://twitter.com/esyudkowsky/status/1164332124712738821
+> I unfortunately have had a policy for over a decade of not putting numbers on a few things, one of which is AGI timelines and one of which is *non-relative* doom probabilities. Among the reasons is that my estimates of those have been extremely unstable.
+
+
+I don't, actually, know how to prevent the world from ending. Probably we were never going to survive. (The cis-human era of Earth-originating intelligent life wasn't going to last forever, and it's hard to exert detailed control over what comes next.) But if we're going to die either way, I think it would be _more dignified_ if Eliezer Yudkowsky were to behave as if he wanted his faithful students to be informed. Since it doesn't look like we're going to get that, I think it's _more dignified_ if his faithful students _know_ that he's not behaving as if he wants us to be informed. And so one of my goals in telling you this long story about how I spent (wasted?) the last six years of my life is to communicate the moral that
+
+and that this is a _problem_ for the future of humanity, to the extent that there is a future of humanity.
+
+Is that a mean thing to say about someone to whom I owe so much? Probably. But he didn't create me to not say mean things. If it helps—as far as _I_ can tell, I'm only doing what he taught me to do in 2007–9: [carve reality at the joints](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), [speak the truth even if your voice trembles](https://www.lesswrong.com/posts/pZSpbxPrftSndTdSf/honesty-beyond-internal-truth), and [make an extraordinary effort](https://www.lesswrong.com/posts/GuEsfTpSDSbXFiseH/make-an-extraordinary-effort) when you've got [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect).