From ed530f107e1e6c22524262bc0e44692a855333c9 Mon Sep 17 00:00:00 2001 From: "M. Taylor Saotome-Westlake" Date: Tue, 28 Feb 2023 12:26:00 -0800 Subject: [PATCH] check in --- ...mments-on-the-conspiracies-of-dath-ilan.md | 3 +- .../if-clarity-seems-like-death-to-them.md | 4 +-- content/drafts/standing-under-the-same-sky.md | 16 ++++----- notes/memoir-sections.md | 33 +++++++++---------- notes/memoir_wordcounts.csv | 5 ++- notes/post_ideas.txt | 3 +- notes/tech_tasks.txt | 1 - 7 files changed, 33 insertions(+), 32 deletions(-) diff --git a/content/drafts/comments-on-the-conspiracies-of-dath-ilan.md b/content/drafts/comments-on-the-conspiracies-of-dath-ilan.md index 4f02ebb..cb2a17b 100644 --- a/content/drafts/comments-on-the-conspiracies-of-dath-ilan.md +++ b/content/drafts/comments-on-the-conspiracies-of-dath-ilan.md @@ -9,12 +9,11 @@ Status: draft (I was tempted to tag that as "epistemic status: low-confidence speculation", but that's _frequentist_ thinking—as if "Jews and gentiles are equally sneaky" were a "null hypothesis" that could only be rejected by data that would be sufficiently unlikely assuming that the null was true. Ha ha, that would be _crazy!_ Obviously, I should have a _prior_ on the effect size difference between the Jew and gentile sneakiness distributions, that can be updated as sneakiness data comes in. I think the mean of my prior distribution is at, like, _d_ ≈ 0.1? So it's not "low confidence"; it's "low confidence of the effect size being large enough to be of much practical significance".) - For context on why I have no sense of humor about this, on Earth (which _actually exists_, unlike dath ilan), when someone says "it's not lying, because no one _expected_ me to tell the truth in that situation", what's usually going on (as Zvi Mowshowitz explains: ) is that conspirators benefit from deceiving outsiders, and the claim that "everyone knows" is them lying to _themselves_ about the fact that they're lying. (If _you_ got hurt by not knowing, well, it's not like anyone got hurt, because if you didn't know, then you weren't anyone.) -It's very striking to me that one of the corrupt executives in _Moral Mazes_ uses very similar language to the narrator of the Merrin thread: "We lie all the time, but if everyone knows that we're lying, is a lie really a lie?" + Okay, but if it were _actually true_ that everyone knew, what would be _the function_ of saying the false thing? On dath ilan (if not in Earth boardrooms), I suppose the answer is "Because it's fun"? Okay, but what is the function of your brain giving out a "fun" reward in this context? It seems like at _some_ point, there has to be the expectation of _some_ cognitive system (although possibly not an entire "person") taking the signals literally.
diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md index dd4ba92..eceb349 100644 --- a/content/drafts/if-clarity-seems-like-death-to-them.md +++ b/content/drafts/if-clarity-seems-like-death-to-them.md @@ -29,7 +29,7 @@ I believed that there _was_ a real problem, but didn't feel like I had a good gr Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in Vernor Vinge's _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it was that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives rather than acting on coherent beliefs about the world—using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game. -When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate had privately clarified that the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified." +When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate [had privately clarified that](https://twitter.com/jessi_cata/status/1462454555925434375) the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified." This seemed to me like the sort of thing where a particularly principled (naïve?) person might say, "That's _lying for political reasons!_ That's _contrary to the moral law!_" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works, and things could not possibly be otherwise." 
@@ -464,7 +464,7 @@ I appreciated the gesture of getting real data, but I was deeply unimpressed wit Briefly, based on eyeballing the survey data, Alexander proposes "if you identify as a gender, and you're attracted to that gender, it's a natural leap to be attracted to yourself being that gender" as a "very boring" theory, but on my worldview, a hypothesis that puts "gay people (cis and trans)" in the antecedent is _not_ boring and actually takes on a big complexity penalty: I just don't think the group of gay men _and_ lesbians _and_ straight males with female gender identities _and_ straight females with male gender identities have much in common with each other, except sociologically (being "queer"), and by being human. -(I do like the hypernym _autogenderphilia_.) +(I do like the [hypernym](https://en.wikipedia.org/wiki/Hyponymy_and_hypernymy) _autogenderphilia_.) ------- diff --git a/content/drafts/standing-under-the-same-sky.md b/content/drafts/standing-under-the-same-sky.md index 771d987..dc00f1d 100644 --- a/content/drafts/standing-under-the-same-sky.md +++ b/content/drafts/standing-under-the-same-sky.md @@ -184,7 +184,7 @@ The visit on the seventeenth went fine. I hung out, talked, played with the kids At dinner, there was a moment when Koios bit into a lemon and made a funny face, to which a bunch of the grown-ups said "Awww!" A few moments later, he went for the lemon again. Alicorn speculated that Koios had noticed that the grown-ups found it cute the first time, and the grown-ups were chastened. "Aww, baby, we love you even if you don't bite the lemon." -It was very striking to me how, in the case of the baby biting a lemon, Alicorn _immediately_ formulated the hypothesis that what-the-grownups-thought-was-cute was affecting the baby's behavior, and everyone _immediately just got it_. I was tempted to say something caustic about how no one seemed to think a similar mechanism could have accounted for some of the older child's verbal behavior the previous year, but I kept silent; that was clearly outside the pervue of my double-dog promise. +It was very striking to me how, in the case of the baby biting a lemon, Alicorn _immediately_ formulated the hypothesis that what-the-grownups-thought-was-cute was affecting the baby's behavior, and everyone _immediately just got it_. I was tempted to say something caustic about how no one seemed to think a similar mechanism could have accounted for some of the older child's verbal behavior the previous year, but I kept silent; that was clearly outside the purview of my double-dog promise. There was another moment when Mike made a remark about how weekends are socially constructed. I had a lot of genuinely on-topic cached witty philosophy banter about [how the social construction of concepts works](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), that would have been completely innocuous if anyone _else_ had said it, but I kept silent because I wasn't sure if it was within my double-dog margin of error if _I_ said it. @@ -400,7 +400,7 @@ It was nice—an opportunity to talk to someone who I otherwise wouldn't get to What do I mean by "someone like her"? Definitely not race _per se_. Rather ... non-nerds?—normies. I know how to talk to _the kinds of women I meet in "rationalist"/EA circles_, and even (very rarely) ask them on a date.[^romantic-poem] That doesn't feel fake, because they're just peers who happen to be female.
(I may have renounced [the ideological psychological sex difference denialism of my youth](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism), but I'm not _sexist_.) -What I don't know how to do without the moral indulgence of money changing hands is to ask out a beautiful woman _because she's a beautiful woman_. I won't say it's morally wrong, exactly; it's just not how I was raised. (I mean, I wasn't raised to hire escorts, either, but somehow the transactionality of it puts it outside some of the ethical constraints of ordinary courtship.) +What I don't know how to do without the moral [indulgence](https://en.wikipedia.org/wiki/Indulgence) of money changing hands is to ask out a beautiful woman _because she's a beautiful woman_. I won't say it's morally wrong, exactly; it's just not how I was raised. (I mean, I wasn't raised to hire escorts, either, but somehow the transactionality of it puts it outside some of the ethical constraints of ordinary courtship.) [^romantic-poem]: Though the meter is occasionally a little bit bizarre, I'm very proud of [the poem I wrote in 2016 depicting a woman I was interested in eradicating malaria by wiping out all the mosquitos using CRISPR gene drive](/ancillary/megan-and-the-anopheles-gambiae/), although our one date didn't amount to anything. She later married Scott Alexander. @@ -428,7 +428,7 @@ I visited Ben and met his new girlfriend. Jessica wasn't around. We hadn't talked I got to meet my neoreactionary Twitter mutual, who held the distinction of having been banned from the _Slate Star Codex_ comment section ["for reasons of total personal caprice"](https://archive.md/sRfBj#selection-1633.27-1633.64). I wore my _Quillette_ T-shirt. He offered to buy me a drink. I said I didn't drink, but he insisted that getting drunk was the ritual by which men established trust. I couldn't argue with that, and ended up having a glass and a half of wine while we talked for a couple hours. -So much of my intellectual life for the past five years had been shaped by the fight to keep mere heresies on the shared map, that it was a nice change to talk to an out-and-out _apostate_, with whom none of none of my ingrained defensive motions were necessary. (I just want to restore the moral spirit of 2008 liberalism but with better epistemology; he wants to bring back _coveture_ on the grounds that women of the Eurasian subspecies of humanity haven't exercised mate choice in 10,000 years and aren't being helped by starting now.) There was one moment when I referred to the rationalists as my guys, and instinctively disclaimed, "and of course, we're mostly guys." He pointed out that I didn't need to tell _him_ that. +So much of my intellectual life for the past five years had been shaped by the fight to keep mere heresies on the shared map, that it was a nice change to talk to an out-and-out _apostate_, with whom none of my ingrained defensive motions were necessary. (I just want to restore the moral spirit of 2008 liberalism but with better epistemology; he wants to bring back [_coverture_](https://en.wikipedia.org/wiki/Coverture) on the grounds that women of the Eurasian subspecies of humanity haven't exercised mate choice in 10,000 years and aren't being helped by starting now.) There was one moment when I referred to the rationalists as my guys, and instinctively disclaimed, "and of course, we're mostly guys." He pointed out that I didn't need to tell _him_ that.
------ @@ -468,7 +468,7 @@ Was Michael using me, at various times? I mean, probably. But just as much, _I w I _did_, I admitted, have some specific, nuanced concerns—especially since the December 2020 psychiatric disaster, with some nagging doubts beforehand—about ways in which being an inner-circle "Vassarite" might be bad for someone, but at the moment, I was focused on rebutting Scott's story, which was _silly_. A defense lawyer has an easier job than a rationalist—if the prosecution makes a terrible case, you can just destroy it, without it being your job to worry about whether your client is separately guilty of vaguely similar crimes that the incompetent prosecution can't prove. -When Scott expressed concern about the group-yelling behavior that [Ziz had described in a blog comment](https://sinceriously.fyi/punching-evil/#comment-2345) and [Yudkowsky had described on Twitter](https://twitter.com/ESYudkowsky/status/1356494768960798720), I clarified that that thing was very different from what it was like to actually be friends with them. The everyone-yelling operation seemed like a new innovation (that I didn't like) that they wield as a psychological weapon only against people who they think are operating in bad faith? In the present conversation with Scott, I had been focusing on rebutting the claim that my February–April 2017 (major) and March 2019 (minor) psych problems were caused by the "Vassarites", because with regard to those _specific_ incidents, the charge was absurd and false. But, well ... my January 2021 (minor) psych problems actually _were_ the result of being on the receiving end of the everyone-yelling thing. I briefly described the December 2020 "Lenore" disaster, and in particular the part where Michael/Jessica/Jack yelled at me. +When Scott expressed concern about the group-yelling behavior that [Ziz had described in a blog comment](https://sinceriously.fyi/punching-evil/#comment-2345) ("They spent 8 hours shouting at me, gaslighting me") and [Yudkowsky had described on Twitter](https://twitter.com/ESYudkowsky/status/1356494768960798720) ("When MichaelV and co. try to run a 'multiple people yelling at you' operation on me, I experience that as 'lol, look at all that pressure' instead _feeling pressured_"), I clarified that that thing was very different from what it was like to actually be friends with them. The everyone-yelling operation seemed like a new innovation (that I didn't like) that they wield as a psychological weapon only against people who they think are operating in bad faith? In the present conversation with Scott, I had been focusing on rebutting the claim that my February–April 2017 (major) and March 2019 (minor) psych problems were caused by the "Vassarites", because with regard to those _specific_ incidents, the charge was absurd and false. But, well ... my January 2021 (minor) psych problems actually _were_ the result of being on the receiving end of the everyone-yelling thing. I briefly described the December 2020 "Lenore" disaster, and in particular the part where Michael/Jessica/Jack yelled at me. Scott said that based on my and others' testimony, he was updating away from Vassar being as involved in psychotic breaks as he thought, but towards thinking Vassar was worse in other ways than he thought. He felt sorry for my bad December 2020/January 2021 experience—so much that he could feel it through the triumphant vindication at getting confirmation that the Vassarites were behaving badly in ways he couldn't previously prove.
@@ -496,13 +496,13 @@ I still had more things to say—a reply to the February 2021 post on pronoun re Leaving a personality cult is hard. As I struggled to write, I noticed that I was wasting a lot of cycles worrying about what he'd think of me, rather than saying the things I needed to say. I knew it was pathetic that my religion was so bottlenecked on _one guy_—particularly since the holy texts themselves (written by that one guy) [explicitly said not to do that](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus)—but unwinding those psychological patterns was still a challenge. -An illustration of the psychological dynamics at play: on an EA Forum post about demandingness objections to longtermism, Yudkowsky [commented that](https://forum.effectivealtruism.org/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism?commentId=Kga3KGx6WAhkNM3qY) he was "broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism [...] as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seem[ed] to be the main alternative." +An illustration of the psychological dynamics at play: on an August 2021 EA Forum post about demandingness objections to longtermism, Yudkowsky [commented that](https://forum.effectivealtruism.org/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism?commentId=Kga3KGx6WAhkNM3qY) he was "broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism [...] as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seem[ed] to be the main alternative." I found the comment reassuring regarding the extent or lack thereof of my own contributions to the great common task—and that's the problem: I found the _comment_ reassuring, not the _argument_. It would make sense to be reassured by the claim (if true) that human psychology is such that I don't realistically have the option of devoting more than 25% of myself to the great common task. It does _not_ make sense to be reassured that _Eliezer Yudkowsky said he's broadly fine with it_. That's just being a personality-cultist. In January 2022, in an attempt to deal with my personality-cultist writing block, I sent him one last email asking if he particularly _cared_ if I published a couple blog posts that said some negative things about him. If he actually _cared_ about potential reputational damage to him from my writing things that I thought I had a legitimate interest in writing about, I would be _willing_ to let him pre-read the drafts before publishing and give him the chance to object to anything he thought was unfair ... but I'd rather agree that that wasn't necessary. I explained the privacy norms that I intended to follow—that I could explain _my_ actions, but had to Glomarize about the content of any private conversations that may or may not have occurred. -It had taken me a while (with apologies for my atrocious sample-efficiency), but I was finally ready to give up on him; I thought the efficient outcome was that I should just tell my Whole Dumb Story on my blog and never bother him again. Since he probably _didn't_ particularly care (because it's not AGI alignment and therefore unimportant) and it would be psychologically easier on me if I knew he diidn't hold it against me, could I please have his advance blessing to just write and publish what I was thinking so I can get it all out of my system and move on with my life? 
+It had taken me a while (with apologies for my atrocious [sample efficiency](https://ai.stackexchange.com/a/5247)), but I was finally ready to give up on him; I thought the efficient outcome was that I should just tell my Whole Dumb Story on my blog and never bother him again. Since he probably _didn't_ particularly care (because it's not AGI alignment and therefore unimportant) and it would be psychologically easier on me if I knew he didn't hold it against me, could I please have his advance blessing to just write and publish what I was thinking so I can get it all out of my system and move on with my life? If it helped—as far as _I_ could tell, I was only doing what _he_ taught me to do in 2007–2009: [carve reality at the joints](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), [speak the truth even if your voice trembles](https://www.lesswrong.com/posts/pZSpbxPrftSndTdSf/honesty-beyond-internal-truth), and [make an extraordinary effort](https://www.lesswrong.com/posts/GuEsfTpSDSbXFiseH/make-an-extraordinary-effort) when you've got [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect) (Subject: "blessing to speak freely, and privacy norms?"). @@ -596,7 +596,7 @@ In the same story, Merrin is dressed up as a member of a fictional alien species It's in-character for Merrin to go along with it, because she's a pushover. My question is, why is it okay that Exception Handling has a Fake Conspiracies section (!), any more than it would have been if FTX or Enron explicitly had a Fake Accounting department? -(Is it because dath ilan are the designated good guys? Well, so was FTX.) +(Is it because dath ilan are the [designated good guys](https://tvtropes.org/pmwiki/pmwiki.php/Main/DesignatedHero)? Well, [so was FTX](https://forum.effectivealtruism.org/posts/sdjcH7KAxgB328RAb/ftx-ea-fellowships).) As another notable example of dath ilan hiding information for the alleged greater good, in Golarion, Keltham discovers that he's a sexual sadist, and deduces that Civilization has deliberately prevented him from realizing this, because there aren't enough corresponding masochists to go around in dath ilan. Having concepts for "sadism" and "masochism" as variations in human psychology would make sadists like Keltham sad about the desirable sexual experiences they'll never get to have, so Civilization arranges for them to _not be exposed to knowledge that would make them sad, because it would make them sad_ (!!). @@ -639,7 +639,7 @@ Someone else said: > dath ilan is essentially a paradise world. In a paradise world, people have the slack to make microoptimisations like that, to allow themselves Noble Lies and not fear for what could be hiding in the gaps. Telling the truth is a heuristic for this world where Noble Lies are often less Noble than expected and trust is harder to come by. -I said that I thought people were missing this idea that the reason "truth is better than lies; knowledge is better than ignorance" is such a well-performing injunction in the real world (despite the fact that there's no law of physics preventing lies and ignorance from having beneficial consequences), is because it protects against unknown unknowns. Of course an author who wants to portray an ignorance-maintaining conspiracy as being for the greater good, can assert by authorial fiat whatever details are needed to make it all turn out for the greater good, but _that's not how anything works in real life_.
+I said that I thought people were missing this idea that the reason "truth is better than lies; knowledge is better than ignorance" is such a well-performing [injunction](https://www.lesswrong.com/posts/dWTEtgBfFaz6vjwQf/ethical-injunctions) in the real world (despite the fact that there's no law of physics preventing lies and ignorance from having beneficial consequences), is because [it protects against unknown unknowns](https://www.lesswrong.com/posts/E7CKXxtGKPmdM9ZRc/of-lies-and-black-swan-blowups). Of course an author who wants to portray an ignorance-maintaining conspiracy as being for the greater good, can assert by authorial fiat whatever details are needed to make it all turn out for the greater good, but _that's not how anything works in real life_. I started a new thread to complain about the attitude I was seeing (Subject: "Noble Secrets; Or, Conflict Theory of Optimization on Shared Maps"). When fiction in this world, _where I live_, glorifies Noble Lies, that's a cultural force optimizing for making shared maps less accurate, I explained. As someone trying to make shared maps _more_ accurate, this force was hostile to me and mine. I understood that "secrets" and "lies" are not the same thing, but if you're a consequentialist thinking in terms of what kinds of optimization pressures are being applied to shared maps, [it's the same issue](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist): I'm trying to steer _towards_ states of the world where people know things, and the Keepers of Noble Secrets are trying to steer _away_ from states of the world where people know things. That's a conflict. I was happy to accept Pareto-improving deals to make the conflict less destructive, but I wasn't going to pretend the pro-ignorance forces were my friends just because they self-identified as "rationalists" or "EA"s. I was willing to accept secrets around nuclear or biological weapons, or AGI, on "better ignorant than dead" grounds, but the "protect sadists from being sad" thing wasn't a threat to life; it was _just_ coddling people who can't handle reality, which made _my_ life worse. diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 77510bd..d8b0287 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -6,7 +6,7 @@ marked TODO blocks— ✓ last email and not bothering him [pt. 6] ✓ the Death With Dignity era [pt. 6] ✓ New York [pt. 6] -_ scuffle on "Yes Requires the Possibility" [pt. 4] +✓ scuffle on "Yes Requires the Possibility" [pt. 4] _ reaction to Ziz [pt. 4] _ "Unnatural Categories Are Optimized for Deception" [pt. 4] _ confronting Olivia [pt. 2] @@ -48,21 +48,8 @@ _ the story of my Feb./Apr. 2017 recent madness [pt. 2] it was actually "wander onto the AGI mailing list wanting to build a really big semantic net" (https://www.lesswrong.com/posts/9HGR5qatMGoz4GhKj/above-average-ai-scientists) With internet available— -_ coveture -_ find an anti-Asian racist joke for a change -_ List of Lethalities -_ new version of "not coming out" also archived? 
-_ screenshot my last Facebook comment -_ atrocious sample-efficiency -_ date of longtermism EA forum comment -_ hypernym -_ Michael Bailey's new AGP in women study -_ what does "pervue" mean -_ archive.is https://twitter.com/KirkegaardEmil/status/1425334398484983813 -_ group yelling operation quotes -_ "Vassarite" coinage _ Nate would later admit that this was a mistake -_ indulgence +_ Michael Bailey's new AGP in women study _ "gene drive" terminology _ double-check "All rates" language _ footnote "said that he wishes he'd never published" @@ -70,7 +57,6 @@ _ hate-warp tag _ replace "Oh man oh jeez" Rick & Morty link _ Nevada bona fides _ Parfit's Hitchhiker -_ Heinlein on "Get the facts!" _ double-check correctness of Keltham-on-paternalism link _ Arbital TDT explanation _ find Sequences cite "if you don't know how your AI works, that's bad" @@ -2209,4 +2195,17 @@ Isaac Asimov wrote about robots in his fiction, and even the problem of alignmen /2017/Jan/from-what-ive-tasted-of-desire/ -] \ No newline at end of file +] + +> Similarly, a rationalist isn't just somebody who respects the Truth. +> All too many people respect the Truth. +> [...] +> A rationalist is somebody who respects the _processes of finding truth_. +https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms + +> Why is school like a boner? +> It’s long and hard unless you're Asian. + +Robert Heinlein +> “What are the facts? Again and again and again – what are the facts? Shun wishful thinking, ignore divine revelation, forget what “the stars foretell,” avoid opinion, care not what the neighbors think, never mind the unguessable “verdict of history” – what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts!” +https://www.goodreads.com/quotes/38764-what-are-the-facts-again-and-again-and-again \ No newline at end of file diff --git a/notes/memoir_wordcounts.csv b/notes/memoir_wordcounts.csv index e6a5bb7..5cb3d93 100644 --- a/notes/memoir_wordcounts.csv +++ b/notes/memoir_wordcounts.csv @@ -309,4 +309,7 @@ 02/22/2023,89283 02/23/2023,89488 02/24/2023,90029 -02/25/2023, \ No newline at end of file +02/25/2023,90434 +02/26/2023,90434 +02/27/2023,90953 +02/28/2023, \ No newline at end of file diff --git a/notes/post_ideas.txt b/notes/post_ideas.txt index f1a3dad..959237e 100644 --- a/notes/post_ideas.txt +++ b/notes/post_ideas.txt @@ -1,9 +1,10 @@ pre-memoir— _ Reply to Scott Alexander on Autogenderphilia +_ Book Review: Charles Murray's Facing Reality (short version) +_ Escort (cut §, needs title)? _ Book Review: Nevada (time permitting) _ Hrunkner Unnerby and the Shallowness of Progress (time permitting) _ I'm Dropping the Pseudonym From This Blog -_ Book Review: Charles Murray's Facing Reality (short version) memoir— _ (pt. 2) Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer diff --git a/notes/tech_tasks.txt b/notes/tech_tasks.txt index 982d314..f43f3c0 100644 --- a/notes/tech_tasks.txt +++ b/notes/tech_tasks.txt @@ -1,4 +1,3 @@ -take "-anchor" out of early anchor links make Markdown native footnote CSS match the plugin-based footnotes undo the Slate Starchive links roll my own pingbacks? (I didn't like the Pelican plugin) http://blog.mlindgren.ca/entry/2015/01/17/how-to-manually-send-a-pingback/ -- 2.17.1