Memento Mori

(Attention conservation notice: personal thoughts on the passing scene; previously, previously)

But always above you
The idea raises its head
What would I do if the Earth fell apart?
Who would I save, or am I not quite brave enough?

—Laura Barrett, "Deception Island Optimists Club"

Six or sixteen or twenty-one or forty-seven months later—depending on when you start counting—I think I'm almost ready to stop grieving and move on with my life. I have two more long blog posts to finish—one for the robot-cult blog restating my thesis about the cognitive function of categorization with somewhat more math this time and then using it to give an account of mimicry, and one here doing some robot-cult liturgical commentary plus necessary autobiographical scaffolding—and then I'll be done.

Not done writing. Done grieving. Done with this impotent rage that expects (normative sense) this world to be something other than what I know enough to expect (positive sense). Maybe I'll start learning math again.

Last week, I "e-ttended" the conference associated with this open-source scene I've been into for a while—although I've been so distracted by the Category War that I've landed exactly one commit in master in the last 13 months. (I think I'm still allowed to say "in master", although "whitelist" is out.)

Traditionally (since 2016), this has been my annual occasion to travel up to Portland (the real Portland, and not a cowardly obfuscation) and stay with friend of the blog Sophia (since 2017), but everything is remote this year because of the pandemic.

Only, if I'm serious about exiting my grief loop, I need to stop being so profoundly alienated by how thoroughly the finest technical minds of my generation are wholly owned by Blue Egregore. I fear the successor ideology—the righteous glee with which they proclaim that everything is political, that anyone with reservations about the Code of Conduct is ipso facto a bigot, how empathy is as important as, if not more important than, technical excellence ...

I can't even think of them as enemies. We're the same people. I was born in 1987 and grew up in California with the same beautiful moral ideal as everyone else. I just—stopped receiving updates a few years back. From their perspective, an unpatched copy of Social Liberalism 2009 must look hopelessly out of date with the Current Year's nexus of ideological coordination, which everyone wants to be corrigible to.

Or maybe I'm not even running unpatched Liberalism 2009? I'm still loyal to the beauti—to my interpretation of the beautiful moral ideal. But I've done a lot of off-curriculum reading—it usually begins with Ayn Rand, but it gets much worse. It ... leaves a mark. It's supposed to leave a mark on the world-model without touching the utility function. But how do you explain that to anyone outside of your robot cult?

One of the remote conference talks was about using our software for computational biology. There was something I wanted to say in the Discord channel, related to how I might want to redirect my energies after I'm done grieving. I typed it out in my Emacs *scratch* buffer, but, after weighing the risks for a few seconds, deleted a parenthetical at the end.

What I posted was:

really excited to hear about applying tech skills to biology; my current insurance dayjob is not terribly inspiring, and I've been wondering if I should put effort into making more of an impact with my career

The parenthetical I deleted was:

(e.g. if someone in the world is working on https://www.gwern.net/Embryo-selection and needs programmers)

It probably wouldn't have mattered either way, with so many messages flying by in the chat. In some ways, Blue Egregore is less like an ideology and more like a regular expression filter: you can get surprisingly far into discussing the actual substance of ideas as long as no one says a bad word like "eugenics".

—if we even have enough time for things like embryo selection to help, if AI research somehow keeps plodding along even as everything else falls apart. The GPT-3 demos have been tickling my neuroticism. Sure, it's "just" a language model, doing nothing more than predicting the next token of human-generated text. But you can do a lot with language. As disgusted as I am with my robot cult as presently constituted, the argument for why you should fear the coming robot apocalypse in which all will be consumed in a cloud of tiny molecular paperclips still looks solid. But I had always thought of it as a long-term thing—this unspoken sense of, okay, we're probably all going to die, but that'll probably be in, like, 2060 or whatever. People freaking out about it coming soon-soon are probably just following the gradient into being a doomsday cult. Now the threat, and the uncertainty around it, feel more real—like maybe we'll all die in 2035 instead of 2060.

At some point, I should write a post on the causes and consequences of the psychological traits of fictional characters not matching the real-life distributions by demographic. The new Star Trek cartoon is not very good, but I'm obligated to enjoy it anyway out of brand loyalty. One of the main characters, Ens. Beckett Mariner, is brash and boisterous and dominant: friendly, but in a way that makes it clear that she's on top. If you've seen Rick and Morty, her relationship with Ens. Brad Boimler has the Rick and Morty dynamic, with Mariner as Rick. (Series creator Mike McMahan actually worked on Rick and Morty, so it likely is the same dynamic, not just superficially, but generated by the same algorithm in McMahan's head.)

Overall, I'm left with this uncanny feeling that Mariner is ... not drawn from the (straight) female distribution?—like she's a jockish teenage boy StyleGANed into a cute mulatto woman's body. So much so that, given the Federation's established proficiency with cosmetic surgery, I'm almost forced to formulate the headcanon that she's an AGP trans woman. (The name "Beckett" doesn't help, either. Maybe I should expand this theory into a full post and try to pick up some readers from /r/DaystromInstitute, but maybe that would just get me banned.)

I wish I knew in more detail what my brain thinks it's picking up on here. (I could always be wrong.) It's important that I use the word "distribution" everywhere; I'm at least definitely not being one of those statistically illiterate sexists. Most men also don't have that kind or degree of boisterous dominance; my surprise is a matter of ratios in the right tail.
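The tail-ratio point is easy to see with a toy calculation. A minimal sketch, assuming two hypothetical normal distributions for some trait (all parameters here are invented for illustration, not measurements of anything real): even a modest difference in means produces a lopsided ratio far out in the right tail, while most members of both groups remain nowhere near the threshold.

```python
import math

def normal_sf(x, mu, sigma):
    """Survival function P(X > x) for a Normal(mu, sigma) distribution."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

# Hypothetical trait distributions: group B's mean is half a standard
# deviation above group A's. (Numbers chosen purely for illustration.)
mu_a, mu_b, sigma = 0.0, 0.5, 1.0
threshold = 2.5  # "very high" on the trait, in group-A standard deviations

p_a = normal_sf(threshold, mu_a, sigma)  # about 0.006
p_b = normal_sf(threshold, mu_b, sigma)  # about 0.023

print(f"P(A > {threshold}) = {p_a:.5f}")
print(f"P(B > {threshold}) = {p_b:.5f}")
print(f"tail ratio B:A = {p_b / p_a:.1f}")  # about 3.7
```

So a mean shift too small to notice in casual interaction (most individuals in both groups fall below the threshold) still makes a top-of-the-tail individual several times more likely to come from one distribution than the other—which is exactly the kind of base-rate evidence a Bayesian surprise is made of.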

I wish there were some way I could explain to all my people still under the Egregore's control what should be common knowledge too obvious to mention—that Bayesian surprise is not moral disapproval. Beckett Mariner deserves to exist. (And, incidentally, I deserve the chance to be her.) But I think the way you realistically get starships and full-body StyleGAN—and survive—is going to require an uncompromising focus on the kind of technical excellence that can explain in mathematical detail what black-box abstractions like "politics" and "empathy" are even supposed to mean—an excellence that doesn't fit past the regex filter.

But I don't expect to live to get the chance.
