Liability

I'm not a coward, I've just never been tested
I'd like to think that if I was I would pass
Look at the tested and think "there but for the grace go I"
Might be a coward, I'm afraid of what I might find out

—"The Impression That I Get" by The Mighty Mighty Bosstones

We can't change the past. When someone does something wrong, the act of saying "Sorry" doesn't help. Actually feeling sorry doesn't help, either. Saying or feeling sorry can only help as part of a process that decreases the measure of the wrong across the multiverse. We can't change our past, but we can update on its evidence—use the memories and records of it as input to a function that changes who we are in a way that makes us perform better in the future (which is somebody else's past). And we can create timeless incentives: if people know that history (and the court system) has its eyes on them, they might do things differently than they would if they knew no one would ever hold them to account.

The update part is more important than the timeless-incentives part. The first duty is to investigate exactly what happened and why. If you can learn the causal graph, you can compute counterfactuals: if this-and-such detail had been different but everything else had been the same, what would have happened instead? If you can compute that something better would have happened had this-and-such detail been different, then you can make advance plans and take advance precautions to make sure the analogous detail takes a more favorable value in analogous future situations.
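
As a toy illustration (in Python, with an invented scenario), the standard recipe for computing a counterfactual from a causal model goes: abduction (infer the unobserved background details from what actually happened), action (surgically set the detail in question to its counterfactual value), and prediction (re-run the rest of the graph).

def outcome(precaution_taken, bad_luck):
    # Structural equation: disaster strikes exactly when you're both
    # unlucky and unprepared.
    return "disaster" if bad_luck and not precaution_taken else "fine"

# Abduction: no precaution was taken and disaster did happen, so the
# exogenous "luck" variable must have been bad.
inferred_bad_luck = True

# Action and prediction: flip the detail, hold the inferred luck fixed.
print(outcome(precaution_taken=True, bad_luck=inferred_bad_luck))  # -> "fine"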

And, yeah, in addition to making better plans, you can also do incentives (to timelessly influence the past) and restitution (to try to make up for the past): punish the guilty, give them bad reputations, make them pay cash damages to their victims, &c. But you have to get the facts first, so that you can compute what punishments, reputations, and restitution to impose.

You must investigate thoroughly, not only when your actions participated in a disaster, but also when they participated in a near-miss "warning shot." It is not the case that all's well that ends well when you're playing for measure in many worlds. If you were in a situation where disaster had probability 0.5, and disaster didn't happen, that just means this copy of you got lucky.

And just because this copy of you doesn't have blood on her hands, doesn't mean you're innocent.

Wanting a fair trial isn't the same thing as claiming to be innocent. It's wanting an accurate shared account of exactly what you're guilty of.


Crossing the Line

There are lines I've always felt I had to toe
Some were blurry, some unseen
Some I've had to learn to read between
So many boundaries
Far more than you know

"Crossing the Line" (extended lyrics), Rapunzel's Tangled Adventure

Emily Cibelli, Yang Xu, Joseph L. Austerweil, Thomas L. Griffiths, and Terry Regier's "The Sapir–Whorf Hypothesis and Probabilistic Inference: Evidence From the Domain of Color" is a cool paper about how language affects how people remember colors! You would expect the design of the eye and its colorspace to be human-universal (modulo colorblindness and maybe some women with both kinds of green opsin gene), but not all languages have the same set of color words. There are some regularities: all languages have words for light and dark; if they have a third color word, then it's red; if there's a fourth, it'll cover green or yellow—but the details differ, as different languages stumbled onto different conventions. Do the color category conventions in one's native tongue affect how people think about color, in accordance with the famous Sapir–Whorf hypothesis? Maybe—but if so, how??

Cibelli, Xu, et al. discuss an experiment where people are briefly shown a color, and then try to match it on a color wheel, either at the same time, or after a short delay. People aren't just not-perfect at this, but—particularly in the delayed condition—show a non-monotonic pattern of directional bias: colors just on the "blue" side of the green–blue boundary are remembered as being relatively more bluish than they really were, but very similar colors on the "green" side of the boundary are remembered as being relatively more greenish than they really were. (Where what counts as "blue" and "green" was operationalized by asking the same subjects to rate colors on a "not at all" to "perfectly" blue/green scale.)

How to explain this curious pattern of observations? The answer is—Bayesian reasoning! (The answer is always Bayesian reasoning.) Our authors propose a model in which a stimulus is encoded in the brain as both a fine-grained representation of what was actually seen (this-and-such color perception, with some noise/measurement-error), and as a category ("green"). Then a reconstruction of the stimulus that uses both the fine-grained representation and the category will be biased towards the center of the category, with more bias when the fine-grained representation is more uncertain (as in the delayed condition).
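
Here's a minimal sketch of that reconstruction (my made-up numbers, not the paper's fitted parameters): treat the category as a Gaussian prior around a prototype hue, the fine-grained representation as a noisy measurement, and reconstruct with the precision-weighted average of the two.

import random

BOUNDARY = 0.5                             # green-blue boundary on a [0, 1] hue axis
CENTERS = {"green": 0.25, "blue": 0.75}    # hypothetical category prototypes
TAU2 = 0.04                                # variance of hues within a category

def reconstruct(stimulus, noise_sd):
    # Encode: a noisy fine-grained measurement, plus a coarse category.
    category = "green" if stimulus < BOUNDARY else "blue"
    measurement = random.gauss(stimulus, noise_sd)
    # Decode: posterior mean under a Gaussian category prior; the noisier
    # the measurement, the more weight the category center gets.
    sigma2 = noise_sd ** 2
    return (TAU2 * measurement + sigma2 * CENTERS[category]) / (TAU2 + sigma2)

random.seed(0)
for stimulus in (0.45, 0.55):              # just-green and just-blue stimuli
    for noise_sd in (0.05, 0.20):          # "simultaneous" vs. "delayed"
        mean_recon = sum(reconstruct(stimulus, noise_sd)
                         for _ in range(10000)) / 10000
        print(f"stimulus {stimulus:.2f}, noise {noise_sd:.2f}: "
              f"remembered as {mean_recon:.3f}")

The just-green stimulus at 0.45 gets remembered around 0.44 with low noise but around 0.35 with high noise, while the just-blue stimulus at 0.55 drifts toward 0.65: similar inputs on opposite sides of the boundary are pulled apart, and more so with delay.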

The model gains further support from a similar "two-alternative forced-choice" experiment, where people try to tell the difference between the originally-displayed color and a distractor (rather than picking from a color wheel). English speakers are better at distinguishing between an original and distractor on opposite sides of the green–blue boundary. Speakers of Berinmo (spoken in Papua New Guinea) and Himba (spoken in Namibia) don't have the green–blue distinction, but the Berinmo wornol and Himba dumbuburou boundaries fall between what English speakers would call yellow and green. And as the model predicts, Berinmo and Himba speakers respectively do better at distinguishing between original and distractor on opposite sides of the wornol and dumbuburou boundaries!

In addition to superior cross-category discrimination, the model also successfully predicts a within-category bias. Suppose one stimulus, which we'll call A, is a more central example of its category than stimulus B. Then in the two-alternative forced-choice paradigm, it's easier to distinguish A as an original from distractor B than it is to distinguish B as an original from distractor A, because the exceptional case B regresses towards the mean in memory.

Regular readers of The Scintillating But Ultimately Untrue Thought know where I'm going with this! Why do we care about the further question of what "gender" someone is, if we already have fine-grained perceptions of how the person looks and behaves? Because our brains use category-membership as an input into predictions when our perceptions are uncertain.

If categories influence judgement on tasks as simple as remembering colors, then on theoretical grounds, I would expect the effect of gender on perception of people to be much worse (that is, larger), because people are much more complicated than colors. With colors, what you see is basically what there is: if your memories or perception of 500-nanometer wavelength light get rounded off slightly bluewards or greenwards depending on how many color words are in your native language, that's bad compared to what a well-designed AI with access to the pure, unmediated colorspace could perceive, but at least that bias is only acting on the one dimension of color. In contrast, your observations of a particular person are going to be much sparser than everything your brain might want to predict about that person. Under those circumstances, the dominant algorithm might end up eating bias in order to reduce variance by having your priors about what humans are like do a greater share of the work—work that relies on the ways (some blurry, some unseen) that female humans are different from male humans.
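
A back-of-envelope check of that tradeoff (same made-up numbers as the color sketch above): the category-shrunk estimate is biased, but its total error is lower once the raw measurement is noisy enough, because mean squared error decomposes as bias squared plus variance.

TAU2, CENTER, STIMULUS = 0.04, 0.25, 0.45    # hypothetical values, as above

for noise_sd in (0.05, 0.20):                # low noise vs. high noise
    sigma2 = noise_sd ** 2
    w = TAU2 / (TAU2 + sigma2)               # weight on the raw measurement
    bias_squared = ((1 - w) * (CENTER - STIMULUS)) ** 2
    variance = w ** 2 * sigma2
    print(f"noise {noise_sd}: raw MSE {sigma2:.4f}, "
          f"shrunk MSE {bias_squared + variance:.4f}")

The shrunk estimate wins slightly in the low-noise condition and by a factor of two in the high-noise condition.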

Transgender people are in a uniquely epistemically privileged position to observe this process, as the change from not-passing to passing is simultaneously a small one as far as the person themselves is concerned, and a large one as far as how the person is perceived by others. In a couple paragraphs that make me feel sad and jealous (I can't say dysphoric because I don't know what that word means), Julia Serano explains what it's like to cross that line (in Ch. 8, "Dismantling Cissexual Privilege", of Whipping Girl: A Transsexual Woman on Sexism and the Scapegoating of Femininity):

[W]hen I eventually did transition, I chose not to put on a performance—I simply acted, dressed, and spoke the way I always had, the way that felt most comfortable to me. After being on female hormones for a few months, I found that people began to consistently gender me as female despite the fact that I was "doing" my gender the same way I always had. What I found most striking was how other people interpreted my same actions and mannerisms differently based on whether they perceived me as female or male. For example, when ordering drinks at bars, I found that if I looked around the room while waiting for my drink (as I always unconsciously had prior to transitioning), men started hitting on me because they assumed I was signaling my availability (when I was male, the same action was likely to be interpreted simply as me scoping out the room). And in supermarket checkout lines, when the child in the cart ahead of me started smiling and talking to me, I found that I could interact with them without their mother becoming suspicious or fearful (which is what often happened in similar situations where I was perceived as male).

During the first year of my transition, I experienced hundreds of little moments like that, where other people interpreted my words and actions differently based solely on the change in my perceived sex. And it was not merely my behaviors that were interpreted differently, it was my body as well: the way people approached me, spoke to me, the assumptions they made about me, the lack of deference and respect I often received, the way others often sexualized my body. All of these changes occurred without my having to say or do a thing.

Serano goes on to suggest that social gender exists, not in the way individuals perform gender, but in how others perceive it, and that therefore efforts to create a less oppressive world must involve dismantling cisnormative assumptions: "if we truly want to bring an end to all gender-based oppression, then we must begin by taking responsibility for our own perceptions and presumptions[; t]he most radical thing that any of us can do is to stop projecting our beliefs about gender onto other people's behaviors and bodies."

I can see how one might derive that lesson from the described experiences of transitioning, but I think it's ultimately a flawed generalization from a necessarily unrepresentative experience. The ways people treated Serano differently after she transitioned despite Serano being the same person the whole time are not arbitrary: that happened because the fact that Serano looked like a woman prompted people to use mental models trained against the distribution of adult human females. (There might be reasons going back hundreds of millions of years for primate mothers to become suspicious or fearful of males near their children.)

In the same chapter of Whipping Girl, Serano mentions that in her days of identifying as a male crossdresser, she found it easier to pass in suburban areas than in cities, "where people were presumably more aware of the existence of gender-variant people." This also makes tragic Bayesian sense: transitioning to organically be perceived as the other sex is easier to pull off when it's unexpected, because the lower the prior, the less of a likelihood ratio you need in order to reach a given posterior probability.
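
In the odds form of Bayes's theorem, posterior odds equal prior odds times the likelihood ratio, so a toy calculation (with invented numbers) shows how a lower prior on gender-variance means weaker evidence suffices for the same confident read:

TARGET_POSTERIOR_ODDS = 100        # the observer settles on "woman" at 100:1

for place, p_gender_variant in (("city", 0.01), ("suburb", 0.001)):
    prior_odds = (1 - p_gender_variant) / p_gender_variant
    # posterior odds = prior odds * likelihood ratio, so the evidence
    # needed to hit the target is:
    needed_likelihood_ratio = TARGET_POSTERIOR_ODDS / prior_odds
    print(f"{place}: prior odds {prior_odds:.0f}:1, "
          f"likelihood ratio needed {needed_likelihood_ratio:.2f}")

In the suburb, even appearance evidence that points the other way by a factor of ten still leaves the observer at 100:1.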

The change in other agents' behaviors elicited by crossing the line into sending the signals of a different type is so dramatic specifically because it's a rare, off-equilibrium play. Lines between categories are placed in the no man's land between regions of unusually high probability-density in configuration space. If there were much more probability-mass just on either side of a line people are using to make predictions and decisions, then the line wouldn't be there.


Survey Data on Cis and Trans Women Among Haskell Programmers

Stereotypically, computer programming is both a predominantly male profession and the quintessential profession of non-exclusively-androphilic trans women. Stereotypically, these demographic trends are even more pronounced in communities around "niche" or academic technologies (e.g., Haskell), rather than those with more established mainstream use (e.g., JavaScript).

But stereotypes can be wrong! The heuristic processes by which people's brains form stereotypes from experience are riddled with cognitive biases that prevent our mental model of what people are like from matching what people are actually like. Unless you believe a woman is more likely to be a feminist bank teller than a bank teller (which is mathematically impossible), you're best off seeking hard numbers about what people are like rather than relying on mere stereotypes.

Fortunately, sometimes hard numbers are available! Taylor Fausak has been administering an annual State of Haskell survey since 2017, and the 2018, 2019, and 2020 surveys included optional "What is your gender?" and "Do you identify as transgender?" questions. I wrote a script to use these answers from the published CSV response data for the 2018–2020 surveys to tally the number of cis and trans women among survey respondents. (In Python. Sorry.)

import csv

survey_results_filenames = [
    "2018-11-18-2018-state-of-haskell-survey-results.csv",
    "2019-11-16-state-of-haskell-survey-results.csv",
    "2020-11-22-haskell-survey-results.csv",
]

if __name__ == "__main__":
    for results_filename in survey_results_filenames:
        year, _ = results_filename.split("-", 1)
        with open(results_filename) as results_file:
            reader = csv.DictReader(results_file)
            total = 0
            cis_f = 0
            trans_f = 0
            for row in reader:
                # 2018 and 2019 CSV header has the full question, but
                # 2020 uses sXqY format
                gender_answer = (
                    row.get("What is your gender?") or row.get("s7q2")
                )
                transwer = (
                    row.get("Do you identify as transgender?") or
                    row.get("s7q3")
                )
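                # Skip respondents who left either question blank.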
                if not (gender_answer and transwer):
                    continue

                total += 1
                if gender_answer == "Female":
                    if transwer == "No":
                        cis_f += 1
                    elif transwer == "Yes":
                        trans_f += 1

            print(
                "{}: total: {}, "
                "cis-♀: {} ({:.2f}%), trans-♀: {} ({:.2f}%)".format(
                    year, total,
                    cis_f, 100*cis_f/total,
                    trans_f, 100*trans_f/total,
                )
            )

It prints this tally:

2018: total: 1108, cis-♀: 26 (2.35%), trans-♀: 19 (1.71%)
2019: total: 1131, cis-♀: 16 (1.41%), trans-♀: 16 (1.41%)
2020: total: 1192, cis-♀: 12 (1.01%), trans-♀: 21 (1.76%)

In this particular case, it looks like the stereotypes are true: only about 3% of Haskell programmers (who took the survey and answered both questions) are women, and they're about equally likely to be cis or trans. (There were more cis women in 2018, and more trans women in 2020, but the sample size is too small to infer a trend.) In contrast, the ratio of cis women to trans women in the general population is probably more like 170:1.[1]

(This post has been edited to only count responses that answered both questions; see Spencer's criticism in the comments.)


Notes

  1. A 2016 report by the Williams Institute at the University of California at Los Angeles estimated the trans share of the United States population at 0.58%, and (1−0.0058)/0.0058 ≈ 171.4.


The Feeling Is Mutual

She is clearly a villain—but there is such a thing as a sympathetic villain, and it's not as if our sympathy is a finite resource. It seems like she's hurting herself most of all, and it's just because of the brain poison she was fed [...] I can imagine how I might have turned out the same way if I had been born a few years earlier and read the wrong things in the wrong order.

/r/SneerClub reader's commentary on the present author

"I can easily imagine being a villain, in a nearby possible world in which my analogue read different books in a different order," is—or should be—a deeply unsettling thought.

In all philosophical strictness, a physicalist universe such as our own isn't going to have some objective morality that all agents are compelled to recognize, but even if there is necessarily some element of subjectivity in that we value sentient life rather than tiling the universe with diamonds, we usually expect morality to at least not be completely arbitrary: we want to argue that a villain is in the wrong because of reasons, rather than simply observing that she has her values, and we have ours, and we label ours "good" and hers "evil" because we're us, even though she places those labels the other way around because she's her.

If good and evil aren't arbitrary, but our understanding of good and evil depends on which books we read in what order, and which books we read in what order does seem like an arbitrary historical contingency, then how do we know our sequence of books led us to actually being in the right, when we would have predictably thought otherwise had we encountered the villain's books instead? How do we break the symmetry?—if the villain is at all smart, she should be asking herself the same question.

And that's how I break the symmetry: by acknowledging it when my counterparts don't. I don't think I have fundamentally different values from those whom I happen to be fighting. I think I happen to know some decision-relevant facts and philosophy that they don't, and I can trace back the causal chain of what I think I know and how I think I know it. They see me as complicit with their oppressors, and mine; I see them as not understanding what I'm trying to do.

I'm trying to construct a map that reflects the territory. If this should entail some risk of self-fulfilling prophecies—if some corner of reality is all twisted up such that any attempt to describe that reality would thereby change it (for the map is part of the territory)—then I want a map of how that process works.

If the one should see this only as service to our oppressors, then I should happily taste the steel of her beautiful weapons, if she could only tell me in sufficient detail how describing me as the villain shortens the length of the message needed to describe her observations. I'm listening.


Interlude XX

"I'm not done with this incredibly creepy self-disclosure blog post about how the robot-cult's sacred text influenced my self-concept in relation to sex and gender, but maybe I should link you to the draft?" said the honest man. "Because it unblocks our model-sync by describing some of the autobiographical details that explain why I find the AGP theory so compelling even if I can't prove it. Plus you get a chance to try negotiating with me in case publishing would be an act of probabilistic timeless genocide against you and yours."

"Genocide?" she asked.

"Because you wouldn't have been allowed to exist if normies believed what I believe. I want you to exist, but—sorry—apparently not more than I want to not participate in cover-ups, times the probability of my whistleblowing successfully reaching normies, times the logical correlation between me and counterfactual whistleblowers far enough into the past to undermine your existence. It ... should be a pretty small number. You won't notice the lost measure."

"No one notices," she spat out contemptuously. "Would you do it if it were larger?"

"Am I risking being counterfactually murdered the moment after I were to say Yes?"

"No California jury would convict me. Does your answer depend on that answer?"

"No."

And then they had sex.


Nixon on Forbidden Hypotheses

I listened with great interest to this segment of a 1971 recording of a conversation between President Richard Nixon and Daniel Patrick Moynihan (starting at the 56 second mark). You really wonder more generally what things powerful people think in private that they can't say in public.

NIXON: I read with great interest your piece from the U.N.—on Herrnstein's piece that I had passed on to you. Let me say first of all, nobody on the staff even knows I read the goddamned article.

MOYNIHAN: Oh, good.

NIXON: And nobody on this staff is going to know anything about it, because I couldn't agree more with you that the Herrnstein stuff and all the rest, this is knowledge—first, no one must think we're thinking about it, and second, if we do find out it's correct, we must never tell anybody.

MOYNIHAN: I'm afraid that's just the case.

NIXON: That's right. Now, let me add a few things, if you can—and you might just make some mental notes about it, or anything you want, so I give you my own views. I've reluctantly concluded, based at least on the evidence presently before me, and I don't base it on any scientific evidence, that what Herrnstein says, and also, what's said earlier by Jensen and so forth, is probably very close to the truth. Now—

MOYNIHAN: I think that's where you'd have to—

NIXON: Now, having said that, then you counter that by saying something that the racists would never agree with, that within groups, there are geniuses—


Two Political Short Stories

(a fictional 2017, as imagined in November 2016)

I cough nervously to break the awkward silence as we wait for the Chinese ICBM to kill us. "Don't blame me," I say, "I voted for Gary Johnson!"

Glares all around.

"Aaaand I live in California, and I'm not eligible for the vote-trading hack because I'm not a Clinton supporter," I clarify.

My insufficiently-requited love continues to glare, contempt gleaming in her eyes. "People have been explaining the idea by talking about Clinton supporters in safe states, but the case for vote-trading doesn't depend on that," she says. "As long as you care more about defeating Trump than supporting Johnson, you should still buy a Clinton vote in a swing state in exchange for your California vote; it doesn't matter what you would have done with your California vote otherwise."

"I don't think that works," I say. "The profitability of a deal to each party—uh, no pun intended—has to be calculated relative to the opportunity cost of not making the deal; my counterparty in a proposed trade should be thought of as buying, not a California candidate-of-their-choice vote, but the difference between a California Johnson vote in the no-deal possible world and a California candidate-of-their-choice vote in the possible world with the deal."

"I agree that agents need to consider counterfactual worlds in order to make decisions, but the counterfactuals are properties of the agents' decision algorithms; you can't treat them like they already exist. Think of it this way: if you wish you could have been a Clinton supporter in the absence of vote-trading, in order so that you could take advantage of vote-trading given that vote-trading exists, you can just ... make the corresponding decisions. All you have to offer is your vote; your swing-state counterparty isn't trying to reward or punish people based on what decision theory they use internally."

"What, and leave the thousand dollars in the second box?" I joke.

Then the missile lands, and we die in a flash of light.


(a fictional 2027, as imagined in November 2020)

My insufficiently-requited love coughs nervously to break the awkward silence as we wait in a crowded holding cell in the Department of Diversity, Inclusion, and Equity. I continue to glare at her, contempt gleaming in my eyes.

She's about to speak, but gets interrupted by the cell's Alexa. "Shu!" it shrills. "2231 Shu J! Room 101!"

Shu appears to be a young Asian wom—no, I can't see her—their—pronoun badge at this angle. That kind of cisnormative perception, detectable through facial-expression microanalysis, is why I'm here.

Well, that and the blog. On reflection, probably mostly the blog.

"No!" screams Shu. "I know I've benefitted from white privilege, but you can't—" A cold-faced young officer enters, a black man. A decade ago, I wouldn't have made a call on his race—possibly white with a slight tan—but in the current year, I can tell by the black trim on his blue HE/HIM badge. Shu puts up a struggle, but is hopelessly outmatched and easily subdued; men are much stronger than wo—the officer is much stronger than Shu. As they leave, I catch sight of Shu's green THEY/THEM badge.

"So," says my insufficiently-requited love. "What do you suppose is in Room 101?"

I stare at her breasts for a moment before I catch myself and avert my eyes. "I read an effortpost about this on themotte.win.onion," I say, eyes closed, head tilted upwards. "They have a transcranial magnetic stimulation machine. Big electromagnet tweaks your brain to eliminate your implicit bias. Really schway technology, actually: they trained a machine-learning model on MRI scans of people who got perfect implicit association and anti-misgendering scores, so they knew exactly what pulses to send to fix all your biased perceptions with no side effects."

"Oh, that sounds wonderful," she says, as I look back towards her to sneak a peek at her breasts again. "I want to be cured of my implicit bias! But," she moves her head to indicate towards the door where the officer had taken Shu, "why—why do you suppose they were so scared?"

"According to the effortpost ... they knew exactly what pulses to send. Until the interpretability team inspected what the model was really doing. Turns out, the algorithm, the perfect algorithm that achieved the desired effects with no downsides—had learned to give different treatments to a.f.a.b. and a.m.a.b. people. That couldn't be allowed, obviously, so they fixed that, but they didn't manage to replicate the side-effect-freeness of the original model. These days, people go in to Room 101, and they come out with a limp, and slurred speech. Some of them start having nightmares. Some of them forget how to read."

"I ... see."

"Do you remember," I say, "the last time I asked you out on a date? I mean, the last time."

"Strangely, yes," she says. "It was seven years ago. I said—I said that if you really cared for me, you'd do more to prevent Donald Trump from being re-elected ..."

"Saotome-Westlake!" shrills the Alexa. "3578 Saotome-Westlake M! Room 101!"

"I blame you!" I yell at my insufficiently-requited love, as the officer drags me away. "I voted for Jo Jorgensen, and I blame you!"


Link: "Can WNBA Players Take Down a U.S. Senator?"

From Julie Kliegman for Sports Illustrated, a story on the conflict between social-justice-activist WNBA players and Atlanta Dream half-owner Sen. Kelly Loeffler (R–Georgia). (Archived.)

The dispute seems to have been sparked by Loeffler's non-support for the Black Lives Matter movement—see also ESPN's coverage from August (archived)—but the Sports Illustrated reporter places special focus on the more recent development of Loeffler's sponsorship of Senate Bill 4649, the Protection of Women and Girls in Sports Act, which, if passed (Kliegman helpfully informs us that it doesn't have a chance), would only allow federal funding of women's sports for programs that define "women" on the basis of developmental sex.

I want to react to the "whether or not [sponsoring the bill] was meant as a direct shot at the WNBA" and "Loeffler's pivot to attacking WNBA players and their interests" narration—but what could I possibly say? What kind of partisan would dare accuse Kliegman of the sin of editorializing when the thirty-third graf of the story clearly acknowledges that the science remains unsettled?

The Atlanta Dream are named after Martin Luther King's famous speech about having one. I had one too—something about a globe—a map? But I can never remember my dreams, nor follow their false, private logic after awakening into the consensus day. I could predict that sooner or later, the WNBA will have its Laurel Hubbard or Andraya Yearwood moment—but why would that make any difference? Would I deny any other woman her night of glory under the arena's five lights?



Memento Mori

(Attention conservation notice: personal thoughts on the passing scene; previously, previously)

But always above you
The idea raises its head
What would I do if the Earth fell apart?
Who would I save, or am I not quite brave enough?

—Laura Barrett, "Deception Island Optimists Club"

Six or sixteen or twenty-one or forty-seven months later—depending on when you start counting—I think I'm almost ready to stop grieving and move on with my life. I have two more long blog posts to finish—one for the robot-cult blog restating my thesis about the cognitive function of categorization with somewhat more math this time and then using it to give an account of mimicry, and one here doing some robot-cult liturgical commentary plus necessary autobiographical scaffolding—and then I'll be done.

Not done writing. Done grieving. Done with this impotent rage that expects (normative sense) this world to be something other than what I know enough to expect (positive sense). Maybe I'll start learning math again.

Last week, I "e-ttended" the conference associated with this open-source scene I've been into for a while—although I've been so distracted by the Category War that I've landed exactly one commit in master in the last 13 months. (I think I'm still allowed to say "in master", although "whitelist" is out.)

Traditionally (since 2016), this has been my annual occasion to travel up to Portland (the real Portland, and not a cowardly obfuscation) and stay with friend of the blog Sophia (since 2017), but everything is remote this year because of the pandemic.

Only, if I'm serious about exiting my grief loop, I need to stop being so profoundly alienated by how thoroughly the finest technical minds of my generation are wholly owned by Blue Egregore. I fear the successor ideology—the righteous glee with which they proclaim that everything is political, that anyone with reservations about the Code of Conduct is ipso facto a bigot, how empathy is as important as, if not more important than, technical excellence ...

I can't even think of them as enemies. We're the same people. I was born in 1987 and grew up in California with the same beautiful moral ideal as everyone else. I just—stopped receiving updates a few years back. From their perspective, an unpatched copy of Social Liberalism 2009 must look hopelessly out-of-date with the Current Year's nexus of ideological coordination, which everyone wants to be corrigible to.

Or maybe I'm not even running unpatched Liberalism 2009? I'm still loyal to the beauti—to my interpretation of the beautiful moral ideal. But I've done a lot of off-curriculum reading—it usually begins with Ayn Rand, but it gets much worse. It ... leaves a mark. It's supposed to leave a mark on the world-model without touching the utility function. But how do you explain that to anyone outside of your robot cult?

One of the remote conference talks was about using our software for computational biology. There was something I wanted to say in the Discord channel, related to how I might want to redirect my energies after I'm done grieving. I typed it out in my Emacs *scratch* buffer, but, after weighing the risks for a few seconds, deleted a parenthetical at the end.

What I posted was:

really excited to hear about applying tech skills to biology; my current insurance dayjob is not terribly inspiring, and I've been wondering if I should put effort into making more of an impact with my career

The parenthetical I deleted was:

(e.g. if someone in the world is working on https://www.gwern.net/Embryo-selection and needs programmers)

It probably wouldn't have mattered either way, with so many messages flying by in the chat. In some ways, Blue Egregore is less like an ideology and more like a regular expression filter: you can get surprisingly far into discussing the actual substance of ideas as long as no one says a bad word like "eugenics".

—if we even have enough time for things like embryo selection to help, if AI research somehow keeps plodding along even as everything else falls apart. The GPT-3 demos have been tickling my neuroticism. Sure, it's "just" a language model, doing nothing more than predicting the next token of human-generated text. But you can do a lot with language. As disgusted as I am with my robot cult as presently constituted, the argument for why you should fear the coming robot apocalypse in which all will be consumed in a cloud of tiny molecular paperclips still looks solid. But I had always thought of it as a long-term thing—this unspoken sense of, okay, we're probably all going to die, but that'll probably be in, like, 2060 or whatever. People freaking out about it coming soon-soon are probably just following the gradient into being a doomsday cult. Now the threat, and the uncertainty around it, feel more real—like maybe we'll all die in 2035 instead of 2060.

At some point, I should write a post on the causes and consequences of the psychological traits of fictional characters not matching the real-life distributions by demographic. The new Star Trek cartoon is not very good, but I'm obligated to enjoy it anyway out of brand loyalty. One of the main characters, Ens. Beckett Mariner, is brash and boisterous and dominant—friendly, but in a way that makes it clear that she's on top. If you've seen Rick and Morty, her relationship with Ens. Brad Boimler has the Rick and Morty dynamic, with Mariner as Rick. (Series creator Mike McMahan actually worked on Rick and Morty, so it likely is the same dynamic, not just superficially, but generated by the same algorithm in McMahan's head.)

Overall, I'm left with this uncanny feeling that Mariner is ... not drawn from the (straight) female distribution?—like she's a jockish teenage boy StyleGANed into a cute mulatto woman's body. So much so that, given the Federation's established proficiency with cosmetic surgery, I'm almost forced to formulate the headcanon that she's an AGP trans woman. (The name "Beckett" doesn't help, either. Maybe I should expand this theory into a full post and try to pick up some readers from /r/DaystromInstitute, but maybe that would just get me banned.)

I wish I knew in more detail what my brain thinks it's picking up on here? (I could always be wrong.) It's important that I use the word distribution everywhere; I'm at least definitely not being one of those statistically-illiterate sexists. Most men also don't have that kind or degree of boisterous dominance; my surprise is a matter of ratios in the right tail.

I wish there was some way I could get a chance to explain to all my people still under the Egregore's control, what should be common knowledge too obvious to mention—that Bayesian surprise is not moral disapproval. Beckett Mariner deserves to exist. (And, incidentally, I deserve the chance to be her.) But I think the way you realistically get starships and full-body StyleGAN—and survive—is going to require an uncompromising focus on the kind of technical excellence that can explain in mathematical detail what black-box abstractions like "politics" and "empathy" are even supposed to mean—an excellence that doesn't fit past the regex filter.

But I don't expect to live to get the chance.


Yarvin on Less Wrong

I listened with interest to this segment (starting at the 3 hour, 23 minutes, 48 seconds mark) from Hyperpodcastism's interview with Curtis Yarvin (loose transcription elides some amount of "um", "you know", "like", "sort of", repetition, false starts, &c.)—

INTERVIEWER: More lightning round takes on what became of Less Wrong?

YARVIN: Were you ever a rationalist? Are you now, or have you ever been a rationalist, I should say?

INTERVIEWER: I was friends with them. They were always encouraging me to jump right in, but I was happy being a peripheral.

YARVIN: I respect those people, but there's a sort of Peter principle to them there. I always wanted to troll them with my Bayesian analysis of Barack Obama's birth certificate. The problem is—adopting that name—no one should ever adopt a self-aggrandizing name for anything that they do. It kills you instantly. It's instantly pretentious. Not only does it not fool anyone else, its main effect is to fool yourself, and so when you compare being a rationalist like Eliezer Yudkowsky to being someone like Socrates, who was like, "My wisdom is knowing what I don't know", I see on one hand wisdom, and I see on the other hand arrogance. And when I choose between wisdom and arrogance, it's obvious which I want to choose. Eliezer always reminded me—I've only met him a few times, never talked much, but—it's funny, the person that Eliezer always reminded me of was—do you know who Sabbatai Zevi was?

INTERVIEWER: The Jewish historical figure, the one who was supposed to be the messiah, and then converted to Islam—

YARVIN: Exactly, exactly. If you look at woodcuts of Sabbatai Zevi, it's Eliezer Yudkowsky; it's the same person. There's a lot of inbreeding going on there. More than that, I know, without a shadow of a doubt, that in the same position, Eliezer Yudkowsky would also convert to Islam.

INTERVIEWER: I could actually see that. That is a fire take, but I can see it.

YARVIN: It's a hot take, I got to say. I hope I don't get in trouble for it. But I know he would. And the thing is, ultimately, the only reason to be a rationalist, or the only reason for there to be such a thing as a rationalist—until you acknowledge that the major distortions in the status quo—which otherwise, if you weren't a rationalist, you would just believe in—until you acknowledge that the major distortions in the status quo have a fundamentally ideological source. Essentially, if you are a rationalist, the only thing that you should care about is defeating communism. Because that is the source of—call it what you will, you can call it wokism if you want—that is the source of that tradition, or not even that tradition, that way of thinking, that sense of being addicted to importance and power, which is what we really mean by this thing—is really the source of all of these biases. So if you're truly a rationalist, dedicated to overcoming bias, basically all the biases that are not ideological in origin are just weird random stupid shit that people believe in for weird random reasons, and then there's this elephant in the room, which is this massively distorting ideology. So unless you're focusing on the elephant, you're basically not being a rationalist at all. It's like Willie Sutton said: why do you rob banks? It's where the money is. If you're a rationalist, why do you have to be a right-winger? That's where the lies are. That's where the important lies are. Not some peasant bullshit about evolution or whatever that's completely unimportant. The lies of power are the lies that matter. And so if you duck this thing, you're being a rationalist who isn't actually rational. At that point, allahu akbar. You haven't actually escaped at all, until you're escaping from the thing you actually need to escape from. So that's basically my take on the rationalists. They're brave, but they're not too brave.

INTERVIEWER: Diet brave.

YARVIN: Diet brave. And conservatives are diet brave in a completely different sort of way. Look, if you really wanted to be just a shill, you'd be just a shill. There is honor in you. There's some purpose, there's some sense of something different there. You're not just a shill. But you're still diet brave.


Interlude XIX

(16 July 2017)

"Tomorrow! No coffee, no Facebook, no food—well, maybe some Soylent because the medication for my birth defect says to take with food, some kind of bioavailability thing—no low-quality internet reading, no TV ... just writing! The demons that haunt us are only powerful to the extent that we refuse to look—show them the true meaning of 'writer's block' by looking 'em in the eye and hitting 'em in the face with a brick!"

"Wait, you have a birth defect?"

"Defective X chromosome. And another thing—"

"Also, how often do these grandiose vows of yours actually come true?"

"Induction isn't real!"


Oceans Rise, Empires Fall

(Attention conservation notice: passing thoughts on the present scene)

Okay, three years lat—three months, three months and one week later, let me say it was too optimistic of me to have suggested that public discourse was working with respect to pandemic response. I was pointing at something real with that post—there is some subgraph of the discourse network of the world that's interested in doing serious cognition to minimize horrible suffocation deaths, but which is definitively not interested in ...

But it's a small subgraph. It is written that every improvement is necessarily a change, but not every change is an improvement. When the center of collective narrative gravity shifts, that could be the homing device of our beautiful weapons converging on the needle of Truth in the haystack of thought, but it could just be the blind thrashing of Fashion.

The Smart Subgraph sounding the alarm might have been an input into authorities calling for a half-measured lockdown ("lockdown")—which was only enough to push R0 slightly below 1. That might have bought us time if we had any live players who could do the test–trace–quarantine scurrying we fantasized about, but it doesn't look like that's a thing.

The lockdown ("lockdown") became a distinguishing tribal value for Blue Egregore, with hick anti-lockdown protesters an object of scorn: "The whiteness of anti-lockdown protests", proclaimed one Vox headline on 25 April, "How ignorance, privilege, and anti-black racism is driving white protesters to risk their lives." The "risking their lives" characterization of that piece's subhead makes an interesting contrast to what similar voices would say about the George Floyd protests little more than a month later: "Public Health Experts Say the Pandemic Is Exactly Why Protests Must Continue" (!!) proclaimed Slate on 2 June.

Is it wrong for me to say "similar voices"? I know that Maia Niguel Hoskin (author of the Vox piece) and Shannon Palus (author of the Slate piece) are different people, and that reporters often have no control over what headline gets pasted on top of their work. And yet somehow some notion of "the tendency of thought exemplified by Vox and Slate"—or, more daringly, Blue Egregore—seems ... well, you know, useful for compressing the length of the message needed to describe my observations?

(You can accuse me of beating a dead horse (family Equidae, order Perissodactyla, class Mammalia, phylum Chordata), but it's therapeutic: unable to make sense of having lost the Category War in my own robot cult—because it in fact makes no sense—the rage and grief must be decomposed into obsessive and repetitive pedantry, like a tic. It's not a crime, but even if it were, you should know to never talk to cops, and it's definitely mental illness, but I can tell you to never talk to psychiatrists.)

I read a lot of things on the internet by many authors—not just officially "published" articles, but comments and Tweets, too. Every comment is unique, but no comment is maximally unique—which is to say, there's mutual information between comments. Seeing one "protests are a Bad public health threat" comment in late April makes me less surprised to see more such by authors I had already tagged as "similar"—and seeing a "protests are Good as a countermeasure to the public health threat of white supremacy" comment in early June makes me less surprised to see more such from similar authors, perhaps even some of the same authors who said protests were a public health threat in April. The stronger the correlation is, the more tempting it is to posit Blue Egregore's existence as an entity that persists over time, albeit probably less cohesively than Maia Niguel Hoskin.

I almost wish—emphasis on almost—that I had something substantive to say about racial oppression and police brutality. I don't doubt that these things are very real and very bad, but they belong to another world from which my privilege protects me, and the intra-elite power struggle in my world that purports to refer to these things mostly serves other functions. Black lives actually matter, and we should literally arrest the cops that literally killed Breonna Taylor, but I'm mostly preoccupied with the side-effects in my world—I fear the successor ideology!

There's been so much news I could write about—I could regale you with takes about origin/master or J. K. Rowling (smart and brave, but deleting the praise for Stephen King was awfully petty), Connecticut doubling down (previously), the Oakland exercise ropes, David Shor, standardized tests, Steve Hsu, the R. A. Fisher lecture, my robot cult going to war with the New York Times, political fundraising on golang.org ... but I have too many competing writing priorities at the moment. More later. Stay subscribed—and stay safe!


Teleology

"I mean, if that explanation actually makes you feel happier, then fine."

"Feeling happier isn't what explanations are for. Explanations are for predicting our observations.

"Emotions, too, are functional: happiness measures whether things in your life are going well or going poorly, but does not constitute things going well, much as a high reading on a thermometer measures heat as 'temperature' without itself being heat.

"If the explanation that predicts your observations makes you unhappy, then the explanation—and the unhappiness—are functioning as designed."


Book Review: Charles Murray's Human Diversity: The Biology of Gender, Race, and Class

This is a pretty good book about things we know about some ways in which people are different from each other, particularly differences in cognitive repertoires (Murray's choice of phrase for shaving nine syllables off "personality, abilities, and social behavior"). In my last book review, I mentioned that I had been thinking about broadening the topic scope of this blog, and this book review seems like an okay place to start!

Honestly, I feel like I already knew most of this stuff?—sex differences in particular are kind of my bag—but many of the details were new to me, and it's nice to have it all bundled together in a paper book with lots of citations that I can chase down later when I'm skeptical or want more details about a specific thing! The main text is littered with pleonastic constructions like "The first author was Jane Thisand-Such" (when discussing the results of a multi-author paper) or "Details are given in the note[n]", which feel clunky to read, but are so much better than the all-too-common alternative of authors not "showing their work".

In the first part of this blog post, I'm going to summarize what I learned from (or thought about, or was reminded of by) Human Diversity, but it would be kind of unhealthy for you to rely too much on tertiary blog-post summaries of secondary semi-grown-up-book literature summaries, so if these topics happen to strike your scientific curiosity, you should probably skip this post and go buy the source material—or maybe even a grown-up textbook!

The second part of this blog post is irrelevant.


Human Diversity is divided into three parts corresponding to the topics in the subtitle! (Plus another part if you want some wrapping-up commentary from Murray.) So the first part is about things we know about some ways in which female people and male people are different from each other!

The first (short) chapter is mostly about explaining Cohen's d effect sizes, which I think are solving a very important problem! When people say "Men are taller than women" you know they don't mean all men are taller than all women (because you know that they know that that's obviously not true), but that just raises the question of what they do mean. Saying they mean it "generally", "on average", or "statistically" doesn't really solve the problem, because that covers everything between-but-not-including "No difference" and "Yes, literally all women and all men". Cohen's d—the difference between two groups' means in terms of their pooled standard deviation—lets us give a quantitative answer to how much men are taller than women: I've seen reports of d ≈ 1.4–1.7 depending on the source, a lot smaller than the sex difference in murder rates (d ≈ 2.5), but much bigger than the difference in verbal skills (d ≈ 0.3, favoring women).
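
For concreteness, here's a minimal implementation of the statistic (mine, not from the book), with made-up height samples in centimeters:

from statistics import mean, stdev

def cohens_d(xs, ys):
    # Difference between group means in units of the pooled standard deviation.
    nx, ny = len(xs), len(ys)
    pooled_variance = (
        ((nx - 1) * stdev(xs) ** 2 + (ny - 1) * stdev(ys) ** 2)
        / (nx + ny - 2)
    )
    return (mean(xs) - mean(ys)) / pooled_variance ** 0.5

print(round(cohens_d([178, 183, 171, 175, 181], [170, 177, 165, 172, 168]), 2))

This prints about 1.55, inside the cited range for height (though with five invented samples per group, don't take the digits seriously).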

Once you have a quantitative effect size, then you can visualize the overlapping distributions, and the question of whether the reality of the data should be summarized in English as a "large difference" or a "small difference" becomes much less interesting, bordering on meaningless.

Murray also addresses the issue of aggregating effect sizes—something I've been meaning to get around to blogging about more exhaustively in this context of group differences (although at least, um, my favorite author on Less Wrong covered it in the purely abstract setting): small effect sizes in any single measurement (whatever "small" means) can amount to a big difference when you're considering many measurements at once. That's how people can distinguish female and male faces at 96% accuracy, even though there's no single measurement (like "eye width" or "nose height") that offers that much predictive power.
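
A sketch of the aggregation arithmetic (assuming independent dimensions with equal within-group variance, which real measurements won't satisfy exactly): the multivariate separation grows as d times the square root of the number of dimensions, and an optimal linear classifier's accuracy is the normal CDF of half that separation.

from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

d_per_dimension = 0.3                    # a "small" univariate difference
for k in (1, 10, 50, 100):
    aggregate_d = d_per_dimension * sqrt(k)
    accuracy = normal_cdf(aggregate_d / 2)
    print(f"{k} dimensions: aggregate d {aggregate_d:.2f}, "
          f"best-guess accuracy {accuracy:.0%}")

A hundred individually "small" differences aggregate to d ≈ 3 and 93% accuracy, in the neighborhood of the face-classification figure.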

Subsequent chapters address sex differences in personality, cognition, interests, and the brain. It turns out that women are more warm, empathetic, æsthetically discerning, and cooperative than men are! They're also more into the Conventional, Artistic, and Social dimensions of the Holland occupational-interests model.

You might think that this is all due to socialization, but then it's hard to explain why the same differences show up in different cultures—and why (counterintuitively) the differences seem larger in richer, more feminist countries. (Although as evolutionary anthropologist William Buckner points out in his social-media criticism of Human Diversity, W.E.I.R.D. samples from different countries aren't capturing the full range of human cultures.) You might think that the "larger differences in rich countries" result is an artifact: maybe people in less-feminist countries implicitly make within-sex comparisons when answering personality questions (e.g., "I'm competitive for a woman") whereas people in more-feminist countries use a less sexist standard of comparison, construing ratings as compared to people-in-general. Murray points out that this explanation still posits the existence of large sex differences in rich countries (while explaining away the unexpected cross-cultural difference-in-differences). Another possibility is that sexual dimorphism in general increases with wealth, including, e.g., in height and blood pressure, not just in personality. (I notice that this is consilient with the view that agriculture was a mistake that suppresses humans' natural tendencies, and that people revert to forager-like lifestyles in many ways as the riches of the industrial revolution let them afford it.)

Women are better at verbal ability and social cognition, whereas men are better at visuospatial skills. The sexes achieve similar levels of overall performance via somewhat different mental "toolkits." Murray devotes a section to a 2007 result of Johnson and Bouchard, who report that general intelligence "masks the dimensions on which [sex differences in mental abilities] lie": people's overall skill in using tools from the metaphorical mental toolbox leads to underestimates of differences in toolkits (that is, nonmetaphorically, the effect sizes of sex differences in specific mental abilities), which you want to statistically correct for. This result in particular is super gratifying to me personally, because I independently had a very similar idea a few months back—it's super validating as an amateur to find that the pros have been thinking along the same track!

The second part of the book is about some ways in which people with different ancestries are different from each other! Obviously, there are no "distinct" "races" (that would be dumb), but it turns out (as found by endeavors such as Li et al. 2008) that when you throw clustering and dimensionality-reduction algorithms at SNP data (single nucleotide polymorphisms, places in the genome where more than one allele has non-negligible frequency), you get groupings that are a pretty good match to classical or self-identified "races".

Ask the computer to assume that an individual's ancestry came from K fictive ancestral populations where K := 2, and it'll infer that sub-Saharan Africans are descended entirely from one, East Asians and some native Americans are descended entirely from the other, and everyone else is an admixture. But if you set K := 3, populations from Europe and the near East (which were construed as admixtures in the K := 2 model) split off as a new inferred population cluster. And so on.

These ancestry groupings are a "construct" in the sense that the groupings aren't "ordained by God"—the algorithm can find K groupings for your choice of K—but where it draws those category boundaries is a function of the data. The construct is doing cognitive work, concisely summarizing statistical regularities in the dataset (which is too large for humans to hold in their heads all at once): a map that reflects a territory.
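
To see the principle in miniature (pure numpy, with invented and deliberately exaggerated allele-frequency drift rather than real SNP data), simulate two populations and ask an unlabeled dimensionality reduction to find the structure:

import numpy as np

rng = np.random.default_rng(0)
n_snps, n_per_pop = 500, 100
base = rng.uniform(0.1, 0.9, n_snps)        # shared ancestral frequencies

def drifted():
    # Each population's frequencies wander away from the ancestral ones.
    return np.clip(base + rng.normal(0, 0.1, n_snps), 0.01, 0.99)

freq_a, freq_b = drifted(), drifted()
genotypes = np.vstack([
    rng.binomial(2, freq_a, (n_per_pop, n_snps)),   # population A individuals
    rng.binomial(2, freq_b, (n_per_pop, n_snps)),   # population B individuals
]).astype(float)

# Top principal component of the centered genotype matrix:
centered = genotypes - genotypes.mean(axis=0)
pc1 = np.linalg.svd(centered, full_matrices=False)[0][:, 0]
print("population A, PC1 mean:", round(pc1[:n_per_pop].mean(), 3))
print("population B, PC1 mean:", round(pc1[n_per_pop:].mean(), 3))

The two groups come out with opposite signs on the first component, with no labels ever provided to the algorithm.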

Twentieth-century theorists like Fisher and Haldane and whatshisface-the-guinea-pig-guy had already figured out a lot about how evolution works (stuff like, a mutation that confers a fitness advantage of s has a probability of about 2s of sweeping to fixation), but a lot of hypotheses about recent human evolution weren't easy to test or even formulate until the genome was sequenced!
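
The 2s rule is easy to check numerically (a branching-process approximation, not a full population simulation): give the mutant Poisson-distributed offspring with mean 1 + s and solve for the probability that its lineage escapes extinction.

from math import exp

def sweep_probability(s, iterations=1000):
    q = 0.5                          # extinction probability, found by iteration
    for _ in range(iterations):
        q = exp((1 + s) * (q - 1))   # q = E[q^offspring] for Poisson(1 + s)
    return 1 - q

for s in (0.01, 0.05, 0.10):
    print(f"s = {s}: escape probability {sweep_probability(s):.4f} vs. 2s = {2 * s}")

The computed probabilities come out a bit under 2s, converging to it as s shrinks.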

You might think that there wasn't enough time in the 2–5k generations since we came forth out of Africa for much human evolution to take place: a new mutation needs to confer an unusually large benefit to sweep to fixation that fast. But what if you didn't actually need any new mutations? Natural selection on polygenic traits can also act on "standing variation": variation already present in the population that was mostly neutral in previous environments, but is fitness-relevant to new selection pressures. The rapid response to selective breeding observed in domesticated plants and animals mostly doesn't depend on new mutations.

Another mechanism of recent human evolution is introgression: early humans interbred with our Neanderthal and Denisovan "cousins", giving our lineage the chance to "steal" all their good alleles! In contrast to new mutations, which usually die out even when they're beneficial (that 2s rule again), alleles "flowing" from another population keep getting reintroduced, giving them more chances to sweep!

Population differences are important when working with genome-wide association studies, because a model "trained on" one population won't perform as well against the "test set" of a different population. Suppose you do a big study and find a bunch of SNPs that correlate with a trait, like schizophrenia or liking opera. The frequencies of those SNPs for two populations from the same continent (like Japanese and Chinese) will hugely correlate (Pearson's r ≈ 0.97), but for more genetically-distant populations from different continents, the correlation will still be big but not huge (like r ≈ 0.8 or whatever).

What do these differences in SNP frequencies mean in practice?? We ... don't know yet. At least some population differences are fairly well-understood: I'd tell you about sickle-cell and lactase persistence, except then I would have to scream. There are some cases where we see populations independently evolve different adaptations that solve the same problem: people living on the plateaus of both Tibet and Peru have both adapted to high altitudes, but the Tibetans did it by breathing faster and the Peruvians did it with more hemoglobin!

Sorry, "the Tibetans did it with ..." is sloppy phrasing on my part; what I actually mean is that the Tibetans who weren't genetically predisposed to breathe faster were more likely to die without leaving children behind. That's how evolution works!

The third part of the book is about genetic influences on class structure! Untangling the true causes of human variation is a really hard technical philosophy problem, but behavioral geneticists have at least gotten started with their simple ACE model. It works like this: first, assume (that is, "pretend") that the genetic variation for a trait is additive (if you have the appropriate SNP, you get more of the trait), rather than exhibiting epistasis (where the effects of different loci interfere with each other) or Mendelian dominance (where the presence of just one copy of an allele (of two) determines the phenotype, and it doesn't matter whether you heterozygously have a different allele as your second version of that gene). Then we pretend that we can partition the variance in phenotypes as the sum of the "additive" genetic variance A, plus the environmental variance "common" within a family C, plus "everything else" (including measurement "error" and the not-shared-within-families "environment") E. Briefly (albeit at the risk of being cliché): nature, nurture, and noise.

Then we can estimate the sizes of the A, C, and E components by studying fraternal and identical twins. (If you hear people talking about "twin studies", this is what they mean—not case studies of identical twins raised apart, which are really cool but don't happen very often.) Both kinds of twins have the same family environment C at the same time (parents, socioeconomic status, schools, &c.), but identical twins are twice as genetically related to each other as fraternal twins, so the extent to which the identical twins are more similar is going to pretty much be because of their genes. "Pretty much" in the sense that while there are ways in which the assumptions of the model aren't quite true (assortative mating makes fraternal twins more similar in the ways their parents were already similar before mating, identical twins might get treated more similarly by "the environment" on account of their appearance), Murray assures us that the experts assure us that the quantitative effects of these deviations are probably pretty small!
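
(The back-of-the-envelope version of this is Falconer's formulas, which I'll gesture at with a minimal sketch; the twin correlations below are illustrative numbers, not estimates from any actual study.)

```python
# Falconer's formulas: back out A, C, E from twin correlations.
# (Illustrative inputs; real studies fit structural equation models instead.)
def falconer(r_mz, r_dz):
    A = 2 * (r_mz - r_dz)  # MZ twins share twice the additive genetics of DZ twins
    C = 2 * r_dz - r_mz    # shared environment: what genes don't explain of r_mz
    E = 1 - r_mz           # whatever doesn't even make MZ twins alike, plus noise
    return A, C, E

print(falconer(r_mz=0.75, r_dz=0.40))  # -> (0.70, 0.05, 0.25)
```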

Anyway, it turns out that the effect of the shared environment C for most outcomes is smaller than most people intuitively expect—actually close to zero for personality and adult intelligence specifically! Sometimes sloppy popularizers summarize this as "parenting doesn't matter" in full generality, but it depends on the trait or outcome you're measuring: for example, the shared environment component gets up to 25% for years-of-schooling ("educational attainment") and 36% for "basic interpersonal interactions." Culture obviously exists, but for underlying psychological traits, the part of the environment that matters is mostly not shared by siblings in the same family—not the part of the environment we know how to control. Thus, a lot of economic and class stratification actually ends up being along genetic lines: the nepotism of family wealth can buy opportunities and second chances, but it doesn't actually live your life for you.

It's important not to overinterpret the heritability results; there are a bunch of standard caveats that go here that everyone's treatment of the topic needs to include! Heritability is about the variance in phenotypes that can be predicted by variance in genes. This is not the same concept as "controlled by genes." To see this, notice that the trait "number of heads" has a heritability of zero because the variance is zero: all living people have exactly one head. (Conjoined twins are two people.) Heritability estimates are also necessarily bound to a particular population in a particular place and time, whose members can face constraints imposed solely by the environment. If you plant half of a batch of seeds in the shade and half in the sun, the variance in the heights of the resulting plants will be associated with variance in genes within each group, but the difference between the groups is solely determined by the sunniness of their environments. Likewise, in a Society with a cruel caste system under which children with red hair are denied internet access, part of the heritability of intellectual achievement is going to come from alleles that code for red hair. Even though (ex hypothesi) redheads have the same inherent intellectual potential as everyone else, the heritability computation can't see into worlds that are not our own, which might have vastly different gene–environment correlations.
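
(The seed-planting thought experiment is easy to simulate with made-up numbers: heritability computed within each plot comes out high, while the between-plot gap is entirely environmental.)

```python
# The sun/shade thought experiment, simulated (made-up numbers): same seed
# batch, so genetic variance is identical in both plots; shade subtracts a
# fixed amount of height that no within-plot statistic can attribute.
import numpy as np

rng = np.random.default_rng(0)
genes = rng.normal(0.0, 1.0, 10_000)            # genetic height potential
noise = rng.normal(0.0, 0.5, 10_000)            # non-genetic jitter
sun = 20 + genes[:5000] + noise[:5000]
shade = 20 + genes[5000:] + noise[5000:] - 5.0  # purely environmental penalty

print(np.corrcoef(genes[:5000], sun)[0, 1] ** 2)    # within-plot "heritability", ~0.8
print(np.corrcoef(genes[5000:], shade)[0, 1] ** 2)  # same ~0.8 in the shade plot
print(sun.mean() - shade.mean())                    # ~5.0: the gap is all environment
```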

(I speculate that heritability calculations being so Society-bound might help make sense of the "small role of the shared environment" results that many still balk at. If the population you're studying goes to public schools—or schools at all, as contrasted to other ways of living and learning—that could suppress a lot of the variance that might otherwise occur in families.)

Old-timey geneticists used to think that they would find a small number of "genes for" something, but it turns out that we live in an omnigenic, pleiotropic world where lots and lots of SNPs each exert a tiny effect on potentially lots and lots of things. I feel like this probably shouldn't have been surprising (genes code for amino-acid sequences, variation in what proteins get made from those amino-acid sequences is going to affect high-level behaviors, but high-level behaviors involve lots of proteins in a super-complicated unpredictable way), but I guess it was.

Murray's penultimate chapter summarizes the state of a debate between a "Robert Plomin school" and an "Eric Turkheimer school" on the impact and import of polygenic scores, where we tally up all the SNPs someone has that are associated with a trait of interest.
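
(Mechanically, "tallying up" is just a weighted sum, a dot product of GWAS effect sizes with allele counts; the numbers in this minimal sketch are made-up stand-ins, not a real score.)

```python
# A polygenic score is a dot product: GWAS effect sizes times allele counts.
# (Made-up numbers; real scores use thousands to millions of SNPs.)
import numpy as np

betas = np.array([0.03, -0.01, 0.02, 0.05])  # per-SNP effect sizes from a GWAS
genotype = np.array([2, 1, 0, 1])            # copies of each scored allele (0/1/2)

print(betas @ genotype)  # 0.03*2 - 0.01*1 + 0.02*0 + 0.05*1 = 0.10
```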

The starry-eyed view epitomized by Plomin says that polygenic scores are super great and everyone and her dog should be excited about them: they're causal in only one direction (the trait can't cause the score) and they let us assess risks in individuals before the predicted outcomes happen. Clinical psychology will enter a new era of "positive genomics", where we understand how to work with the underlying dimensions along which people vary (including positively), rather than focusing on treating "diagnoses" that people allegedly "have".

The curmudgeonly view epitomized by Turkheimer says that Science is about understanding the causal structure of phenomena, and that polygenic scores don't fucking tell us anything. Marital status is heritable in the same way that intelligence is heritable, not because there are "divorce genes" in any meaningful biological sense, but because of a "universal, nonspecific genetic pull on everything": on average, people with more similar genes will make more similar proteins from those similar genes, and therefore end up with more similar phenotypes that interact with the environment in a more similar way, and eventually (the causality flowing "upwards" through many hierarchical levels of organization) this shows up in the divorce statistics of a particular Society in a particular place and time. But this is opaque and banal; the real work of Science is in figuring out what all the particular gene variations actually do.

Notably, Plomin and Turkheimer aren't actually disagreeing here: it's a difference in emphasis rather than facts. Polygenic scores don't explain mechanisms—but might they end up being useful, and used, anyway? Murray's vision of social science is content to make predictions and "explain variance" while remaining ignorant of ultimate causality. (Murray compares polygenic scores to "economic indexes predicting GDP growth", which is not necessarily a reassuring analogy to those who doubt how much of GDP represents real production rather than the "exhaust heat" of zero-sum contests in an environment of manufactured scarcity and artificial demand.) Meanwhile, my cursory understanding (while kicking myself for still not having put in the hours to get much farther into Probabilistic Graphical Models: Principles and Techniques) was that you need to understand causality in order to predict what interventions will have what effects: variance in rain may be statistically "explained by" variance in mud puddles, but you can't make it rain by turning the hose on. Maybe our feeble state of knowledge is why we don't know how to find reliable large-effect environmental interventions that might yet exist in the vastness of the space of possible interventions.

There are also some appendices at the back of the book! Appendix 1 (reproduced from, um, one of Murray's earlier books with a coauthor) explains some basic statistics concepts. Appendix 2 ("Sexual Dimorphism in Humans") goes over the prevalence of intersex conditions and gays, and then—so much for this post broadening the topic scope of this blog—transgender typology! Murray presents the Blanchard–Bailey–Lawrence–Littman view as fact, which I think is basically correct, but a more comprehensive treatment (which I concede may be too much to hope for from a mere Appendix) would have at least mentioned alternative views (Serano? Veale?), if only to explain why they're worth dismissing. (Contrast to the eight pages in the main text explaining why "But, but, epigenetics!" is worth dismissing.) Then Appendix 3 ("Sex Differences in Brain Volumes and Variance") has tables of brain-size data, and an explanation of the greater-male-variance hypothesis. Cool!


... and that's the book review that I would prefer to write. A science review of a science book, for science nerds: the kind of thing that would have no reason to draw your attention if you're not genuinely interested in Mahalanobis D effect sizes or adaptive introgression or Falconer's formulas, for their own sake, or (better) for the sake of compressing the length of the message needed to encode your observations.

But that's not why you're reading this. That's not why Murray wrote the book. That's not even why I'm writing this. We should hope—emphasis on the should—for a discipline of Actual Social Science, whose practitioners strive to report the truth, the whole truth, and nothing but the truth, with the same passionately dispassionate objectivity they might bring to the study of beetles, or algebraic topology—or that an alien superintelligence might bring to the study of humans.

We do not have a discipline of Actual Social Science. Possibly because we're not smart enough to do it, but perhaps more so because we're not smart enough to want to do it. No one has an incentive to lie about the homotopy groups of an n-sphere. If you're asking questions about homotopy groups at all, you almost certainly care about getting the right answer for the right reasons. At most, you might be biased towards believing your own conjectures in the optimistic hope of achieving eternal algebraic-topology fame and glory, like Ruth Lawrence. But nothing about algebraic topology is going to be morally threatening in a way that will leave you fearing that your ideological enemies have seized control of the publishing-houses to plant lies in the textbooks to fuck with your head, or sobbing that a malicious God created the universe as a place of evil.

Okay, maybe that was a bad example; topology in general really is the kind of mindfuck that might be the design of an adversarial agency. (Remind me to tell you about the long line, which is like the line of real numbers, except much longer.)

In any case, as soon as we start to ask questions about humans—and far more so identifiable groups of humans—we end up entering the domain of politics.

We really shouldn't. Everyone should perceive a common interest in true beliefs—maps that reflect the territory, simple theories that predict our observations—because beliefs that make accurate predictions are useful for making good decisions. That's what "beliefs" are for, evolutionarily speaking: my analogues in humanity's environment of evolutionary adaptedness were better off believing that (say) the berries from some bush were good to eat if and only if the berries were actually good to eat. If my analogues unduly-optimistically thought the berries were good when they actually weren't, they'd get sick (and lose fitness), but if they unduly-pessimistically thought the berries were not good when they actually were, they'd miss out on valuable calories (and fitness).

(Okay, this story is actually somewhat complicated by the fact that evolution didn't "figure out" how to build brains that keep track of probability and utility separately: my analogues in the environment of evolutionary adaptedness might also have been better off assuming that a rustling in the bush was a tiger, even if it usually wasn't a tiger, because failing to detect actual tigers was so much more costly (in terms of fitness) than erroneously "detecting" an imaginary tiger. But let this pass.)

The problem is that, while any individual should always want true beliefs for themselves in order to navigate the world, you might want others to have false beliefs in order to trick them into mis-navigating the world in a way that benefits you. If I'm trying to sell you a used car, then—counterintuitively—I might not want you to have accurate beliefs about the car, if that would reduce the sale price or result in no deal. If our analogues in the environment of evolutionary adaptedness regularly faced structurally similar situations, and if it's expensive to maintain two sets of beliefs (the real map for ourselves, and a fake map for our victims), we might end up with a tendency not just to be lying motherfuckers who deceive others, but also to self-deceive in situations where the payoffs (in fitness) of tricking others outweighed those of being clear-sighted ourselves.

That's why we're not smart enough to want a discipline of Actual Social Science. The benefits of having a collective understanding of human behavior—a shared map that reflects the territory that we are—could be enormous, but beliefs about our own qualities, and those of socially-salient groups to which we belong (e.g., sex, race, and class), are exactly those for which we face the largest incentive to deceive and self-deceive. Counterintuitively, I might not want you to have accurate beliefs about the value of my friendship (or the disutility of my animosity), for the same reason that I might not want you to have accurate beliefs about the value of my used car. That makes it a lot harder not just to get the right answer for the right reasons, but also to trust that your fellow so-called "scholars" are trying to get the right answer, rather than trying to sneak self-aggrandizing lies into the shared map in order to fuck you over. You can't just write a friendly science book for oblivious science nerds about "things we know about some ways in which people are different from each other", because almost no one is that oblivious. To write and be understood, you have to do some sort of positioning of how your work fits into the war over the shared map.

Murray positions Human Diversity as a corrective to a "blank slate" orthodoxy that refuses to entertain any possibility of biological influences on psychological group differences. The three parts of the book are pitched not simply as "stuff we know about biologically-mediated group differences" (the oblivious-science-nerd approach that I would prefer), but as a rebuttal to "Gender Is a Social Construct", "Race Is a Social Construct", and "Class Is a Function of Privilege." At the same time, however, Murray is careful to position his work as nonthreatening: "there are no monsters in the closet," he writes, "no dread doors that we must fear opening." He likewise "state[s] explicitly that [he] reject[s] claims that groups of people, be they sexes or races or classes, can be ranked from superior to inferior [or] that differences among groups have any relevance to human worth or dignity."

I think this strategy is sympathetic but ultimately ineffective. Murray is trying to have it both ways: challenging the orthodoxy, while denying the possibility of any unfortunate implications of the orthodoxy being false. It's like ... theistic evolution: satisfactory as long as you don't think about it too hard, but it's not going to convince anyone with a high need for cognition, anyone who knows what it's like to truly believe (as I once believed), who hasn't already broken from the orthodoxy.

Murray concludes, "Above all, nothing we learn will threaten human equality properly understood." I strongly agree with the moral sentiment, the underlying axiology that makes this seem like a good and wise thing to say.

And yet I have been ... trained. Trained to instinctively apply my full powers of analytical rigor and skepticism to even that which is most sacred. Because my true loyalty is to the axiology—to the process underlying my current best guess as to that which is most sacred. If that which was believed to be most sacred turns out to not be entirely coherent ... then we might have some philosophical work to do, to reformulate the sacred moral ideal in a way that's actually coherent.

"Nothing we learn will threaten X properly understood." When you elide the specific assignment X := "human equality", the form of this statement is kind of suspicious, right? Why "properly understood"? It would be weird to say, "Nothing we learn will threaten the homotopy groups of an n-sphere properly understood."

This kind of claim to be non-disprovable seems like the kind of thing you would only invent if you were secretly worried about X being threatened by new discoveries, and wanted to protect your ability to backtrack and re-gerrymander your definition of X to protect what you (think that you) currently believe.

If being an oblivious science nerd isn't an option, half-measures won't suffice. I think we can do better by going meta and analyzing the functions being served by the constraints on our discourse and seeking out clever self-aware strategies for satisfying those functions without lying about everything. We mustn't fear opening the dread meta-door in front of whether there actually are dread doors that we must fear opening.

Why is the blank slate doctrine so compelling, that so many feel the need to protect it at all costs? (As I once felt the need.) It's not ... if you've read this far, I assume you will forgive me—it's not scientifically compelling. If you were studying humans the way an alien superintelligence would, trying to get the right answer for the right reasons (which can include conditional answers: if what humans are like depends on choices about what we teach our children, then there will still be a fact of the matter as to what choices lead to what outcomes), you wouldn't put a whole lot of prior probability on the hypothesis "Both sexes and all ancestry-groupings of humans have the same distribution of psychological predispositions; any observed differences in behavior are solely attributable to differences in their environments." Why would that be true? We know that sexual dimorphism exists. We know that reproductively isolated populations evolve different traits to adapt to their environments, like those birds with differently-shaped beaks that Darwin saw on his boat trip. We could certainly imagine that none of the relevant selection pressures on humans happened to touch the brain—but why? Wouldn't that be kind of a weird coincidence?

If the blank slate doctrine isn't scientifically compelling—it's not something you would invent while trying to build shared maps that reflect the territory—then its appeal must have something to do with some function it plays in conflicts over the shared map, where no one trusts each other to be doing Actual Social Science rather than lying to fuck everyone else over.

And that's where the blank slate doctrine absolutely shines—it's the Schelling point for preventing group conflicts! (A Schelling point is a choice that's salient as a focus for mutual expectations: what I think that you think that I think ... &c. we'll choose.) If you admit that there could be differences between groups, you open up the questions of in what exact traits and of what exact magnitudes, which people have an incentive to lie about to divert resources and power to their group by establishing unfair conventions and then misrepresenting those contingent bargaining equilibria as some "inevitable" natural order.

If you're afraid of purported answers being used as a pretext for oppression, you might hope to make the question un-askable. Can't oppress people on the basis of race if race doesn't exist! Denying the existence of sex is harder—which doesn't stop people from occasionally trying. "I realize I am writing in an LGBT era when some argue that 63 distinct genders have been identified," Murray notes at the beginning of Appendix 2. But this oblique acerbity fails to pass the Ideological Turing Test. The language of has been identified suggests an attempt at scientific taxonomy—a project, which I share with Murray, of fitting categories to describe a preexisting objective reality. But I don't think the people making 63-item typeahead select "Gender" fields for websites are thinking in such terms to begin with. The specific number 63 is ridiculous, and no canonical list of 63 exists; it might as well be, and often is, a fill-in-the-blank free text field. Though the move is insanely evil (where I mean the adjective literally rather than as a generic intensifier—evil in a way that is of or related to insanity), I must acknowledge it is at least good game theory. If you don't trust taxonomists to be acting in good faith—if you think we're trying to bulldoze the territory to fit a preconceived map—then destroying the language that would be used to build oppressive maps is a smart move.

The taboo mostly only applies to psychological trait differences, both because those are a sensitive subject, and because with them it's easier to motivatedly see what you want to see: whereas things like height or skin tone can be directly seen and uncontroversially measured with well-understood physical instruments (like a meterstick or digital photo pixel values), psychological assessments are much more complicated and therefore hard to detach from the eye of the beholder. (If I describe Mary as "warm, compassionate, and agreeable", the words mean something in the sense that they change what experiences you anticipate—if you believed my report, you would be surprised if Mary were to kick your dog and make fun of your nose job—but the things that they mean are a high-level statistical signal in behavior for which we don't have a simple measurement device like a meterstick to appeal to if you and I don't trust each other's character assessments of Mary.)

Notice how the "not allowing sex and race differences in psychological traits to appear on shared maps is the Schelling point for resistance to sex- and race-based oppression" story actually gives us an explanation for why one might reasonably have a sense that there are dread doors that we must not open. Undermining the "everyone is Actually Equal" Schelling point could catalyze a preference cascade—a slide down the slippery slope to the next Schelling point, which might be a lot worse than the status quo on the "amount of rape and genocide" metric, even if it does slightly better on "estimating heritability coefficients." The orthodoxy isn't just being dumb for no reason. In analogy, Galileo and Darwin weren't trying to undermine Christianity—they had much more interesting things to think about—but religious authorities were right to fear heliocentrism and evolution: if the prevailing coordination equilibrium depends on lies, then telling the truth is a threat and it is disloyal. And if the prevailing coordination equilibrium is basically good, then you can see why purported truth-tellers striking at the heart of the faith might be believed to be evil.

Murray opens the parts of the book about sex and race with acknowledgments of the injustice of historical patriarchy ("When the first wave of feminism in the United States got its start [...] women were rebelling not against mere inequality, but against near-total legal subservience to men") and racial oppression ("slavery experienced by Africans in the New World went far beyond legal constraints [...] The freedom granted by emancipation in America was only marginally better in practice and the situation improved only slowly through the first half of the twentieth century"). It feels ... defensive? (To his credit, Murray is generally pretty forthcoming about how the need to write "defensively" shaped the book, as in a sidebar in the introduction that says that he'd prefer to say a lot more about evopsych, but he chose to just focus on empirical findings in order to avoid the charge of telling just-so stories.)

But this kind of defensive half-measure satisfies no one. From the oblivious-science-nerd perspective—the view that agrees with Murray that "everyone should calm down"—you shouldn't need to genuflect to the memory of some historical injustice before you're allowed to talk about Science. But from the perspective that cares about Justice and not just Truth, an insincere gesture or a strategic concession is all the more dangerous insofar as it could function as camouflage for a nefarious hidden agenda. If your work is explicitly aimed at destroying the anti-oppression Schelling-point belief, a few hand-wringing historical interludes and bromides about human equality having no testable implications (!!) aren't going to clear you of the suspicion that you're doing it on purpose—trying to destroy the anti-oppression Schelling point in order to oppress, and not because anything that can be destroyed by the truth, should be.

And sufficient suspicion makes communication nearly impossible. (If you know someone is lying, their words mean nothing, not even as the opposite of the truth.) As far as many of Murray's detractors are concerned, it almost doesn't matter what the text of Human Diversity says, how meticulously researched of a psychology/neuroscience/genetics lit review it is. From their perspective, Murray is "hiding the ball": they're not mad about this book; they're mad about specifically chapters 13 and 14 of a book Murray coauthored twenty-five years ago. (I don't think I'm claiming to be a mind-reader here; the first 20% of The New York Times's review of Human Diversity is pretty explicit and representative.)

In 1994's The Bell Curve: Intelligence and Class Structure in American Life, Murray and coauthor Richard J. Herrnstein argued that a lot of variation in life outcomes is explained by variation in intelligence. Some people think that folk concepts of "intelligence" or being "smart" are ill-defined and therefore not a proper object of scientific study. But that hasn't stopped some psychologists from trying to construct tests purporting to measure an "intelligence quotient" (or IQ for short). It turns out that if you give people a bunch of different mental tests, the results all positively correlate with each other: people who are good at one mental task, like listening to a list of numbers and repeating them backwards ("reverse digit span"), are also good at others, like knowing what words mean ("vocabulary"). There's a lot of fancy linear algebra involved, but basically, you can visualize people's test results as a hyperellipsoid in some high-dimensional space where the dimensions are the different tests. (I rely on this "configuration space" visual metaphor so much for so many things that when I started my secret ("secret") gender blog, it felt right to put it under a .space TLD.) The longest axis of the hyperellipsoid corresponds to the "g factor" of "general" intelligence—the choice of axis that cuts through the most variance in mental abilities.
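
(To make the hyperellipsoid picture concrete, here's a toy simulation of my own, not Herrnstein and Murray's actual procedure: generate a positively-correlated test battery, and read off the longest axis as the first principal component of the correlation matrix.)

```python
# Extracting a "g"-like factor as the longest axis of the score ellipsoid
# (my toy simulation, not the psychometric literature's exact procedure).
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 5_000, 6
g = rng.normal(0.0, 1.0, n_people)         # latent general ability
loadings = rng.uniform(0.5, 0.8, n_tests)  # each test partly reflects g
scores = np.outer(g, loadings) + rng.normal(0.0, 0.7, (n_people, n_tests))

corr = np.corrcoef(scores, rowvar=False)  # all entries positive
eigvals, eigvecs = np.linalg.eigh(corr)   # ascending eigenvalues
print(eigvals[-1] / eigvals.sum())        # share of variance along the longest axis
print(eigvecs[:, -1])                     # every test loads in the same direction
```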

It's important not to overinterpret the g factor as some unitary essence of intelligence rather than the length of a hyperellipsoid. It seems likely that if you gave people a bunch of physical tests, they would positively correlate with each other, such that you could extract a "general factor of athleticism". (It would be really interesting if anyone's actually done this using the same methodology used to construct IQ tests!) But athleticism is going to be a very "coarse" construct for which the tails come apart: for example, world champion 100-meter sprinter Usain Bolt's best time in the 800 meters is reportedly only around 2:10 or 2:07! (For comparison, I ran a 2:08.3 in high school once!)

Anyway, so Murray and Herrnstein talk about this "intelligence" construct, and how it's heritable, and how it predicts income, school success, not being a criminal, &c., and how Society is becoming increasingly stratified by cognitive abilities, as school credentials become the ticket to the new upper class.

This should just be more social-science nerd stuff, the sort of thing that would only draw your attention if, like me, you feel bad about not being smart enough to do algebraic topology and want to console yourself by at least knowing about the Science of not being smart enough to do algebraic topology. The reason everyone and her dog is still mad at Charles Murray a quarter of a century later is Chapter 13, "Ethnic Differences in Cognitive Ability", and Chapter 14, "Ethnic Inequalities in Relation to IQ". So, apparently, different ethnic/"racial" groups have different average scores on IQ tests. Ashkenazi Jews do the best, which is why I sometimes privately joke that the fact that I'm only 85% Ashkenazi (according to 23andMe) explains my low IQ. (I got a 131 on the WISC-III at age 10, but that's pretty dumb compared to some of my robot-cult friends.) East Asians do a little better than Europeans/"whites". And—this is the part that no one is happy about—the difference between U.S. whites and U.S. blacks is about Cohen's d ≈ 1. (If two groups differ by d = 1 on some measurement that's normally distributed within each group, that means that the mean of the group with the lower average measurement is at the 16th percentile of the group with the higher average measurement, or that a uniformly-randomly selected member of the group with the higher average measurement has a probability of about 0.76 of having a higher measurement than a uniformly-randomly selected member of the group with the lower average measurement.)
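
(Those two numbers in the parenthetical follow from the normal distribution; if you have scipy handy, the check is two lines.)

```python
# Checking the d = 1 arithmetic from the parenthetical above.
from scipy.stats import norm

d = 1.0
print(norm.cdf(-d))            # ~0.16: lower group's mean sits at the ~16th percentile
print(norm.cdf(d / 2 ** 0.5))  # ~0.76: P(random higher-group member > random lower-group member)
```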

Given the tendency for people to distort shared maps for political reasons, you can see why this is a hotly contentious line of research. Even if you take the test numbers at face value, racists trying to secure unjust privileges for groups that score well, have an incentive to "play up" group IQ differences in bad faith even when they shouldn't be relevant. As economist Glenn C. Loury points out in The Anatomy of Racial Inequality, cognitive abilities decline with age, and yet we don't see a moral panic about the consequences of an aging workforce, because older people are construed by the white majority as an "us"—our mothers and fathers—rather than an outgroup. Individual differences in intelligence are also presumably less politically threatening because "smart people" as a group aren't construed as a natural political coalition—although Murray's work on cognitive class stratification would seem to suggest this intuition is mistaken.

It's important not to overinterpret the IQ-scores-by-race results; there are a bunch of standard caveats that go here that everyone's treatment of the topic needs to include. Again, just because variance in a trait is statistically associated with variance in genes within a population, does not mean that differences in that trait between populations are caused by genes: remember the illustrations about sun-deprived plants and internet-deprived red-haired children. Group differences in observed tested IQs are entirely compatible with a world in which those differences are entirely due to the environment imposed by an overtly or structurally racist society. Maybe the tests are culturally biased. Maybe people with higher socioeconomic status get more opportunities to develop their intellect, and racism impedes socio-economic mobility. And so on.

The problem is, a lot of the blank-slatey environmentally-caused-differences-only hypotheses for group IQ differences start to look less compelling when you look into the details. "Maybe the tests are biased", for example, isn't an insurmountable defeater to the entire endeavor of IQ testing—it is itself a falsifiable hypothesis, or can become one if you specify what you mean by "bias" in detail. One idea of what it would mean for a test to be biased is if it's partially measuring something other than what it purports to be measuring: if your test measures a combination of "intelligence" and "submission to the hegemonic cultural dictates of the test-maker", then individuals and groups that submit less to your cultural hegemony are going to score worse, and if you market your test as unbiasedly measuring intelligence, then people who believe your marketing copy will be misled into thinking that those who don't submit are dumber than they really are. But if so, and if not all of your individual test questions are equally loaded on intelligence and cultural-hegemony, then the cultural bias should show up in the statistics. If some questions are more "fair" and others are relatively more culture-biased, then you would expect the order of item difficulties to differ by culture: the "item characteristic curve" plotting the probability of getting a biased question "right" as a function of overall test score should differ by culture, with the hegemonic group finding it "easier" and others finding it "harder". Conversely, if the questions that discriminate most between differently-scoring cultural/ethnic/"racial" groups were the same as the questions that discriminate between (say) younger and older children within each group, that would be the kind of statistical clue you would expect to see if the test was unbiased and the group difference was real.
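
(Here's a minimal sketch of that logic in a toy two-parameter logistic item-response model, my illustration with invented parameters: a fair item has one characteristic curve for everyone, while a biased item is effectively "harder" for one group at the same overall ability.)

```python
# Toy item-response sketch of test bias (illustrative parameters):
# P(correct) as a logistic function of ability theta, with discrimination a
# and difficulty b. Bias shows up as group-dependent difficulty at equal theta.
import numpy as np

def icc(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = 0.0                       # two test-takers with equal overall ability
print(icc(theta, a=1.5, b=0.0))   # fair item: one curve for everyone, ~0.50
print(icc(theta, a=1.5, b=-0.5))  # biased item, hegemonic group: "easier", ~0.68
print(icc(theta, a=1.5, b=0.5))   # biased item, other group: "harder", ~0.32
```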

Hypotheses that accept IQ test results as unbiased, but attribute group differences in IQ to the environment, also make statistical predictions that could be falsified. Controlling for parental socioeconomic status only cuts the black–white gap by a third. (And note, on the hereditarian model, some of the correlation between parental SES and child outcomes is due to both being causally downstream of genes.) The mathematical relationship between between-group and within-group heritability means that the conjunction of wholly-environmentally-caused group differences, and the within-group heritability, makes quantitative predictions about how much the environments of the groups differ. Skin color is actually only controlled by a small number of alleles, so if you think Society's discrimination on skin color causes IQ differences, you could maybe design a clever study that measures both overall-ancestry and skin color, and does statistics on what happens when they diverge. And so on.

In mentioning these arguments in passing, I'm not trying to provide a comprehensive lit review on the causality of group IQ differences. (That's someone else's blog.) I'm not (that?) interested in this particular topic, and without having mastered the technical literature, my assessment would be of little value. Rather, I am ... doing some context-setting for the problem I am interested in, of fixing public discourse. The reason we can't have an intellectually-honest public discussion about human biodiversity is because good people want to respect the anti-oppression Schelling point and are afraid of giving ammunition to racists and sexists in the war over the shared map. "Black people are, on average, genetically less intelligent than white people" is the kind of sentence that pretty much only racists would feel good about saying out loud, independently of its actual truth value. In a world where most speech is about manipulating shared maps for political advantage rather than getting the right answer for the right reasons, it is rational to infer that anyone who entertains such hypotheses is either motivated by racial malice, or is at least complicit with it—and that rational expectation isn't easily canceled with a pro forma "But, but, civil discourse" or "But, but, the true meaning of Equality is unfalsifiable" disclaimer.

To speak to those who aren't already oblivious science nerds—or are committed to emulating such, as it is scientifically dubious whether anyone is really that oblivious—you need to put more effort into your excuse for why you're interested in these topics. Here's mine, and it's from the heart, though it's up to the reader to judge for herself how credible I am when I say this—

I don't want to be complicit with hatred or oppression. I want to stay loyal to the underlying egalitarian–individualist axiology that makes the blank slate doctrine sound like a good idea. But I also want to understand reality, to make sense of things. I want a world that's not lying to me. Having to believe false things—or even just not being able to say certain true things when they would otherwise be relevant—exacts a dire cost on our ability to make sense of the world, because you can't just censor a few forbidden hypotheses—you have to censor everything that implies them, and everything that implies the things that imply them: the more adept you are at making logical connections, the more of your mind you need to excise to stay in compliance.

We can't talk about group differences, for fear that anyone arguing that differences exist is just trying to shore up oppression. But ... structural oppression and actual group differences can both exist at the same time. They're not contradicting each other! Like, the fact that men are physically stronger than women (on average, but the effect size is enormous, like d ≈ 2.6 for total muscle mass) is not unrelated to the persistence of patriarchy! (The ability to credibly threaten to physically overpower someone, gives the more powerful party a bargaining advantage, even if the threat is typically unrealized.) That doesn't mean patriarchy is good; to think so would be to commit the naturalistic fallacy of attempting to derive an ought from an is. No one would say that famine and plague are good just because they, too, are subject to scientific explanation. This is pretty obvious, really? But similarly, genetically-mediated differences in cognitive repertoires between ancestral populations are probably going to be part of the explanation for why we see the particular forms of inequality and oppression that we do, just as a brute fact of history devoid of any particular moral significance, like how part of the explanation for why European conquest of the Americas happened earlier and went smoother for the invaders than the colonization of Africa, had to do with the disease burden going the other way (Native Americans were particularly vulnerable to smallpox, but Europeans were particularly vulnerable to malaria).

Again—obviously—is does not imply ought. In deference to the historically well-justified egalitarian fear that such hypotheses will primarily be abused by bad actors to portray their own group as "superior", I suspect it's helpful to dwell on science-fictional scenarios in which the boot of history is on one's own neck, if the boot does not happen to be on one's own neck in real life. If a race of lavender humans from an alternate dimension were to come through a wormhole and invade our Earth and cruelly subjugate your people, you would probably be pretty angry, and maybe join a paramilitary group aimed at overthrowing lavender supremacy and re-instantiating civil rights. The possibility of a partially-biological explanation for why the purple bastards discovered wormhole generators when we didn't (maybe they have d ≈ 1.8 on us in visuospatial skills, enabling their population to be first to "roll" a lucky genius (probably male) who could discover the wormhole field equations), would not make the conquest somehow justified.

I don't know how to build a better world, but it seems like there are quite general grounds on which we should expect that it would be helpful to be able to talk about social problems in the language of cause and effect, with the austere objectivity of an engineering discipline. If you want to build a bridge (that will actually stay up), you need to study the "the careful textbooks [that] measure [...] the load, the shock, the pressure [that] material can bear." If you want to build a just Society (that will actually stay up), you need a discipline of Actual Social Science that can publish textbooks, and to get that, you need the ability to talk about basic facts about human existence and make simple logical and statistical inferences between them.

And no one can do it! ("Well for us, if even we, even for a moment, can get free our heart, and have our lips unchained—for that which seals them hath been deep-ordained!") Individual scientists can get results in their respective narrow disciplines; Charles Murray can just barely summarize the science to a semi-popular audience without coming off as too overtly evil to modern egalitarian moral sensibilities. (At least, the smarter egalitarians? Or, maybe I'm just old.) But at least a couple aspects of reality are even worse (with respect to naïve, non-renormalized egalitarian moral sensibilities) than the ball-hiders like Murray can admit, having already blown their entire Overton budget explaining the relevant empirical findings.

Murray approvingly quotes Steven Pinker (a fellow ball-hider, though Pinker is better at it): "Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group."

A fine sentiment. I emphatically agree with the underlying moral intuition that makes "Individuals should not be judged by group membership" sound like a correct moral principle—one cries out at the monstrous injustice of the individual being oppressed on the basis of mere stereotypes of what other people who look like them might statistically be like.

But can I take this literally as the exact statement of a moral principle? Technically?—no! That's actually not how epistemology works! The proposed principle derives its moral force from the case of complete information: if you know for a fact that I have moral property P, then it would be monstrously unjust to treat me differently just because other people who look like me mostly don't have moral property P. But in the real world, we often—usually—don't have complete information about people, or even about ourselves.

Bayes's theorem (just a few inferential steps away from the definition of conditional probability itself, barely worthy of being called a "theorem") states that for hypothesis H and evidence E, P(H|E) = P(E|H)P(H)/P(E). This is the fundamental equation that governs all thought. When you think you see a tree, that's really just your brain computing a high value for the probability of your sensory experiences given the hypothesis that there is a tree, multiplied by the prior probability that there is a tree, as a fraction of all the possible worlds that could be generating your sensory experiences.

What goes for seeing trees, goes the same for "treating individuals as individuals": the process of getting to know someone as an individual, involves your brain exploiting the statistical relationships between what you observe, and what you're trying to learn about. If you see someone wearing an Emacs tee-shirt, you're going to assume that they probably use Emacs, and asking them about their dot-emacs file is going to seem like a better casual conversation-starter compared to the base rate of people wearing non-Emacs shirts. Not with certainty—maybe they just found the shirt in a thrift store and thought it looked cool—but the shirt shifts the probabilities implied by your decisionmaking.
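
(As a worked example, with every number invented for illustration:)

```python
# Bayes's theorem on the Emacs-shirt example (all numbers invented):
# P(user | shirt) = P(shirt | user) * P(user) / P(shirt).
p_user = 0.005               # prior: base rate of Emacs users
p_shirt_if_user = 0.10       # users who own and wear the shirt
p_shirt_if_nonuser = 0.0005  # thrift-store accidents

p_shirt = p_shirt_if_user * p_user + p_shirt_if_nonuser * (1 - p_user)
print(p_shirt_if_user * p_user / p_shirt)  # ~0.50: a big jump from 0.005, but not certainty
```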

The problem that Bayesian reasoning poses for naïve egalitarian moral intuitions, is that, as far as I can tell, there's no philosophically principled reason for "probabilistic update about someone's psychology on the evidence that they're wearing an Emacs shirt" to be treated fundamentally differently from "probabilistic update about someone's psychology on the evidence that she's female". These are of course different questions, but to a Bayesian reasoner (an inhuman mathematical abstraction for getting the right answer and nothing else), they're the same kind of question: the correct update to make is an empirical matter that depends on the actual distribution of psychological traits among Emacs-shirt-wearers and among women. (In the possible world where most people wear tee-shirts from the thrift store that looked cool without knowing what they mean, the "Emacs shirt → Emacs user" inference would usually be wrong.) But to a naïve egalitarian, judging someone on their expressed affinity for Emacs is good, but judging someone on their sex is bad and wrong.

I used to be a naïve egalitarian. I was very passionate about it. I was eighteen years old. I am—again—still fond of the moral sentiment, and eager to renormalize it into something that makes sense. (Some egalitarian anxieties do translate perfectly well into the Bayesian setting, as I'll explain in a moment.) But the abject horror I felt at eighteen at the mere suggestion of making generalizations about people just—doesn't make sense. It's not even that it shouldn't be practiced (it's not that my heart wasn't in the right place), but that it can't be practiced—that the people who think they're practicing it are just confused about how their own minds work.

Give people photographs of various women and men and ask them to judge how tall the people in the photos are, as Nelson et al. 1990 did, and people's guesses reflect the photo-subjects' actual heights, but also (to a lesser degree) their sex. Unless you expect people to be perfect at assessing height from photographs (when they don't know how far away the cameraperson was standing, aren't "trigonometrically omniscient", &c.), this behavior is just correct: men really are taller than women on average, so P(true-height|apparent-height, sex) ≠ P(true-height|apparent-height) because of regression to the mean (and women and men regress to different means). But this all happens subconsciously: in the same study, when the authors tried height-matching the photographs (for every photo of a woman of a given height, there was another photo in the set of a man of the same height) and telling the participants about the height-matching and offering a cash reward to the best height-judge, more than half of the stereotyping effect remained. It would seem that people can't consciously readjust their learned priors in reaction to verbal instructions pertaining to an artificial context.
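
(A minimal Gaussian sketch of the regress-to-different-means point, with invented numbers in inches: the optimal estimate shrinks the noisy visual observation toward the group mean, so equal apparent heights rationally yield unequal guesses.)

```python
# Normal-normal shrinkage: the best guess of true height pools the noisy
# observation with the group prior (invented numbers, heights in inches).
def estimate(observed, prior_mean, prior_var=9.0, noise_var=4.0):
    w = prior_var / (prior_var + noise_var)  # how much to trust the observation
    return w * observed + (1 - w) * prior_mean

apparent = 67.0
print(estimate(apparent, prior_mean=64.0))  # photo of a woman -> ~66.1
print(estimate(apparent, prior_mean=70.0))  # photo of a man   -> ~67.9
```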

Once you understand at a technical level that probabilistic reasoning about demographic features is both epistemically justified, and implicitly implemented as part of the way your brain processes information anyway, then a moral theory that forbids this starts to look less compelling? Of course, statistical discrimination on demographic features is only epistemically justified to exactly the extent that it helps get the right answer. Renormalized-egalitarians can still be properly outraged about the monstrous tragedies where I have moral property P but I can't prove it to you, so you instead guess incorrectly that I don't just because other people who look like me mostly don't, and you don't have any better information to go on—or tragedies in which a feedback loop between predictions and social norms creates or amplifies group differences that wouldn't exist under some other social equilibrium.

Nelson et al. also found that when the people in the photographs were pictured sitting down, then judgments of height depended much more on sex than when the photo-subjects were standing. This too makes Bayesian sense: if it's harder to tell how tall an individual is when they're sitting down, you rely more on your demographic prior. In order to reduce injustice to people who are outliers for their group, one could argue that there is a moral imperative to seek out interventions to get more fine-grained information about individuals, so that we don't need to rely on the coarse, vague information embodied in demographic stereotypes. The moral spirit of egalitarian–individualism mostly survives in our efforts to hug the query and get specific information with which to discriminate amongst individuals. (And discriminate—to distinguish, to make distinctions—is the correct word.) If you care about someone's height, it is better to precisely measure it using a meterstick than to just look at them standing up, and it is better to look at them standing up than to look at them sitting down. If you care about someone's skills as a potential employee, it is better to give them a work-sample test that assesses the specific skills that you're interested in, than it is to rely on a general IQ test, and it's far better to use an IQ test than to use mere stereotypes. If our means of measuring individuals aren't reliable or cheap enough, such that we still end up using prior information from immutable demographic categories, that's a problem of grave moral seriousness—but in light of the mathematical laws governing reasoning under uncertainty, it's a problem that realistically needs to be solved with better tests and better signals, not by pretending not to have a prior.

This could take the form of finer-grained stereotypes. If someone says of me, "Taylor Saotome-Westlake? Oh, he's a man, you know what they're like," I would be offended—I mean, I would if I still believed that getting offended ever helps with anything. (It never helps.) I'm not like typical men, and I don't want to be confused with them. But if someone says, "Taylor Saotome-Westlake? Oh, he's one of those IQ 130, mid-to-low Conscientiousness and Agreeableness, high Openness, left-libertarian American Jewish atheist autogynephilic male computer programmers; you know what they're like," my response is to nod and say, "Yeah, pretty much." I'm not exactly like the others, but I don't mind being confused with them.

The other place where I think Murray is hiding the ball (even from himself) is in the section on "reconstructing a moral vocabulary for discussing human differences." (I agree that this is a very important project!) Murray writes—

I think at the root [of the reluctance to discuss immutable human differences] is the new upper class's conflation of intellectual ability and the professions it enables with human worth. Few admit it, of course. But the evolving zeitgeist of the new upper class has led to a misbegotten hierarchy whereby being a surgeon is better in some sense of human worth than being an insurance salesman, being an executive in a high-tech firm is better than being a housewife, and a neighborhood of people with advanced degrees is better than a neighborhood of high-school graduates. To put it so baldly makes it obvious how senseless it is. There shouldn't be any relationship between these things and human worth.

I take strong issue with Murray's specific examples here—as an incredibly bitter autodidact, I care not at all for formal school degrees, and as my fellow nobody pseudonymous blogger Harold Lee points out, many of those stuck in the technology rat race aspire to escape to a more domestic- and community-focused life not unlike that of a housewife. But after quibbling with the specific illustrations, I think I'm just going to bite the bullet here?

Yes, intellectual ability is a component of human worth! Maybe that's putting it baldly, but I think the alternative is obviously senseless. The fact that I have the ability and motivation to (for example, among many other things I do) write this cool science–philosophy blog about my delusional paraphilia where I do things like summarize and critique the new Charles Murray book, is a big part of what makes my life valuable—both to me, and to the people who interact with me. If I were to catch COVID-19 next month and lose 40 IQ points due to oxygen-deprivation-induced brain damage and not be able to write blog posts like this one anymore, that would be extremely terrible for me—it would make my life less-worth-living. (And this kind of judgment is reflected in health and economic policymaking in the form of quality-adjusted life years.) And my friends who love me, love me not as an irreplaceably-unique-but-otherwise-featureless atom of person-ness, but because my specific array of cognitive repertoires makes me a specific person who provides a specific kind of company. There can't be such a thing as literally unconditional love, because to love someone in particular, implicitly imposes a condition: you're only committed to love those configurations of matter that constitute an implementation of your beloved, rather than someone or something else.

Murray continues—

The conflation of intellectual ability with human worth helps to explain the new upper class's insistence that inequalities of intellectual ability must be the product of environmental disadvantage. Many people with high IQs really do feel sorry for people with low IQs. If the environment is to blame, then those unfortunates can be helped, and that makes people who want to help them feel good. If genes are to blame, it makes people who want to help them feel bad. People prefer feeling good to feeling bad, so they engage in confirmation bias when it comes to the evidence about the causes of human differences.

I agree with Murray that this kind of psychology explains a lot of the resistance to hereditarian explanations. But as long as we're accusing people of motivated reasoning, I think Murray's solution is engaging in a similar kind of denial, but just putting it in a different place. The idea that people are unequal in ways that matter is legitimately too horrifying to contemplate, so liberals deny the inequality, and conservatives deny that it matters. But I think if you really understand the fact–value distinction and see that the naturalistic fallacy is, in fact, a fallacy (and not even a tempting one), that the progress of humankind has consisted of using our wits to impose our will on an indifferent universe, then the very concept of "too horrifying to contemplate" becomes a grave error. The map is not the territory: contemplating doesn't make things worse; not-contemplating that which is already there can't make things better—and can blind you to opportunities to make things better.

Recently, Richard Dawkins spurred a lot of criticism on social media for pointing out that selective breeding would work on humans (that is, succeed at increasing the value of the traits selected for in subsequent generations), for the same reasons it works on domesticated nonhuman animals—while stressing, of course, that he deplores the idea: it's just that our moral commitments can't constrain the facts. Intellectuals with the reading-comprehension skill, including Murray, leapt to defend Dawkins and concur on both points—that eugenics would work, and that it would obviously be terribly immoral. And yet no one seems to bother explaining or arguing why it would be immoral. Yes, obviously murdering and sterilizing people is bad. But if the human race is to continue and people are going to have children anyway, those children are going to be born with some distribution of genotypes. There are probably going to be human decisions that do not involve murdering and sterilizing people that would affect that distribution—perhaps involving selection of in vitro fertilized embryos. If the distribution of genotypes were to change in a way that made the next generation grow up happier, and healthier, and smarter, that would be good for those children, and it wouldn't hurt anyone else! Life is not a zero-sum game! This is pretty obvious, really? But if no one except nobody pseudonymous bloggers can even say it, how are we to start the work?

The author of the Xenosystems blog mischievously posits five stages of knowledge of human biodiversity (in analogy to the famous, albeit reportedly lacking in empirical support, five-stage Kübler-Ross model of grief), culminating in Stage 4: Depression ("Who could possibly have imagined that reality was so evil?") and Stage 5: Acceptance ("Blank slate liberalism really has been a mountain of dishonest garbage, hasn't it? Guess it's time for it to die ...").

I think I got stuck halfway between Stage 4 and 5? It can simultaneously be the case that reality is evil, and that blank slate liberalism contains a mountain of dishonest garbage. That doesn't mean the whole thing is garbage. You can't brainwash a human with random bits; they need to be specific bits with something good in them. I would still be with the program, except that the current coordination equilibrium is really not working out for me. So it is with respect for the good works enabled by the anti-oppression Schelling point belief, that I set my sights on reorganizing at the other Schelling point of just tell the goddamned truth—not in spite of the consequences, but because of the consequences of what good people can do when we're fully informed. Each of us in her own way.


Peering Through Reverent Fingers

Any evolutionary advantage must come from a feature affecting our behavior. Thus, there is no evolutionary advantage to simply having a belief about our identity. Self-identity can matter and could have mattered only if it affects behavior, in which case it is really a process of self-identification. Moreover, it is not a matter of affirming a self-identity that we possess. For a belief that needs to be affirmed is not a belief at all.

—Joseph M. Whitmeyer, "How Evolutionary Psychology Can Contribute to Group Process Research", in The Oxford Handbook of Evolution, Biology, and Society

As an atheist, I'm not really a fan of religions, but I'll give them one thing: at least their packages of delusions are stable. The experience of losing your religion is a painful one, but once you've overcome the trauma of finding out that everything you believed was a lie, the process of figuring out how to live among the still-faithful now that you are no longer one of them is something you only have to do once; it's not like everyone will have adopted a new Jesus Two while you were off having your crisis of faith. And the first Jesus was invisible anyway; you won't be able to pray sincerely, and that does set you apart from your—the—community, but your day-to-day life will be mostly unaffected.

The progressive Zeitgeist does not even offer this respite. Getting over psychological-sex-differences denialism was painful, but after many years of study and meditation, I think I've finally come to accept the horrible truth: women and men really are psychologically different. This sets me apart from the community, but not very much. The original lie wasn't invisible exactly, but it never caused too many problems, because it's easy to doublethink around. Most of the functional use of sex categories in Society is handled by seamless subconscious reference-classing, without anyone needing to consciously, verbally reason about sex differences: no one actually makes the same predictions or decisions about women and men—that would be crazy—but since you don't have direct introspective access to what computations your brain used to cough up a prediction or decision, you can just assume that you're treating everyone equally, and only rarely does the course of ordinary events force you to acknowledge or even notice the lie.

But in the decade I had my back turned reading science books, my former quasi-religion somehow came up with new lies: now, it's not enough to believe that women and men are mentally the same, you're also supposed to accept that those categories refer to some atomic mental property that can only be known by verbal self-report. But this actually breaks the mechanism that made the first lie so harmless: the shear stress of your prediction-and-decision classifier disagreeing with the punishment signals that the intelligent social web is using to train your pronoun-selection classifier throws the previously-backgrounded existence of the former into sharp relief. You really are expected to believe in Jesus Two! And it's far more ridiculous than the first one! I'm never going to get over this!


The Reverse Murray Rule

In the notes to his Real Education, Charles Murray proposes a convention for third-person singular pronouns where the sex of the referent is unknown or irrelevant—

As always, I adhere to the Murray Rule for dealing with third-person singular pronouns, which prescribes using the gender of the author or principal author as the default, and I hope in vain that others will adopt it.

The Murray Rule is a fine illustration of the use of conventions to break the symmetry between arbitrary choices: instead of having to flip a coin every time you want to talk about a hypothetical human in the third person, you pick a convention once, and let the convention pick the pronouns—and furthermore, Murray is proposing, you can use the sex of the author as an "input" to achieve determinism without the traditional sexism of the universal generic masculine or its distaff counterpart favored by some modern academics.

But even this still leaves us with one information-theoretic bit of freedom—one binary choice not yet determined, between the Murray Rule (female authors use the generic feminine; male authors use generic masculine) and the Reverse Murray Rule (female authors use generic masculine; male authors use generic feminine).
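
To make that leftover bit concrete, here is a toy sketch of my own (nothing like it appears in Murray's notes): the two conventions are the same function up to one boolean flag.

```python
# Toy sketch (mine, not Murray's): the two rules as one function.
# The single remaining degree of freedom is one bit: match or mirror.
def generic_pronoun(author_sex: str, mirror: bool = False) -> str:
    """Pick the generic third-person pronoun from the author's sex."""
    if mirror:  # Reverse Murray Rule
        return {"female": "he", "male": "she"}[author_sex]
    return {"female": "she", "male": "he"}[author_sex]  # Murray Rule

assert generic_pronoun("male") == "he"                # Murray Rule
assert generic_pronoun("male", mirror=True) == "she"  # Reverse Murray Rule
```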

I'll concede that the Murray Rule is a more natural Schelling point on account of grouping "like with like": the generic hypothetical person's gender matching the author's seems to require less of a particular rationale than the other way around. But I much prefer the Reverse Murray Rule on æsthetic grounds. The implicit assumption that authors regard their own sex as the normal, default case feels ... chauvinistic. And kind of gay. Women and men were made for each other. It is wrong to regard the opposite sex as some irrelevant alien, rather than an alternate self. That's why I tend to reach for the generic feminine when I'm being formal enough to eschew singular they, and the real reason I write "women and men" in that order. I like to imagine my hypothetical female analogue doing the opposite—or rather, doing the same thing—using male-first orderings and the generic masculine on the same verbalized rationale and analogous motivations in her own history ... even though she doesn't, can't exist.


Don't Read the Comments??

Historically, The Scintillating But Ultimately Untrue Thought has not provided a comment section. There were two reasons for this.

First, technical limitations, downstream of technical æsthetics. There are standard out-of-the-box blogging hosts—your WordPress, your Medium, &c.—that are easy for anyone to use, at the cost of taking control away from the user, locking access to your soul away on someone else's server, or, at best, obfuscated in some database behind opaque gobs of PHP. My real-name blog (started in December 2011, when I was much less technically adept) is still running WordPress, and I'm sad about it. In contrast, this blog is produced using the Pelican static site generator from Markdown text files, versioned in Git—simple tools I understand, producing flat HTML files that Nginx can serve. When I don't like something about my theme or my plugins, I'm not at the mercy of the developers; I can just fix it myself. The lack of a database meant forgoing a comment section, but that seemed like a small loss, because—

Second, internet comment sections are garbage and I don't want to be bothered to moderate one. I thought, people who are actually interested in replying to my writing can write a longform response on their own blog (please?—I'll link back), or on Reddit when I share to /r/TheMotte; and people who want to talk to me can find my email address (checked less often than my real-name email; I regret any delays) on the About page.
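
To be concrete about the first reason: the whole pipeline is small enough to sketch. Something like the following (all values hypothetical, not my actual configuration) is the entire "database":

```python
# pelicanconf.py: a minimal sketch of the sort of configuration I mean
# (hypothetical values). Build with `pelican content -o output`; Nginx
# then serves the flat HTML out of output/ with no moving parts.
AUTHOR = "A. Pseudonymous Blogger"
SITENAME = "A Static Blog"
SITEURL = ""
PATH = "content"                  # Markdown sources, versioned in Git
TIMEZONE = "America/Los_Angeles"
DEFAULT_LANG = "en"
```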

So I thought, and yet—first, the same do-it-myself æsthetics that make static-site generators attractive make me cautiously open to the idea of a comment section that I can configure and host myself, rather than being held commercially hostage by the likes of Disqus. Second, perhaps some small consolation for never being a popular writer (I'm not prolific enough, and I occupy too weird a niche) is that maybe my readership is exclusive and discerning enough for the comments section to not be garbage.

So, as an experiment—no promises or warranties—I've set up an instance of the Isso commenting engine to host a comments section at the bottom of each individual post page.
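
For anyone tempted by the same experiment, the client side of Isso amounts to a script tag and a placeholder element, roughly like this (a sketch from memory with a hypothetical proxy path; trust the Isso documentation over me):

```html
<!-- Hypothetical sketch: /isso/ is wherever the Isso server is proxied -->
<script data-isso="/isso/" src="/isso/js/embed.min.js"></script>
<section id="isso-thread"></section>  <!-- comments render here -->
```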

Don't make me regret this.


Relative Gratitude and the Great Plague of 2020

In the depths of despair over not just having lost the Category War, but having lost it harder and at higher cost than I can even yet say (having not yet applied for clearance from the victors as to how much is my story to tell), I'm actually pretty impressed with how competently my filter bubble is handling the pandemic. When the stakes of getting the right answer for the right reasons, in public, are measured in hundreds of thousands of horrible suffocation deaths, you can see the discourse usefully move forward on the timescale of days.

In the simplest epidemiology models, the main parameter of interest is called R0, the basic reproduction number: the number of further infections caused by each new infection (at the start of the epidemic, when no one is yet immune). R0 isn't just a property of the disease itself, but also of the population's behavior. If R0 is above 1, the ranks of the infected grow exponentially; if R0 is less than 1, the outbreak peters out.
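
The threshold at 1 is easy to see with a toy calculation (illustrative numbers only): each "generation" of infections is R0 times the previous one, so case counts form a geometric series.

```python
# Toy branching-process sketch of the R0 = 1 threshold (illustrative
# numbers; a real epidemic also depletes the pool of susceptibles).
def case_generations(r0: float, initial: float = 100.0, steps: int = 10) -> list:
    cases = [initial]
    for _ in range(steps):
        cases.append(cases[-1] * r0)  # each infection causes r0 more
    return cases

print(round(case_generations(1.5)[-1]))  # ~5767: exponential growth
print(round(case_generations(0.7)[-1]))  # ~3: the outbreak peters out
```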

So first the narrative was "flatten the curve": until a vaccine is developed, we can't stop the virus, but with social distancing, frequent handwashing, not touching your face, &c., we can at least lower R0 to slow down the course of the epidemic, making the graph of current infections at time t flatter and wider: if fewer people are sick at the same time, then the hospital system won't be overloaded, and fewer people will die.

The thing is, the various "flatten the curve" propaganda charts illustrating the idea didn't label their axes, and depicted the "hospital system capacity" horizontal line above, or at most slightly below, the peak of the flattened curve, suggesting a scenario where mitigation efforts that merely slowed down the spread of the virus through the population would be enough to avoid disaster. Turns out, when you run the numbers, that's too optimistic: at the peak of a merely mitigated epidemic, there will be many times more people who need intensive care than there are ICU beds for them to get it. These cold equations suggest a more ambitious goal of "containment": lock everything down as hard as we need to in order to get R0 below 1, and scurry to get enough testing, contact-tracing, and quarantining infrastructure in place to support gradually restarting the economy without restarting the outbreak.
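
You can reproduce the gist of "run the numbers" with a toy SIR model (my parameters are illustrative, not calibrated to COVID-19): even a substantially mitigated R0 leaves peak simultaneous infections far above any plausible capacity line, while containment with R0 below 1 never produces a peak at all.

```python
# Toy SIR model (illustrative parameters, not calibrated to COVID-19):
# peak simultaneous infections under different R0, versus a hypothetical
# hospital-capacity line like the one in the charts.
def peak_infected(r0: float, gamma: float = 0.1, i0: float = 1e-4,
                  dt: float = 0.1, days: int = 500) -> float:
    beta = r0 * gamma                # transmission rate implied by R0
    s, i, peak = 1.0 - i0, i0, i0    # fractions of the population
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s, i = s - new_infections, i + new_infections - recoveries
        peak = max(peak, i)
    return peak

capacity = 0.003  # hypothetical: beds for 0.3% of the population
for r0 in (2.5, 1.4, 0.9):  # unmitigated, mitigated, contained
    print(r0, round(peak_infected(r0), 4), peak_infected(r0) > capacity)
# The first two peaks (~0.23 and ~0.05) dwarf the capacity line;
# only R0 < 1 stays under it.
```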

The discussion goes on (is it feasible to calibrate the response that finely?—what of the economic cost? &c.)—and that's what impresses me; that's what I'm grateful for. The discussion goes on. Sure, there's lots of the usual innumeracy, cognitive biases, and sheer wishful thinking, but when there's no strategic advantage to "playing dumb"—there's no pro-virus coalition that might gain an advantage if we admit out loud that they said something true—you can see people actually engage each other with the full beauty of our weapons, and, sometimes, change their minds in response to new information. The "flatten the curve" argument isn't "false" exactly (quantitatively slowing down the outbreak will, in fact, quantitatively make the overload on hospitals less bad), but the pretty charts portraying the flattened curve safely below the hospital capacity line were substantively misleading, and it was possible for someone to spend a bounded and small amount of effort to explain, "Hey, this is substantively misleading because ...", and be heard, to the extent that the people who made one of the most popular "flatten the curve" charts published an updated version reflecting the new argument.

This level of performance is ... not to be taken for granted. Take it from me.


Cloud Vision

Google reportedly sent out an email recently to their Cloud Vision API customers, notifying them that the service will stop returning "woman" or "man" labels for people in photos. Being charitable (as one does), I can think of reasons why I might defend or support such a decision. Detecting the sex of humans in images is going to be significantly less reliable than just picking out the humans in the photo, and the way the machines do sex-classification is going to depend on their training corpus, which might contain embedded cultural prejudices that Google might not want to inadvertently use their technological hegemony to reproduce and amplify. Just using a "person" label dodges the whole problem.
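
Concretely, the change is in the strings that come back, not the shape of the call. With the Python client library, label detection looks roughly like this (a sketch of the documented interface, not code from Google's email):

```python
# Sketch of a label-detection request with the google-cloud-vision client;
# after the announced change, the returned labels would say "Person"
# where they might previously have said "Woman" or "Man".
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())
response = client.label_detection(image=image)
print([label.description for label in response.label_annotations])
```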

I think of my experience playing with FaceApp, the uniquely best piece of software in the world, which lets the user apply neural-network-powered transformations to their photos to see how their opposite-sex analogue would look! (Okay, the software actually has lots of other transformations and filters available—aging, de-aging, add makeup, add beard, lens flare, &c.—but I'm assuming those are just there for plausible deniability.) So, for example, the "Female" transformation hallucinates long hair—but hair length isn't sexually dimorphic the way facial morphology is! At most, the "females have long hair" convention has a large basin of attraction—but the corpus of training photos was taken from a culture following that convention. Is it OK for the AI's concept of womanhood itself to reflect that? There are all sorts of deep and subtle ethical questions about "algorithmic fairness" that could be asked here!

I don't think the deep and subtle questions are being asked. The reigning ideology does not permit itself the expressive power to formulate the deep and subtle questions. "Given that a person's gender cannot be inferred by appearance," reads the email. Cannot be inferred, says Google! This is either insane, or a blatant lie told to appease the insane. Neither bodes well for the future of my civilization. (Contrast to sane versions of the concern, like, "Cannot be inferred with sufficiently high reliability", or, "Can be inferred in most cases, but we're concerned about the social implications of misclassifying edge cases.") I'm used to this shit from support groups at the queer center in Berkeley or in Portland, but I never really took it seriously—never really believed that it could be taken seriously. But Google! Aren't those guys supposed to know math?

Just ... this fucking ideology that assumes everyone has this "gender" thing that's incredibly important for everyone to respect and honor, but otherwise has no particular properties whatsoever. I can sketch out an argument for why, in theory, the ideology is memetically fit: there are at least two (and probably three or four) clusters of motivations for why some humans want to change sex; liberal-individualist Society wants to accommodate them and progressives want to use them as a designated-victim pity-pump, but the inadequacy of the existing continuum of interventions, and perhaps more so the continuity of the menu of available interventions, is such that verbal self-identification ends up being the only stable Schelling point.

But the theory doesn't help me wrap my head around how grown-ups actually believe this shit. Or at least, are too scared to be caught dead admitting out loud that they don't. This is Cultural Revolution shit! This is Lysenko-tier mindfuckery up in here!

And I don't know how to convey, to anyone who doesn't already feel it too, that I'm scared—and that I have a reason to be scared.

I believe that knowledge is useful, and that there are general algorithms—patterns of thinking and talking—that produce knowledge. You can't just get one thing wrong—every wrong answer comes from a bug in your process, and there's an infinite family of other inputs that could trigger the same bug. The calculator that says 6 + 7 = 14 isn't just going to mislead you if you use it to predict what happens when you combine a stack of ●●●●●● pennies and a stack of ●●●●●●● pennies—it's not a calculator. The function-that-it-computes is not arithmetic.
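
As a toy illustration (one made-up bug among the many that could produce that output), consider an adder that silently rounds its second argument up to an even number:

```python
# Toy broken "calculator": one bug, an infinite family of wrong answers.
def broken_add(a: int, b: int) -> int:
    return a + (b + b % 2)  # bug: rounds b up to the next even number

print(broken_add(6, 7))   # 14, not 13
print(broken_add(10, 3))  # 14, not 13: same bug, different input
print(broken_add(6, 8))   # 14, correct, but only by luck of the input
```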

I am not a particularly intelligent man. If I ever seem to be saying true and important things that almost no one else is saying, it's not because I'm unusually insightful, but because I'm unusually bad at keeping secrets. There are ... operators among us, savvy Straussian motherfuckers who know and see everything I can, and more—but who think it doesn't matter that not everybody knows.

And I guess ... I think it matters? One of the evilest reactionary bloggers mentioned the difference between a state religion that requires you to believe in the unseen, and one that requires you to disbelieve in what is seen. My thesis is that a state religion that requires you to fluidly doublethink around the implications of "Some women have penises", will also falter over something even the Straussians have to protect. But I can't prove it.

The COVID-19 news is playing hell with my neuroticism. They say you should stock up on needed prescription drugs, in case of supply-chain disruptions. I guess I'm glad that, unlike some of my friends who I am otherwise jealous of, I'm not dependent on drugs for the hormones that my body needs in order for my bones to not rot. I wish I had known twelve years ago that accepting that dependency in exchange for its scintillating benefits was an option for cases like mine. There's at least a consistency in this: it's not safe to depend on the supply lines of a system that didn't have the all-around competency to just tell me.

Anyway, besides the Total Culture War over the future of my neurotype tearing apart ten-year friendships and having me plotting to flee my hometown, my life is going pretty okay. I'm getting paid lots of money to sell insurance in Canada, and I have lots of things to look forward to, like the conclusion to the Tangled sequel series, or the conclusion to the Obnoxious Bad Decision Child sequel miniseries, or finishing my forthcoming review of the new Charles Murray book. (It's going to be great—a bid to broaden the topic scope of the blog to "things that only right-wing Bad Guys want to talk about, but without myself being a right-wing Bad Guy" in full generality, not just for autogynephilia and the correspondence of language to reality.)

Basically, I want to live. I know that now. And it's hard to shake the feeling that the forces trying to cloud my vision don't want me to.