diff --git a/content/drafts/zevis-choice.md b/content/drafts/zevis-choice.md
index 02b8109..aeea658 100644
--- a/content/drafts/zevis-choice.md
+++ b/content/drafts/zevis-choice.md
@@ -49,7 +49,7 @@ But pushing on embryo selection only makes sense as an intervention for optimizi
 But if you think the only hope for there _being_ a future flows through maintaining influence over what large tech companies are doing as they build transformative AI, declining to contradict the state religion makes more sense—if you don't have _time_ to win a culture war, because you need to grab hold of the Singularity (or perform a [pivotal act](https://arbital.com/p/pivotal/) to prevent it) _now_. If the progressive machine marks you as a transphobic bigot, the machine's functionaries at OpenAI or Meta AI Research are less likely to listen to you when you explain why [their safety plan](https://openai.com/blog/our-approach-to-alignment-research/) won't work, or why they should have a safety plan at all.
 
-(I remarked to "Wilhelm" in mid-2022 that DeepMind [changing its Twitter avatar to a rainbow variant of their logo for Pride month](https://web.archive.org/web/20220607123748/https://twitter.com/DeepMind) was a bad sign.)
+(I remarked to "Thomas" in mid-2022 that DeepMind [changing its Twitter avatar to a rainbow variant of their logo for Pride month](https://web.archive.org/web/20220607123748/https://twitter.com/DeepMind) was a bad sign.)
 
 So isn't there a story here where I'm the villain, willfully damaging humanity's chances of survival by picking unimportant culture-war fights in the xrisk-reduction social sphere, when _I know_ that the sphere needs to keep its nose clean in the eyes of the progressive egregore? _That's_ why Yudkowsky said the arguably-technically-misleading things he said about my Something to Protect: he _had_ to, to keep our collective nose clean. The people paying attention to contemporary politics don't know what I know, and can't usefully be told. Isn't it better for humanity if my meager talents are allocated to making AI go well? Don't I have a responsibility to fall in line and take one for the team—if the world is at stake?