From a8c7f421ae32d71c22cebaa36911ac258cda6e3b Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Sun, 22 Jan 2017 23:34:11 -0800
Subject: [PATCH] drafting "From What I've Tasted of Desire"

---
 .../drafts/from-what-ive-tasted-of-desire.md | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/content/drafts/from-what-ive-tasted-of-desire.md b/content/drafts/from-what-ive-tasted-of-desire.md
index d203dea..f4631d8 100644
--- a/content/drafts/from-what-ive-tasted-of-desire.md
+++ b/content/drafts/from-what-ive-tasted-of-desire.md
@@ -1,15 +1,29 @@
 Title: From What I've Tasted of Desire
 Date: 2020-01-01
-Category: other
-Tags: cathartic
+Category: commentary
 Status: draft
 
 _(Epistemic status: far more plausible than it has any right to be.)_
 
-So, not a lot of people understand this,
+So, not a lot of people understand this, but the end of the world is, in fact, nigh. _Conditional_ on civilization not collapsing (which is itself a _kind_ of end of the world), sometime in the next century or so, someone is going to invent better-than-human artificial general intelligence. And from that point on, humans are not really in control of what happens in this planet's future light cone.
 
-_Conditional_ on civilization not collapsing
+This is a counterintuitive point. It's tempting to think that you could program the AI to just obey orders ("Write an adventure novel for my daughter's birthday", "Output the design of a nanofactory") and not otherwise intervene in (or take over) the universe. And maybe [something like that](https://arbital.com/p/genie/) could be made to work, but it's _much_ harder than it looks.
+
+Our best simple framework for how intelligence has to work is _expected utility maximization_: model the world, use your model to compute a probability distribution over outcomes conditional on choosing to perform an action for some set of actions, and then perform the action with the highest expected utility with respect to your utility function (a mapping from outcomes to ℝ). Any agent whose behavior can't be shoved into this framework is in violation of the [von Neumann–Morgenstern axioms](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem), which look so "reasonable" that [we expect any "reasonable" agent to self-modify](https://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/) to be in harmony with them.
+
+So as AIs get more and more general, more like agents capable of autonomously solving new problems rather than unusually clever-looking ordinary computer programs, we should expect them to look more and more like expected utility maximizers, optimizing the universe with respect to some internal value criterion.
+
+But humans are [a mess of conflicting desires](http://lesswrong.com/lw/l3/thou_art_godshatter/) inherited from our evolutionary and sociocultural history; we don't _have_ a utility function written down anywhere that we can just put in the AI. So if the systems that ultimately run the world end up with a utility function that's _not_ in the incredibly specific class of those we would have wanted if we knew how to translate everything humans want or would-want into a utility function, then the machines disassemble us for spare atoms and tile the universe with _something else_. There's no _reason_ for them to protect human life or forms of life that we would find valuable unless we specifically _code that in_.
+
+This looks like a hard problem. This looks like a _really_ hard problem with _unimaginably_ high stakes: once the handoff of control of our civilization from humans to machines happens, we don't get a second chance to do it over. The ultimate fate of the human species rests on the epistemic competence of the AI research community: the strength to _get the right answer_ and bet the world on it, rather than clinging to one's pet hypothesis to the end, leaving science to advance funeral by funeral.
+
+The prevalence of autogynephilic trans women
+
+
+
+https://medium.com/incerto/the-most-intolerant-wins-the-dictatorship-of-the-small-minority-3f1f83ce4e15
+
+It's unclear how much of this is just a selection effect
 
-sometime in the next century or so, someone is going to invent greater-than-human artificial general intelligence.
 
 We may be living in a scenario where the world is _literally destroyed specifically because no one wants to talk about their masturbation fantasies_.
-- 
2.17.1
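
Below the patch proper: a minimal toy sketch of the expected-utility-maximization loop the draft describes, assuming a world model legible enough to be written as explicit outcome probabilities per action. Every name here (the actions, the probabilities, the utilities) is invented for illustration; nothing about a real agent's world model would be this small.

```python
# Toy model of the loop sketched in the draft: compute a probability
# distribution over outcomes conditional on each action, then perform the
# action with the highest expected utility. All actions, probabilities,
# and utilities below are hypothetical, chosen only for illustration.

def expected_utility(outcome_dist, utility):
    """Sum of P(outcome) * U(outcome) over one action's outcome distribution."""
    return sum(p * utility(outcome) for outcome, p in outcome_dist.items())

def choose_action(actions, world_model, utility):
    """Pick the action that maximizes expected utility under the world model."""
    return max(actions, key=lambda a: expected_utility(world_model(a), utility))

# Hypothetical world model: P(outcome | action) as explicit dictionaries.
world_model = {
    "write_adventure_novel": {"daughter_delighted": 0.9, "daughter_bored": 0.1},
    "design_nanofactory": {"design_output": 0.7, "universe_tiled": 0.3},
}

# Hypothetical utility function: a mapping from outcomes to ℝ.
utility = {
    "daughter_delighted": 10.0,
    "daughter_bored": 0.0,
    "design_output": 5.0,
    "universe_tiled": -1e9,
}.get

print(choose_action(world_model, world_model.get, utility))
# => write_adventure_novel
```

The point of the toy isn't the arithmetic: it's that nothing in the loop cares _what_ the utility function is, which is exactly why getting the utility function right is the whole problem.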