From a96d83a4eade04c7487b6823bcc321bd0989f0ab Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Tue, 12 Sep 2023 18:21:46 -0700 Subject: [PATCH] poke at "Fake Deeply" I need to think more deeply about the topology of policies thing (and re-read Christiano's comment) before I can write convincingly about it. --- content/drafts/fake-deeply.md | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/content/drafts/fake-deeply.md b/content/drafts/fake-deeply.md index 872f6c5..3530ff8 100644 --- a/content/drafts/fake-deeply.md +++ b/content/drafts/fake-deeply.md @@ -8,7 +8,7 @@ Status: draft Jake Morgan still couldn't help but marvel at what he and his team had built. It really looked and sounded just like her! -It had been obvious since DALL-E back in 'twenty-one—earlier if you were paying attention—that generative AI would reach this level of customization and realism before too long. Eventually, it was just a matter of the right couple dozen people rolling up their sleeves—and Magma's willingness to pony up the compute—to make it work. But _it worked_. His awe at Multigen's sheer power would have been humbling, if not for the awareness of his own modest role in bringing it into being. +It had been obvious since DALL-E back in 'twenty-one—earlier if you were paying attention—that generative AI would reach this level of customization and realism before too long. Eventually, it was just a matter of the right couple dozen people rolling up their sleeves—and Magma's willingness to pony up the compute—to make it work. But _it worked_. His awe at Multigen's sheer power would have been humbling, if not for the awareness of his own role in bringing it into being. Of course, this particular video wouldn't be showcased in the team's next publication. Technically, Magma employees were not supposed to use their cutting-edge generative AI system to make custom pornography of their coworkers. Technically (what was probably a lesser offense) Magma employees were not supposed to be viewing such content during work hours. Technically—what should have been a greater offense—Magma employees were not supposed to covertly introduce a bug into the generative AI service codebase specifically in order to make it possible to create such content without leaving a log. @@ -110,15 +110,23 @@ He had only made a couple dozen videos, but the work of covering it up would be Chloë would be left with the unsolvable mystery of what her digital poltergeist wanted to do with puppy videos, but Jake was fine with that. (Better than trying to convince her that the AI wanted nudes of female Magma employees.) When she came back to him next week, he would just need to play it cool and answer her questions about the system. -Or maybe—he could read some Yuddite literature over the weekend, feign a sincere interest in "AI safety", try to get on her good side? Jake had trouble believing any sane person could really think that Magma's machine learning models were plotting something. This cult victim had ridden a wave of popular hysteria into a sinecure. If he played nice and validated her belief system in the most general terms, maybe that would be enough to make her feel useful and therefore not need to chase shadows in order to justify her position. +Or maybe—he could read some Yuddite literature over the weekend, feign a sincere interest in "AI safety", try to get on her good side? Jake had trouble believing any sane person could really think that Magma's machine learning models were plotting something. 
This cult victim had ridden a wave of popular hysteria into a sinecure. If he played nice and validated her belief system in the most general terms, maybe that would be enough to make her feel useful and therefore not need to bother chasing shadows in order to justify her position. She would lose interest and the whole investigation would blow over.

------
+"And so just because an AI seems to be behaving well doesn't mean it's aligned," Chloë explained. "It might just be playing along as an instrumental strategy, hoping to pull off a treacherous turn later."
+
+"So then we're just screwed, right?" said Jake in the tone of an attentive student. After fixing the logging regex and overwriting the evidence with puppies, Jake had spent the weekend catching up on the AI safety literature. Some of it had been better than he expected. Just because Chloë was nuts didn't mean her co-ideologues didn't have any valid points to make about future systems.
+
+"I mean, probably," said Chloë. She was beaming. Jake's plan to distract her from her investigation by asking her to bring him up to speed on AI safety seemed to be working perfectly.
+
+"But not necessarily," she continued. "There are a few avenues of hope—at least in the not-wildly-superhuman regime. One of them has to do with the topology of policies and the fragility of deception."
+
+"The thing about deception is, you can't just lie about the one thing. Everything is connected to everything else in the Great Web of Causality: you also have to lie to cover up the evidence, and recursively cover up the coverups. A robot that killed your cat but wants your approval can't just say 'I didn't do it.' It needs to fabricate evidence that something else killed the cat, or ... arrange for a series of holograms to make it look to you like the cat is still alive."
+
 [TODO—

- * Chloë is explaining deceptive alignment. If a model does well on our evals, how do we know whether it's actually doing the right thing, or just trying to fool us?
- * Jake had explicitly asked to get brought up to speed on AI safety, to stall on whatever uncomfortable audit questions she might have—the puppies are prepared, but it still seemed like a good idea
- * "So then we're just doomed then, right?" Jake is trying to be agreeable and flattering. He's fixed the regex and overwritten his porn with puppies, and spent the weekend reading AI safety papers and blog posts. Some of it was honestly better than he expected. This Chloë being insane didn't invalidate the whole field as having serious points to make.
  * "Maybe not." There are two ways to pass all the evals: do things the right way, or be pervasively deceptive. The thing is, policies are trained continuously via gradient descent. The purely honest policy and the purely deceptive policy look identical on evals, but in between, the model would have to learn how to lie, and lie about lying, and cover-up coverups. (Chloë lapses into Yuddite speak about the "Great Web of Causality.") Could we somehow steer into the honest attractor?
+ * https://www.greaterwrong.com/posts/AqsjZwxHNqH64C2b6/let-s-see-you-write-that-corrigibility-tag/comment/8kPhqBc69HtmZj6XR
  * That's why she believes in risk paranoia. If situational awareness is likely to emerge at some point, she doesn't want to rule it out now. AI is real; it's not just a far-mode in-the-future thing.
  * Jake sees the uncomfortable analogy to his own situation. He tries to think of what other clue he might have left, while the conversation continues ...
  * The Last-Modified dates!
They're set by the system, the API doesn't offer a way to backdate them. -- 2.17.1