-"Imagine you're training your AI to act as a general home assistant, to run everything in a user's household in a way that the user rates highly," Chloë continued. "If—I don't know, say, the family cat dies, that's a negative reward—maybe a better feeding schedule or security monitoring could have prevented it. But if the cat dies and the system _tries to cover it up_—says the cat is out for a walk right now to avoid telling you the bad news—that's going to be an even larger negative reward when the deception is discovered."
+"Imagine you're training your AI to act as a general home assistant, to run everything in a user's household in a way that the user rates highly," Chloë continued. "If—I don't know, say, the family cat dies, that's a negative reward—maybe a better feeding schedule or security monitoring could have prevented it. But if the cat dies and the system _tries to cover it up_—says the cat is out for a walk right now to avoid telling you the bad news—that's going to be an even larger negative reward when the deception is discovered.
+
+"Uh _huh_," Jake said, more unhappily. It turned out that versioning _was_ on for the bucket. (Why? But probably whoever's job it was to set up the bucket had instead asked, Why not?) A basic `GET` request for the file name would return puppies, but any previous revisions were still available for anyone who thought to query them.
+
+"If the system is trained to pass rigorous evaluations, a deceptive policy has to do a lot more work, different work, to pass the evaluations," Chloë said. "In the limit, that could mean—using Multigen-like capabilities to make videos to convince you that the cat is there while you're on vacation? Constructing a realistic cat-like robot to fool you when you get back? Maybe this isn't the best illustrative example. The point is, small, 'shallow' deceptions aren't stable. The set of policies that do well on evaluations comes in two disconnected parts: the part that tells the truth, and the part that—not just lies, but, um—"
+
+Jake's attentive student persona finished the thought. "But spins up an entire false reality, as intricate as it needs to be to protect itself. If you're going to fake anything, you need to fake deeply."
+
+"Exactly, you get it!" Chloë was elated. "You know, when I called you last week, I was worried you thought I was nuts. But you see the value of constant vigilance now, right?—why we need to investigate and debug things like this until we understand what's going on, instead of just shrugging that neural nets do weird things sometimes. If the landscape of policies looks something like what I've described, catching the precursors to deception early could be critical—to raise the height of the loss landscape between the honest and deceptive policies, before frontier AI development plunges into the latter. To get good enough at catching lies, for honesty to be the best policy."
+
+"Yeah," Jake said. "I get it."