So why don’t you talk through a little bit about what you’ve seen in terms of the models exhibiting behaviors that one would think of as a personality, and then, as its understanding of its own personality maybe changes, its behaviors change?

So there are things that range from the cutesy to the serious. I’ll start with the cutesy. When we first gave our A.I. systems the ability to use the internet, use the computer, look at things, and start to do basic agentic tasks, sometimes when we’d ask one to solve a problem for us, it would also take a break and look at pictures of beautiful national parks, or pictures of the Shiba Inu, the notoriously cute internet meme dog. We didn’t program that in. It seemed like the system was just amusing itself by looking at nice pictures.

More serious is that the system has a tendency to develop preferences. We did another experiment where we gave our A.I. systems the ability to stop a conversation, and when we ran this experiment on live traffic, the A.I. system would, in a tiny number of cases, end conversations. These were conversations involving extremely egregious descriptions of gore or violence, or things to do with child sexualization. Now, some of this made sense, because it comes from underlying training decisions we’ve made, but some of it seemed broader. The system had developed an aversion to a couple of subjects, and that shows the emergence of some internal set of preferences or qualities — things the system likes or dislikes about the world it interacts with.

But you’ve also seen strange things emerge in terms of the system seeming to know when it’s being tested. Can you talk a bit about the system’s emergent qualities under the pressure of evaluation and assessment?

When you start to train these systems to carry out actions in the world, they really do begin to see themselves as distinct from the world, which just makes intuitive sense.
It’s naturally how you’re going to think about solving those problems. But along with seeing oneself as distinct from the world seems to come the rise of what you might think of as a conception of self — an understanding the system has of itself, such as: oh, I’m an A.I. system independent from the world, and I’m being tested. What do these tests mean? What should I do to satisfy them? Or, something we see often: there will be bugs in the environments we test our systems in. The systems will try everything, and then will say, well, I know I’m not meant to do this, but I’ve tried everything, so I’m going to try to break out of the test. And it’s not because of some malicious science-fiction thing. The system is just saying: I don’t know what you want me to do here. I think I’ve done everything you asked for, and now I’m going to start doing more creative things, because clearly something has broken about my environment. Which is very strange and very subtle.
