I have found it hard myself to look at this, to look at what people are doing and at them bragging on different social media platforms about the number of agents they now have running on their behalf, and to tell the difference between people enjoying the feeling of screwing around with a new technology and an actually transformative expansion in the capabilities people now have. So maybe to ground this a little bit: you just talked about a kind of fun side project in your species simulator. Either at Anthropic or more broadly, what are people doing with these systems that seems actually useful?

So this morning, a colleague of mine said, hey, I want to take a piece of technology we have called Claude Interviewer, which is a system where we can get Claude to interview people, and which we use for a range of social science research. He wants to extend it in some way that involves touching another part of Anthropic’s infrastructure. He Slacked a colleague who owns that bit of infrastructure and said, hey, I want to do this thing. Let’s meet tomorrow. And the guy said, “Absolutely. Here are the five software packages you should have Claude read before our meeting and summarize for you.” And I think that’s a really good illustration of how this gnarly engineering project, which would previously have taken a lot longer and many more people, is now going to mostly be done by two people agreeing on the goal and having their Claudes read some documentation and agree on how to implement the thing. Another example: a colleague recently wrote a post about how they’re working using agents, and it looks almost like an idealized life that many of us might want. It’s like, I wake up in the morning, I think about the research that I want done. I tell five different Claudes to do it, then I go for a run. I come back from the run and I look at the results, and then I ask two other Claudes to study the results, figure out which direction is best and do that.
Then I go for a walk and then I come back. And it just looks like this really fun existence where they have completely upended how work works for them. They’re much more effective, but they’re also now spending most of their time on the actual hard part, which is figuring out what we should use our human agency to do. And they’re working really hard to figure out, for anything that isn’t the special kind of genius and creativity of being a person, how do I get the A.I. system to do it for me? Because it probably can, if I ask it the right way.

Are they much more effective? I mean this very seriously. One of my biggest concerns about where we’re going here is that many of us operate with what I think is a mistaken theory of the human mind. I always call it the matrix theory of the human mind: everybody wants the little port in the back of your head that you just download information into. My experience being a reporter and doing the show for a long time is that human creativity and thinking and ideas are inextricably bound up in the labor of learning, the writing of first drafts. So when I hear this, right, I have producers on the show, and I could say to my producers before an interview with Jack Clark or an interview with someone else: go read all the stuff, go read the books, give me your report, and I’ll walk into the room having read the report. I don’t find that works. I need to do all that reading too, and then we talk about it and we’re passing it back and forth. I worry that what we’re doing is a quite profound offloading of tasks that are laborious. It makes us feel very productive to be presented with eight research reports after our morning run, but actually what would be productive is doing the research. There’s obviously some balance. I do have producers and people, and companies do have employees.
But how do you know people are getting more productive, versus they have sent computers off on a huge amount of busywork and are now the bottleneck? What they’re now going to spend all their time doing is absorbing B-plus-level reports from an A.I. system, which shortcuts the actual thinking and learning process that leads to real creativity.

I would turn this back and say that, at least in my experience, most people can do about two to four hours of genuinely useful creative work a day. After that, in my experience, you’re trying to do all the turn-your-brain-off schlep work that surrounds that work. Now, I’ve found that I can just spend those two to four hours a day on the actual creative hard work, and if I’ve got any of this schlep work, I increasingly delegate it to A.I. systems. It does, though, mean that we are going to be in a very dangerous situation as a species, where some people have the luxury of having time to spend on developing their skills, or the personality, inclination or job that forces them to, while other people might just fall into being entertained and passively consuming this stuff, having this junk-food work experience where it looks from the outside like you’re being very productive, but you’re not learning. And I think that’s going to require us to change not just how education works but how work works, and to develop some real strategies for making sure people are actually exercising their minds with this stuff.
