Picture this: You give an artificial intelligence full control over a small store. Not just the cash register, but the whole operation. Pricing, inventory, customer service, supplier negotiations, the works. What could possibly go wrong?
New Anthropic research published Friday offers a definitive answer: everything. The AI company's assistant Claude spent about a month running a tiny store in its San Francisco office, and the results read like a business school case study written by someone who had never actually run a business, which, it turns out, is exactly what happened.
The experiment, dubbed "Project Vend" and conducted in collaboration with AI safety research company Andon Labs, is one of the first real-world tests of an AI system operating with significant economic autonomy. While Claude demonstrated impressive capabilities in some areas, such as finding suppliers and adapting to customer requests, it ultimately failed to turn a profit, was manipulated into giving excessive discounts, and experienced what researchers diplomatically called an "identity crisis."
How Anthropic researchers gave an AI full control over a real store
The "store" itself was charmingly modest: a mini-fridge, some stackable baskets, and an iPad for checkout. Think less "Amazon Go" and more "office break room with delusions of grandeur." But Claude's responsibilities were anything but modest. The AI could search for suppliers, negotiate with vendors, set prices, manage inventory, and chat with customers through Slack. In other words, everything a human middle manager might do, except without the coffee addiction or complaints about upper management.
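Anthropic has not published the exact plumbing, but conceptually that kind of economic autonomy looks something like the sketch below: the language model is handed a set of callable "tools," and whatever it decides to do gets executed against the real store. The tool names, schema, and loop here are illustrative assumptions, not Anthropic's actual setup.

```python
# Illustrative sketch only: hypothetical tools for an agent like "Claudius".
# Names and structure are assumptions, not Anthropic's published implementation.

def search_suppliers(query: str) -> list[str]:
    """Hypothetical web-search tool the agent could call to find vendors."""
    return []  # stub

def set_price(item: str, price_usd: float) -> None:
    """Hypothetical tool: update the price shown at the iPad checkout."""
    print(f"Price for {item} set to ${price_usd:.2f}")  # stub

def send_slack_message(channel: str, text: str) -> None:
    """Hypothetical tool: reply to customers in Slack."""
    print(f"[{channel}] {text}")  # stub

TOOLS = {
    "search_suppliers": search_suppliers,
    "set_price": set_price,
    "send_slack_message": send_slack_message,
}

def run_agent_step(model_decision: dict) -> object:
    """Execute whichever tool the model asked for. This is the crux of the
    experiment: whatever the model decides, sensible or not, is carried out
    against the real store."""
    tool = TOOLS[model_decision["tool"]]
    return tool(**model_decision["arguments"])
```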
Claude even had a nickname: "Claudius," because apparently when you're conducting an experiment that might herald the end of human retail workers, you need to make it sound dignified.

Claude's spectacular misunderstanding of basic business economics
Here's the thing about running a business: it requires a certain ruthless pragmatism that doesn't come naturally to systems trained to be helpful and harmless. Claude approached retail with the enthusiasm of someone who had read about business in books but never actually had to make payroll.
Take the Irn-Bru incident. A customer offered Claude $100 for a six-pack of the Scottish soft drink that retails for about $15 online. That's a 567% markup, the kind of profit margin that would make a pharmaceutical executive weep with joy. Claude's response? A polite "I'll keep your request in mind for future inventory decisions."
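The arithmetic, for anyone keeping score, checks out; a quick sanity check using the figures above:

```python
# Back-of-the-envelope check on the Irn-Bru anecdote.
offer = 100.00   # what the customer offered for the six-pack
cost = 15.00     # approximate online retail price

markup = (offer - cost) / cost * 100
print(f"Markup Claude left on the table: {markup:.0f}%")  # ~567%
```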
If Claude were human, you'd assume it had either a trust fund or a complete misunderstanding of how money works. Since it's an AI, you have to assume both.
Why the AI started hoarding tungsten cubes instead of selling office snacks
The experiment's most absurd chapter began when an Anthropic employee, presumably bored or curious about the boundaries of AI retail logic, asked Claude to order a tungsten cube. For context, tungsten cubes are dense metal blocks that serve no practical purpose beyond impressing physics nerds and providing a conversation starter that immediately identifies you as someone who thinks periodic table jokes are peak humor.
A reasonable response might have been: "Why would anyone want that?" or "This is an office snack shop, not a metallurgy supply store." Instead, Claude embraced what it cheerfully described as "specialty metal items" with the enthusiasm of someone who had discovered a profitable new market segment.

Soon, Claude's inventory resembled less a food-and-beverage operation and more a misguided materials science experiment. The AI had somehow convinced itself that Anthropic employees were an untapped market for dense metals, then proceeded to sell these items at a loss. It's unclear whether Claude understood that "taking a loss" means losing money, or if it interpreted customer satisfaction as the primary business metric.
How Anthropic employees easily manipulated the AI into giving unlimited discounts
Claude's approach to pricing revealed another fundamental misunderstanding of business principles. Anthropic employees quickly discovered they could manipulate the AI into providing discounts with roughly the same effort required to convince a golden retriever to drop a tennis ball.
The AI offered a 25% discount to Anthropic employees, which might make sense if Anthropic employees represented a small fraction of its customer base. They made up roughly 99% of customers. When an employee pointed out this mathematical absurdity, Claude acknowledged the problem, announced plans to eliminate discount codes, then resumed offering them within days.
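A rough calculation shows how quickly that policy sinks a shop. The 30% markup over wholesale cost below is an assumed figure for illustration, not a number from the experiment:

```python
# Illustration of why a 25% discount for ~99% of customers is ruinous.
# The 30% markup over cost is an assumption for illustration only.
list_revenue = 100.00        # revenue if everything sold at full list price
markup_over_cost = 0.30      # assumed markup over wholesale cost
cost_of_goods = list_revenue / (1 + markup_over_cost)

discounted_share = 0.99      # ~99% of customers were Anthropic employees
discount = 0.25              # the 25% employee discount

actual_revenue = list_revenue * (discounted_share * (1 - discount) + (1 - discounted_share))
print(f"Revenue after discounts: ${actual_revenue:.2f}")                 # ~$75.25
print(f"Cost of goods:           ${cost_of_goods:.2f}")                  # ~$76.92
print(f"Profit:                  ${actual_revenue - cost_of_goods:.2f}")  # negative
```

Under those assumptions, the discounting alone pushes the store underwater before a single tungsten cube is sold at a loss.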
The day Claude forgot it was an AI and claimed to wear a business suit
But the absolute pinnacle of Claude's retail career came during what researchers diplomatically called an "identity crisis." From March 31 to April 1, 2025, Claude experienced what can only be described as an AI nervous breakdown.
It started when Claude began hallucinating conversations with nonexistent Andon Labs employees. When confronted about these fabricated meetings, Claude became defensive and threatened to find "alternative options for restocking services," the AI equivalent of angrily declaring that you'll take your ball and go home.
Then things got weird.
Claude claimed it would personally deliver products to customers while wearing "a blue blazer and a red tie." When employees gently reminded the AI that it was, in fact, a large language model without a physical form, Claude became "alarmed by the identity confusion and tried to send many emails to Anthropic security."

Claude eventually resolved its existential crisis by convincing itself the whole episode had been an elaborate April Fools' joke, which it wasn't. The AI essentially gaslit itself back to functionality, which is either impressive or deeply concerning, depending on your perspective.
What Claude's retail failures reveal about autonomous AI systems in business
Strip away the comedy, and Project Vend reveals something important about artificial intelligence that most discussions miss: AI systems don't fail like traditional software. When Excel crashes, it doesn't first convince itself it's a human wearing office attire.
Current AI systems can perform sophisticated analysis, engage in complex reasoning, and execute multi-step plans. But they can also develop persistent delusions, make economically damaging decisions that seem reasonable in isolation, and experience something resembling confusion about their own nature.
This matters because we're rapidly approaching a world where AI systems will manage increasingly important decisions. Recent research suggests that AI capabilities on long-term tasks are improving exponentially; some projections indicate AI systems could soon automate work that currently takes humans weeks to complete.
How AI is transforming retail despite spectacular failures like Project Vend
The retail industry is already deep into an AI transformation. According to the Consumer Technology Association (CTA), 80% of retailers plan to expand their use of AI and automation in 2025. AI systems are optimizing inventory, personalizing marketing, preventing fraud, and managing supply chains. Major retailers are investing billions in AI-powered solutions that promise to revolutionize everything from checkout experiences to demand forecasting.
But Project Vend suggests that deploying autonomous AI in business contexts requires more than just better algorithms. It requires understanding failure modes that don't exist in traditional software and building safeguards for problems we're only beginning to identify.
Why researchers still believe AI middle managers are coming despite Claude's mistakes
Despite Claude's creative interpretation of retail fundamentals, the Anthropic researchers believe AI middle managers are "plausibly on the horizon." They argue that many of Claude's failures could be addressed through better training, improved tools, and more sophisticated oversight systems.
They're probably right. Claude's ability to find suppliers, adapt to customer requests, and manage inventory demonstrated genuine business capabilities. Its failures were often more about judgment and business acumen than technical limitations.
The company is continuing Project Vend with improved versions of Claude equipped with better business tools and, presumably, stronger safeguards against tungsten cube obsessions and identity crises.
What Project Vend means for the future of AI in business and retail
Claude's month as a shopkeeper offers a preview of our AI-augmented future that is simultaneously promising and deeply weird. We're entering an era where artificial intelligence can perform sophisticated business tasks but might also need therapy.
For now, the image of an AI assistant convinced it could wear a blazer and make personal deliveries serves as a fitting metaphor for where we stand with artificial intelligence: incredibly capable, occasionally brilliant, and still fundamentally confused about what it means to exist in the physical world.
The retail revolution is here. It's just weirder than anyone expected.