Two months after Nvidia and OpenAI unveiled their eye-popping plan to deploy at least 10 gigawatts of Nvidia systems, with up to $100 billion in investments, the chipmaker now admits the deal isn’t actually final.
Speaking Tuesday at the UBS Global Technology and AI Conference in Scottsdale, Nvidia EVP and CFO Colette Kress told investors that the much-hyped OpenAI partnership is still at the letter-of-intent stage.
“We still haven’t completed a definitive agreement,” Kress said when asked how much of the 10-gigawatt commitment is actually locked in.
That’s a striking clarification for a deal that Nvidia CEO Jensen Huang once called “the biggest AI infrastructure project in history.” Analysts had estimated that the deal could generate as much as $500 billion in revenue for the AI chipmaker.
When the companies announced the partnership in September, they outlined a plan to deploy millions of Nvidia GPUs over several years, backed by up to 10 gigawatts of data center capacity. Nvidia pledged to invest up to $100 billion in OpenAI as each tranche comes online. The news helped fuel an AI-infrastructure rally, sending Nvidia shares up 4% and reinforcing the narrative that the two companies are joined at the hip.
Kress’s comments suggest something more tentative, even months after the framework was announced.
A megadeal that isn’t in the numbers yet
It’s unclear why the deal hasn’t been executed, but Nvidia’s latest 10-Q offers clues. The filing states plainly that “there is no assurance that any investment will be completed on anticipated terms, if at all,” referring not only to the OpenAI arrangement but also to Nvidia’s planned $10 billion investment in Anthropic and its $5 billion commitment to Intel.
In a lengthy “Risk Factors” section, Nvidia spells out the fragile architecture underpinning megadeals like this one. The company stresses that the story is only as real as the world’s ability to build and power the data centers required to run its systems. Nvidia must order GPUs, HBM memory, networking gear, and other components more than a year in advance, often via non-cancelable, prepaid contracts. If customers scale back, delay financing, or change course, Nvidia warns it could end up with “excess inventory,” “cancellation penalties,” or “inventory provisions or impairments.” Past mismatches between supply and demand have “significantly harmed our financial results,” the filing notes.
The biggest swing factor appears to be the physical world: Nvidia says the availability of “data center capacity, energy, and capital” is critical for customers to deploy the AI systems they’ve verbally committed to. Power build-out is described as a “multiyear process” that faces “regulatory, technical, and construction challenges.” If customers can’t secure enough electricity or financing, Nvidia warns, it could “delay customer deployments or reduce the scale” of AI adoption.
Nvidia also admits that its own pace of innovation makes planning harder. It has moved to an annual cadence of new architectures (Hopper, Blackwell, Vera Rubin) while still supporting prior generations. It notes that a faster architecture pace “could amplify the challenges” of predicting demand and may lead to “reduced demand for current generation” products.
These admissions nod to the warnings of AI bears like Michael Burry, the investor of Big Short fame, who has alleged that Nvidia and other chipmakers are overextending the useful lives of their chips and that the chips’ eventual depreciation will cause breakdowns in the investment cycle. However, Huang has said that chips from six years ago are still running at full pace.
The company also nodded explicitly to past boom-bust cycles tied to “trendy” use cases like crypto mining, warning that new AI workloads could create similar spikes and crashes that are hard to forecast and could flood the gray market with secondhand GPUs.
Despite the lack of a deal, Kress stressed that Nvidia’s relationship with OpenAI remains “a very strong partnership,” more than a decade old. OpenAI, she said, considers Nvidia its “preferred partner” for compute. But she added that Nvidia’s current sales outlook doesn’t depend on the new megadeal.
The roughly $500 billion of Blackwell and Vera Rubin system demand Nvidia has guided for 2025–26 “doesn’t include any of the work we’re doing right now on the next part of the agreement with OpenAI,” she said. For now, OpenAI’s purchases flow indirectly through cloud partners like Microsoft and Oracle rather than through the new direct arrangement laid out in the letter of intent.
OpenAI “does want to go direct,” Kress said. “But again, we’re still working on a definitive agreement.”
Nvidia insists the moat is intact
On competitive dynamics, Kress was unequivocal. Markets have lately been cheering Google’s TPU, which has a narrower use case than the GPU but requires less power, as a potential competitor to Nvidia’s GPU. Asked whether these kinds of chips, known as ASICs, are narrowing Nvidia’s lead, she responded: “Absolutely not.”
“Our focus right now is helping all different model builders, but also helping so many enterprises with a full stack,” she said. Nvidia’s defensive moat, she argued, isn’t any individual chip but the entire platform: hardware, CUDA, and a constantly expanding library of industry-specific software. That stack, she said, is why older architectures remain heavily used even as Blackwell becomes the new standard.
“Everybody is on our platform,” Kress said. “All models are on our platform, both in the cloud as well as on-prem.”