Nvidia GTC 2026: $1 Trillion Inference Bet
Jensen Huang doubles Nvidia's chip revenue forecast to $1 trillion at GTC 2026. OpenAI signs a classified AWS-Pentagon deal after Anthropic is cut off.
At Nvidia GTC 2026, two seismic AI industry moves landed within hours of each other on Tuesday. Nvidia CEO Jensen Huang stood inside a hockey arena in San Jose and doubled his chip revenue forecast to $1 trillion through 2027. Across the country, OpenAI quietly announced it had signed a classified AI contract with the U.S. Pentagon via Amazon Web Services, filling the gap left by Anthropic after that company refused unrestricted military use of its models. The confluence of these two announcements signals something structural: the AI industry is no longer in its experimental phase, and the infrastructure race is accelerating into a national security contest.
For context on how quickly AI procurement has shifted in federal agencies, see our breakdown of the Pentagon’s AI vendor landscape.
Jensen Huang Calls the Inference Inflection at GTC 2026
Huang’s keynote at the SAP Center was, as always, part product launch and part industrial manifesto. The headline number: Nvidia now sees at least $1 trillion in chip revenue opportunity through 2027, double the $500 billion forecast the company gave on its February earnings call just four weeks ago. Reuters confirmed the figure. That is not a rounding error. It is a structural shift in how Nvidia is reading demand signals from Blackwell and Rubin purchase orders.
The strategic pivot driving the forecast is inference. “The inference inflection has arrived,” Huang told the crowd. “And demand just keeps on going up.” Training AI models has been Nvidia’s core dominance story for the past three years. Inference, the real-time process of running those models to answer queries, has historically been a space where Nvidia’s GPUs faced sharper competition from custom silicon at Google, Amazon, and startups. Huang’s GTC message was that Nvidia is now moving aggressively into inference hardware and will not cede that market.
The centerpiece announcement was the Vera Rubin system: a new CPU developed by Nvidia, paired with an AI inference accelerator built on technology licensed from Groq. Nvidia paid $17 billion in December for that Groq license, one of the largest technology deals in the semiconductor industry’s history. The Vera Rubin name, honoring the astronomer who produced key evidence for dark matter, continues Nvidia’s tradition of naming architectures after scientists, following Blackwell, named for statistician David Blackwell. The hardware is designed specifically to cut latency on inference workloads, where Nvidia’s traditional GPUs have faced cost-per-token disadvantages against purpose-built inference chips.
Nvidia shares closed up 1.2% on the day, paring early session gains. Analysts noted that while the $1 trillion forecast was well above prior guidance, investors remain cautious about whether demand will prove as durable as Huang’s projections imply.
OpenAI Steps into the Pentagon Vacuum Left by Anthropic
The second story of the day was quieter in tone but arguably more consequential for the AI industry’s relationship with government.
OpenAI confirmed on Tuesday that it has signed a contract to sell access to its AI models to U.S. defense and intelligence agencies through Amazon Web Services, covering both classified and unclassified work. The deal positions OpenAI as the primary AI provider for Pentagon operations that until recently relied on Anthropic’s Claude models.
Anthropic’s removal from that role began in February. The company had been operating under a Pentagon contract worth up to $200 million, deploying Claude models in classified systems through a Palantir and AWS integration. That relationship ended when Anthropic refused to allow unrestricted military use of its AI, specifically drawing lines around domestic surveillance and autonomous weapons applications. The Pentagon subsequently labeled Anthropic a “supply chain risk” and terminated the contract.
OpenAI is stepping into that gap with a notably different posture. The company has historically focused on unclassified government work, but following its transition to a for-profit structure last fall, it renegotiated its agreement with Microsoft to allow partnerships with rival cloud providers for national security clients. The AWS deal is a direct result of that renegotiation.
The strategic calculus is visible: government contracts, particularly high-stakes classified work, function as trust signals for enterprise buyers. OpenAI made this reasoning explicit in its announcement, noting that public sector validation directly supports corporate sales cycles.
Two Stories, One Structural Shift
Read separately, these are product and contract announcements. Read together, they describe the same underlying transition.
AI infrastructure is no longer a research investment. It is a capital allocation decision made by governments, defense agencies, and sovereign technology buyers. Nvidia’s $1 trillion forecast is only credible if the downstream demand for AI inference at industrial scale is real and sustained. The Pentagon’s urgency to replace Anthropic with OpenAI in classified operations within weeks suggests that AI has moved deep enough into critical systems that supply chain disruptions are treated as national security events.
Anthropic’s refusal to permit unrestricted military use is a principled position with real consequences: the company is now outside the most lucrative government procurement channel in the world. Whether that tradeoff is vindicated or penalized over the next three years may say more about AI’s future constraints than any benchmark.
The inference wave Huang described at GTC 2026 needs buyers at scale. The Pentagon just confirmed that those buyers exist and are moving fast.