Meta's Superintelligence Push: $135B, AMD Deal, New Org
Meta restructures its AI org, signs a 6-gigawatt AMD chip deal, and targets $135B in spending to race toward personal superintelligence in 2026.
Meta’s superintelligence push is no longer a distant ambition, and the company is not hedging. In a single week, a leaked internal memo revealed a new applied AI engineering organization built for the effort, Meta announced a 6-gigawatt GPU deal with AMD worth up to $100 billion, and the company reminded the market it plans to spend up to $135 billion on AI in 2026 alone, nearly double its total capital expenditure from the prior year. The message from Menlo Park is unambiguous: Meta intends to win the superintelligence race, and it is building the infrastructure, the organization, and the chip supply chain to do it.
This is not a pivot. It is an escalation of a strategy that has been building quietly since last summer, when Meta created its Superintelligence Labs unit and hired Alexandr Wang, the former CEO of Scale AI, to lead it. What is new this week is the scale of the commitment and the organizational blueprint that will execute it.
The New Applied AI Engineering Org
Of the week’s announcements, the organizational move reveals the most about how Meta thinks about the race it is in.
According to an internal memo reported by Business Insider, Meta is forming a new Applied AI Engineering organization within its Reality Labs division. The group will be headed by Maher Saba, a vice president at Reality Labs who oversees products including Meta’s AI-powered smart glasses. Saba’s organization will report directly to Chief Technology Officer Andrew Bosworth.
The mandate is specific: this team will build “the data engine that helps our models get better, faster.” The memo explicitly frames the new org as the bridge between raw research capabilities and market-leading AI models. As Saba wrote internally, “building great models isn’t just about researchers and compute.” The applied engineering layer (the tooling, the data pipelines, the training infrastructure) is where capability translates into competitive product.
The structure of the new org is itself a signal. Teams within the organization will operate with manager-to-employee ratios of up to 1:50. That is an unusually flat configuration by any standard in enterprise technology. A single manager overseeing 50 individual contributors eliminates multiple layers of middle management, accelerates decision-making, and pushes accountability directly to the people doing the technical work.
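To make the layer-count claim concrete, here is a back-of-envelope sketch of how many management layers a given span of control implies. The 5,000-person headcount and the 1:7 comparison span are illustrative assumptions, not figures from Meta:

```python
import math

def management_layers(headcount: int, span: int) -> int:
    """Layers of management needed above the individual contributors,
    assuming every manager oversees at most `span` direct reports."""
    layers = 0
    group = headcount
    while group > 1:
        group = math.ceil(group / span)  # managers needed at the next level up
        layers += 1
    return layers

# Hypothetical 5,000-person org (illustrative only):
print(management_layers(5000, 7))   # conventional ~1:7 span → 5
print(management_layers(5000, 50))  # flat 1:50 span → 3
```

At a 1:50 span, the same headcount needs roughly two fewer layers of management than a conventional structure, which is the mechanism behind the faster decision-making the memo describes.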
Zuckerberg has been explicit about this philosophy. During Meta’s most recent earnings call, he told investors the company is “elevating individual contributors and flattening teams,” adding that projects that “used to require big teams” can now be accomplished by a single talented person. He pointed to AI-assisted development as the accelerant that makes this possible. The new Applied AI Engineering org is the first visible structural implementation of that thesis at scale.
Nvidia’s CEO Jensen Huang runs a similar philosophy, with over 30 direct reports himself, and it has produced one of the most execution-focused organizations in the industry. Meta is betting the same approach can compress the timeline between research output and deployed product.
Meta Superintelligence Labs and the Avocado Model
The new Applied AI Engineering org does not operate alone. It is a partner to Meta Superintelligence Labs, the unit Zuckerberg established in mid-2025 after a significant strategic move: acquiring a $14.3 billion stake in Scale AI and recruiting Alexandr Wang to join Meta as Chief AI Officer.
Wang, who built Scale AI into the dominant AI data labeling and evaluation company, brings a specific capability set to Meta. Scale AI’s core business was producing high-quality training data and evaluation infrastructure at scale. That is precisely the function Meta Superintelligence Labs needs to train competitive frontier models. The Applied AI Engineering org, under Saba, will feed that process.
The flagship output is a model currently known internally as Avocado. Reports indicate Avocado is designed to compete directly with GPT-5 and Gemini 3 Ultra, the top-tier frontier models from OpenAI and Google DeepMind. The model was originally scheduled for release at the end of 2025 but was delayed into early 2026 after performance testing identified gaps requiring additional training optimization.
Wang has spoken publicly about his vision for what superintelligence means at Meta. The goal is not a research artifact but a consumer product: a personal superintelligence that functions as a digital second brain for individual users. This framing puts Meta’s AI ambitions in direct competition with OpenAI’s ChatGPT franchise and Google’s Gemini integration across Search and Workspace, but with Meta’s unique distribution advantage across WhatsApp, Instagram, and Facebook, which collectively reach more than 3 billion active users.
The AMD Deal: 6 Gigawatts Across Multiple GPU Generations
The organizational restructuring would mean little without the compute to back it up. On Monday, AMD and Meta announced what may be the largest AI chip supply agreement to date: a 6-gigawatt, multi-generation partnership covering AMD Instinct GPUs and EPYC CPUs.
The first deployment under the agreement will use a custom AMD Instinct GPU based on the MI450 architecture, purpose-built for Meta’s workloads at gigawatt scale. AMD’s rack-scale architecture, called Helios, was co-developed with Meta through the Open Compute Project and is designed for exactly this kind of hyperscale deployment. The 6th Gen EPYC CPUs, codenamed Venice, will run alongside the GPUs. Shipments are scheduled to begin in the second half of 2026.
The financial structure of the deal is notable. AMD has issued Meta a performance-based warrant for up to 160 million shares of AMD common stock, structured to vest as specific GPU shipment milestones are achieved. The first tranche vests with the initial 1-gigawatt deployment; subsequent tranches vest as purchases scale toward 6 gigawatts. Additional vesting conditions are tied to AMD stock price thresholds and Meta achieving technical and commercial milestones.
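The tranche mechanics can be sketched as a simple milestone table. Only the 160 million share total and the 1-gigawatt first milestone come from the deal as reported; the per-tranche thresholds and share splits below are hypothetical placeholders, and the additional stock-price and commercial vesting conditions are ignored:

```python
def vested_shares(deployed_gw: float, tranches=None) -> int:
    """Shares vested once deployment milestones are hit.
    Tranche thresholds and splits are hypothetical; only the 160M total
    and the 1 GW first milestone are from the reported deal terms."""
    if tranches is None:
        # (milestone in gigawatts, shares vesting at that milestone)
        tranches = [(1, 20_000_000), (2, 25_000_000), (3, 25_000_000),
                    (4, 30_000_000), (5, 30_000_000), (6, 30_000_000)]
    return sum(shares for gw, shares in tranches if deployed_gw >= gw)

print(vested_shares(1))  # first tranche vests at the initial gigawatt
print(vested_shares(6))  # full 160M shares at the 6 GW target
```

The point of the structure is visible in the shape of the function: AMD’s equity upside is a step function of Meta’s deployment pace, not of the contract signing.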
This structure is unusual in chip supply agreements. It aligns AMD’s equity incentives directly with Meta’s deployment pace and creates a financial bond between the two companies that extends well beyond a standard purchase contract. AMD CEO Lisa Su was direct: the deal places AMD “at the center of the global AI buildout.”
Zuckerberg framed it in terms of Meta’s strategic priorities: “We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence. This is an important step for Meta as we diversify our compute.” The diversification language is deliberate. Meta has historically leaned heavily on Nvidia hardware. The AMD deal signals an intentional effort to reduce single-vendor dependency as GPU supply constraints continue to define competitive timelines across the industry. As explored in The Inference Economy, the shift toward inference-optimized compute is reshaping how AI companies structure their chip relationships.
$135 Billion: What That Number Actually Means
Meta’s capital expenditure guidance for 2026 (up to $135 billion) is nearly double its total capex from 2025. To put that figure in context: it exceeds the annual GDP of most countries, and it is being deployed by a single company into a single strategic objective within a single fiscal year.
The spending breaks across several categories. Data center construction and buildout represents the largest portion, with Meta continuing to expand its global infrastructure footprint to support both training and inference at scale. Chip procurement is the second major bucket; the AMD deal alone could account for tens of billions depending on deployment pace. The remainder flows into software, tooling, energy infrastructure, and talent acquisition.
For AMD and its infrastructure partners, the numbers translate directly into revenue. AMD’s data center revenue grew 39% year over year in the fourth quarter of 2025. Its guidance calls for data center revenue to grow by more than 60% annually over the next three years. Meta’s multi-generation commitment is a foundational component of that outlook.
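As a rough illustration of what 60-percent-plus annual growth compounds to, here is a minimal sketch. The $5 billion base figure is a hypothetical starting point, not AMD’s actual data center revenue:

```python
def project_revenue(base: float, growth_rate: float, years: int) -> list[float]:
    """Compound a base revenue figure forward at a fixed annual growth rate,
    returning one (rounded) value per year."""
    projections = []
    revenue = base
    for _ in range(years):
        revenue *= 1 + growth_rate
        projections.append(round(revenue, 1))
    return projections

# Hypothetical $5B base compounded at the guided 60% annual rate:
print(project_revenue(5.0, 0.60, 3))  # → [8.0, 12.8, 20.5]
```

At that rate, revenue roughly quadruples within three years, which is why a single multi-generation commitment like Meta’s can anchor the entire outlook.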
The spending also signals confidence in Meta’s core advertising business, which continues to generate the cash flows that fund the AI buildout. Meta’s AI-powered ad targeting has been a significant revenue driver over the past two years, creating a feedback loop: AI improves ad performance, ad revenue funds AI investment, AI investment improves ad performance further. That loop is what allows Meta to underwrite $135 billion in capex while remaining profitable.
How Meta’s Strategy Differs From OpenAI and Google
Meta’s superintelligence strategy is structurally distinct from its two primary competitors in ways that matter.
OpenAI raised $110 billion at a $730 billion valuation in March 2026, but its model is built on API revenue, enterprise contracts, and consumer subscriptions. It does not own the distribution network that Meta controls. OpenAI must acquire users; Meta already has them. The challenge for Meta is converting that distribution into AI product engagement at a level that rivals ChatGPT’s mindshare, a harder task than it appears, given how deeply ChatGPT is embedded in user workflows. For a detailed analysis of OpenAI’s capital position, see OpenAI Closes $110B Round at $730B Valuation.
Google has the deepest integration of AI into a consumer product through Search, and its DeepMind research organization rivals any lab in the world for fundamental capability. But Google’s AI strategy is constrained by the cannibalization risk: every query answered by Gemini in Search is a query that does not return an ad click. Meta faces no equivalent structural tension. AI that keeps users more engaged in WhatsApp or Instagram generates more ad impressions, not fewer.
Meta’s core bet is that personal superintelligence at consumer scale, delivered through apps that billions of people use daily, is a more defensible position than model API revenue or search integration. The flat org structure, the AMD chip supply, the Avocado model, and the $135 billion capex are all components of the same thesis.
The Execution Risk
Meta’s ambition is not matched by a flawless execution record in AI. Llama 4, the company’s prior flagship open model, was received poorly when it launched, with benchmarks that underwhelmed relative to GPT-4o and Gemini 1.5. The Avocado delay from late 2025 into early 2026 suggests training challenges remain. Building a flat 1:50 organization at scale has never been done in an AI research context, and there is a meaningful question about whether the model that works for Nvidia’s hardware organization translates to an AI research and engineering function.
The talent picture also introduces risk. Reports published this week noted that despite Meta’s ability to deploy hundreds of billions in capital, it has struggled to retain some key personnel. The compensation packages required to attract top AI researchers are extraordinary, and the competition from OpenAI, Google, Anthropic, and a generation of well-funded startups has never been more intense.
What Meta has that most competitors do not is time and financial endurance. With $135 billion committed and Zuckerberg’s ownership structure insulating the company from short-term investor pressure, Meta can absorb delays and iterate at a pace that would threaten a less capitalized organization. The question is whether that endurance translates into the model quality and product experience needed to challenge the incumbents by the end of 2026.
What to Watch
The next six months will reveal whether Meta’s organizational bet is paying off. The Avocado launch, whenever it arrives, will be the first direct test of what the Superintelligence Labs team can produce at the frontier. The first gigawatt of AMD shipments in H2 2026 will signal whether the chip partnership is on schedule. And the reception of whatever consumer AI products Meta builds on top of that infrastructure will determine whether the $135 billion is generating returns.
The scale of the commitment leaves no room for a quiet retreat. Meta has declared its intention loudly and funded it aggressively. The industry will find out within months whether the execution matches the ambition.