Meta Avocado: AI Model Delay Exposes the Capability Gap
Meta's Avocado model has been delayed to May after failing internal benchmarks against Gemini 3.0. The company is eyeing a Gemini license as a stopgap amid a planned AI spend of up to $135 billion.
Avocado, Meta’s next-generation frontier AI model, has hit a wall. The model was on track for a March 2026 release. It will now arrive no earlier than May, and possibly later, after falling short of key internal benchmarks against rivals from Google, OpenAI, and Anthropic.
The delay is more than a scheduling slip. It is a signal that building and sustaining a frontier AI model has become brutally difficult, even for a company that plans to spend up to $135 billion on AI infrastructure in 2026 alone.
What Went Wrong with Avocado
Avocado is a text-based foundation model developed by Meta’s Superintelligence Labs (MSL), the division Meta built out aggressively over the past 18 months in a costly effort to close the capability gap with OpenAI and Google. The model’s internal performance has been a moving target.
According to reporting by the New York Times, Avocado outperforms Meta’s previous generation of models and did better than Google’s Gemini 2.5, released in March 2025. That is a meaningful result. The problem is that Gemini 2.5 is no longer the benchmark that matters. Compared to Gemini 3.0, released in November 2025, Avocado falls short on reasoning, coding, and writing tasks.
That is a roughly six-month capability gap Meta cannot paper over with a press release. The decision to push the release date back reflects internal acknowledgment that shipping a model that trails the current state of the art would do more damage than a delay.
The Benchmark Gap: Where Meta Stands Against Competitors
The frontier moved faster than Meta expected. Google’s Gemini 3.0 landed in late 2025 with performance gains that raised the bar significantly. OpenAI has continued iterating on its own models, retiring GPT-5.1 and pushing into later generations. Anthropic has built out its enterprise coding capabilities at speed, with Claude Code now generating $2.5 billion in annualized run-rate revenue.
Meta’s Llama models remain competitive in the open-weight space, but Avocado was supposed to be something different: a proprietary frontier model that could compete head-to-head with the closed models from the leading labs. It was the centerpiece of Mark Zuckerberg’s public pledge, made in mid-2025, that Meta’s models would push the frontier within the year.
That timeline has now slipped. More telling is the fruit basket waiting in the pipeline: Mango, Meta’s planned image and video generation model meant to rival OpenAI’s Sora, and Watermelon, already positioned as Avocado’s successor. Even Llama 4’s most powerful variant, internally called Behemoth, has been delayed for months. The pattern across multiple model lines suggests this is not one isolated problem but a broader challenge in Meta’s training and evaluation process.
The Gemini Licensing Gambit: Strategic Admission or Tactical Pivot?
The most striking detail to emerge is that Meta’s AI division leadership has discussed temporarily licensing Gemini from Google to power Meta’s AI products while Avocado catches up. No decision has been made, but the fact that the conversation is happening is notable.
Meta licensing its primary competitor’s model would represent a significant strategic admission. It would mean that Meta’s own AI division cannot, at this moment, deliver the model quality its products require. The consumer-facing AI features across Instagram, WhatsApp, and the Meta AI assistant all depend on a capable foundation model. Falling behind in model quality means falling behind in product quality.
This dynamic illustrates a broader pressure point in the AI race: the cost of staying at the frontier is not just capital investment. It is also the cost of the gap between your current model and the leader’s, measured in user experience, developer trust, and enterprise contracts. Yann LeCun’s new venture AMI Labs, launched with over $1 billion in funding, is betting that world models represent an entirely different architectural path, a reminder that the frontier is not even a fixed target.
The Broader Stakes: $130 Billion and the Limits of Spending
Meta projects capital expenditure of $115 to $135 billion for 2026, nearly double last year’s $72.2 billion. That is an extraordinary financial commitment to AI infrastructure, chips, and compute. The Avocado situation underscores a critical truth: model quality is not purely a function of money spent. Training decisions, data quality, architectural choices, and evaluation rigor all determine whether a model clears the benchmark or falls short of it.
The broader AI infrastructure arms race has already produced skepticism about whether the capital being deployed will translate into proportional capability gains. Meta’s delay reinforces that concern. Billions in compute did not prevent Avocado from landing in a performance range that trails Gemini 3.0.
For Meta, the path forward requires more than a longer training run. It requires closing the architectural and evaluation gaps that allowed a six-month-old Google model to set a bar Avocado cannot yet clear. The Gemini licensing discussion, if it materializes, would buy time. But borrowed capability is not the same as built capability, and the frontier will not wait while Meta rents access to it.