California vs. Washington: The AI Governance Split
Newsom's AI vendor bias order and the Anthropic court injunction create two competing regulatory frameworks for responsible AI procurement in America.
The split between California and Washington over AI policy became concrete last week, when two government actions produced competing regulatory frameworks. On March 26, a federal judge blocked the Pentagon from labeling Anthropic a supply-chain security risk. On March 30, California Governor Gavin Newsom signed an executive order requiring AI vendors seeking state contracts to demonstrate safeguards against bias, discrimination, and surveillance. Together, these moves expose a widening gap between federal AI policy driven by national security priorities and state-level governance focused on accountability and harm prevention.
The practical consequence is that AI companies now face two divergent compliance regimes with contradictory incentives. One penalizes safety constraints. The other mandates them.
What Newsom’s Order Actually Requires
The executive order directs the California Department of General Services and the Department of Technology to develop new vendor certification standards within 120 days. Companies seeking state contracts will need to attest to responsible AI governance, demonstrate policies preventing harmful bias, and prove safeguards against unlawful discrimination, detention, and surveillance. They must also show controls preventing the distribution of illegal content.
The order contains a pointed provision aimed directly at the federal government. If Washington designates a company as a supply-chain risk, California will conduct its own independent assessment. If the state concludes the designation lacks merit, the company can remain a California contractor. That clause transforms California from a passive recipient of federal procurement decisions into an active counterweight.
For context on the federal designation that triggered this clause, see our coverage of the Anthropic vs. Pentagon hearing.
The Federal Injunction That Set the Stage
Four days before Newsom’s order, U.S. District Judge Rita Lin issued a preliminary injunction halting the Pentagon’s supply-chain risk designation of Anthropic. Judge Lin found the government’s actions were likely unlawful and appeared designed to punish the company for exercising its First Amendment rights by publicly criticizing military AI usage. She described the administration’s moves as “Orwellian” and noted that supply-chain risk designations are typically reserved for foreign adversaries, not American companies that refuse to remove safety guardrails.
The injunction prevents the government from enforcing its ban while the lawsuit proceeds, though it was stayed for seven days to allow an appeal. A preliminary injunction does not set binding precedent, but the ruling stakes out a clear legal position: the government cannot use procurement designations as a weapon against companies that maintain AI safety limits.
California’s executive order extends this logic from the courtroom to the procurement office. Where the judge said the federal government overstepped, Newsom is building an alternative framework that rewards the exact behavior the Pentagon tried to punish.
Two Frameworks, Opposite Incentives
The collision is structural, not partisan. The federal approach under the current administration treats AI safety constraints as potential operational liabilities. An AI system that refuses commands in a military context is, from this perspective, a risk. The Pentagon’s position during the Anthropic hearing was that a model unwilling to adapt for defense use cases creates gaps in war-fighting capability.
California’s framework treats the absence of safety constraints as the risk. Under Newsom’s order, a vendor that cannot prove bias safeguards and anti-surveillance controls will not win state business. The state is effectively constructing a positive feedback loop: companies that invest in responsible AI practices gain access to the largest state procurement market in the country, and the revenue that follows rewards further investment in those practices.
The practical impact on AI companies is immediate. A firm like Anthropic, which maintained safety red lines and lost federal contracts as a result, now has California explicitly signaling that those same red lines qualify it for state business. Conversely, a company that strips guardrails to satisfy Pentagon requirements may find itself locked out of California procurement.
This two-track system creates compliance complexity. But it also creates market incentives. California’s GDP exceeds $4 trillion. Its technology procurement budget is substantial. For many AI vendors, state contracts are as strategically important as federal ones.
The Regulatory Stack Underneath
Newsom’s executive order does not exist in isolation. It sits atop a growing stack of California AI laws that took effect on January 1, 2026. The AI Transparency Act (AB 853) and the Frontier AI Transparency Act (SB 53) require detailed labeling, provenance disclosure, and risk governance from AI developers. The Generative AI Training Data Transparency Act (AB 2013) mandates public disclosure of training data sourcing. AB 316 closes the liability gap by preventing developers from claiming an AI system acted autonomously to avoid accountability.
Together, these laws create the densest state-level AI regulatory framework in the United States. The executive order adds procurement leverage to what was already a significant compliance burden for AI vendors operating in California.
Other states are watching. As we covered in our analysis of the global AI regulation race, regulatory fragmentation is accelerating. New York, Illinois, and Colorado have all advanced AI governance proposals in 2026. The White House AI framework attempted to preempt state action, but California’s order suggests that effort has failed. If California’s procurement framework produces measurable results in reducing AI-related harms, it becomes a template for state-level regulation nationwide.
What Comes Next
The immediate timeline is defined by two clocks. The seven-day stay on Judge Lin’s March 26 injunction gives the federal government until early April to seek appellate relief. If that effort fails or is not pursued, the injunction takes effect and the supply-chain risk designation against Anthropic remains blocked for the duration of the lawsuit, removing the federal government’s primary lever against AI companies that maintain safety limits. Meanwhile, California’s 120-day certification development period puts new vendor standards in place by late July 2026.
The deeper question is whether this state-federal split becomes permanent. If the current administration continues to treat AI safety constraints as procurement liabilities, and states like California continue to treat them as requirements, the AI industry will operate under two fundamentally different regulatory philosophies. Companies will need to choose which market to optimize for, or find ways to satisfy both.
For now, the message from Sacramento is clear. Safety is not a liability. It is a qualification.