After the Veto: SB 1047 and the Limits of State-Level Frontier Governance
Governor Newsom's veto of California's frontier safety bill, announced on 29 September, turned on a narrower argument than the headlines suggested. We read the memo closely and ask what it means for the wave of state legislation coming next.
On 29 September, Governor Newsom vetoed Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, ending nine months of the best-organized state-level frontier safety effort the United States has seen. The decision was widely anticipated — the governor had signaled his concerns during the legislative session — but it has been less carefully read than it deserves. The veto memo is a short document, but it is a precise one. It does not reject the idea of state regulation of advanced AI. It rejects the architecture that SB 1047 adopted to do it. The distinction will matter for what Sacramento, and other state capitals, attempt next.
We think the SB 1047 episode is consequential for three reasons. First, it produced the most detailed legislative record on what state-level frontier governance looks like when it is taken seriously, and that record will be the starting point for the next iteration. Second, the veto memo's reasoning identifies, in a careful way, the architectural defect that the next bill will need to fix. Third, the political coalitions that formed around the bill — for and against — are still in place, and they will reassemble, with adjustments, in the 2025 session and thereafter. The veto is not the end of state frontier AI legislation. It is the end of one approach to it.
What the bill did
SB 1047 applied to a class of covered models defined by computational scale. As enrolled, the bill covered models whose training used a quantity of computing power greater than 10²⁶ integer or floating-point operations, the cost of which exceeded one hundred million dollars when calculated using the average market price of cloud compute at the start of training. The threshold was designed to capture the frontier as it existed in 2024 — the largest models of OpenAI, Anthropic, Google DeepMind, Meta, and a small number of others — and not to reach the long tail of smaller systems. A separate, lower threshold applied to derivative models produced by fine-tuning a covered model where the fine-tuning compute exceeded an annually updated figure that, in the enrolled version, was set at three times 10²⁵ operations.
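Expressed as arithmetic, the trigger was deliberately simple. The sketch below restates the coverage test described above in code; the numeric thresholds are the ones the enrolled bill used, but the function, the field names, and the handling of boundary cases are our illustrative simplifications of the statutory text, not a compliance tool.

```python
# Minimal sketch of SB 1047's coverage trigger as summarized above.
# The thresholds are the enrolled bill's figures; names and structure
# are illustrative, not statutory text.

from dataclasses import dataclass

BASE_COMPUTE_THRESHOLD_OPS = 1e26        # integer or floating-point operations
BASE_COST_THRESHOLD_USD = 100_000_000    # average market price of cloud compute at start of training
FINETUNE_COMPUTE_THRESHOLD_OPS = 3e25    # enrolled figure for derivative models; subject to updating


@dataclass
class TrainingRun:
    total_ops: float                        # compute used in training or fine-tuning
    compute_cost_usd: float                 # cost at average market cloud prices
    is_finetune_of_covered_model: bool = False


def is_covered(run: TrainingRun) -> bool:
    """Rough coverage test: the trigger is the scale of training, not capability."""
    if run.is_finetune_of_covered_model:
        return run.total_ops > FINETUNE_COMPUTE_THRESHOLD_OPS
    return (run.total_ops > BASE_COMPUTE_THRESHOLD_OPS
            and run.compute_cost_usd > BASE_COST_THRESHOLD_USD)


# A 2024-frontier-scale run trips the threshold; a typical smaller run does not.
print(is_covered(TrainingRun(total_ops=2e26, compute_cost_usd=2.5e8)))  # True
print(is_covered(TrainingRun(total_ops=5e24, compute_cost_usd=8e6)))    # False
```

The point of the exercise is the one the drafters made: the test turns entirely on quantities a developer already records, which is what made it auditable.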
For covered models, developers would have been required to take three principal kinds of action before training began, during training, and prior to commercial use or public release. They would have had to implement and maintain a written safety and security protocol describing how the developer would assess and manage the risk of critical harm from the model; subject the model to testing sufficient to satisfy a duty of reasonable care that the model not pose an unreasonable risk of causing or materially enabling a critical harm; and implement a full-shutdown capability — the kill switch that became the bill's most-mocked provision, although the bill's drafting in fact required shutdown only of computing under the developer's control and recognized that already-released model weights could not be recalled.
Critical harm, in the bill, was carefully defined. It meant the creation or use of a chemical, biological, radiological, or nuclear weapon resulting in mass casualties; a cyberattack on critical infrastructure causing mass casualties or at least five hundred million dollars in damages; harms of the same magnitude caused by a model acting with limited human oversight; or other grave harms to public safety and security of comparable severity. This is, by any reasonable reading, a narrow category. The bill was not addressed to the ordinary harms — defamation, bias, dignitary injury, job displacement — that animate much of the contemporary AI policy conversation. It was addressed to the small set of worst-case scenarios that animate the safety conversation inside the frontier laboratories themselves.
What the veto said
The veto memo runs to roughly nine hundred words. It is worth reading in full. Its essential argument is not that frontier AI does not warrant safety regulation; the memo expressly concedes that it does. The argument is that SB 1047 applied its safety requirements on the basis of compute thresholds — that is, by reference to the scale of training rather than to the capabilities or deployment context of the resulting model. In the governor's words, the bill applies stringent standards to even the most basic functions, so long as a large system deploys it, and it does not take into account whether a model is deployed in high-risk environments, involves critical decision-making, or uses sensitive data.
That is, on its face, a reasonable critique. It is also a somewhat motivated one. The bill's drafters were explicit about why they used a compute threshold: it is auditable, it is forward-looking, and it does not require regulators to second-guess a developer's claims about a model's capabilities. The alternatives — capability-based triggers, use-case triggers, deployment-context triggers — each involve a regulator making judgments that frontier developers are far better placed to make. The compute threshold was a deliberate choice to put the regulatory burden in a place where it could actually be enforced.
The veto memo says that compute is the wrong proxy for risk. It does not say what the right proxy is.
The memo gestures at a more sophisticated approach but does not specify it. It refers to the Working Group on AI Frontier Models that the governor announced alongside the veto, and it commits the administration to advancing legislation based on empirical, science-based analysis of capability and attendant risk. That formulation is doing a lot of work. The empirical study of capability and risk is precisely the project of the AI Safety Institutes, of the OECD AI working party, and of the frontier laboratories' own evaluation teams. None of those bodies has produced a regulatory-grade taxonomy of capability and risk that could be operationalized in a statute. It is unclear how a California legislative drafting team in 2025 will produce one.
The coalitions
One useful effect of the SB 1047 process was that it forced the AI policy community to take public positions. Among the frontier developers, Anthropic adopted a conditional support posture after the August amendments that removed the most contentious provisions (the reasonable assurance standard was softened to reasonable care, the Frontier Model Division was replaced with a Board of Frontier Models within the existing Government Operations Agency, and personal criminal liability for executives was struck). OpenAI, Meta, and Google opposed, with varying degrees of vehemence. xAI's principal shareholder publicly supported. Outside the frontier labs, the academic AI safety community was largely in favor; the open-source community was largely against; and the tech-policy commentariat was split in a way that did not map cleanly onto any prior alignment.
The civil-society and labor coalitions that the bill attracted are the ones that will shape what comes next. The California Federation of Labor and SAG-AFTRA both supported. The Electronic Frontier Foundation opposed. Several of the largest immigrant-rights and civil-rights organizations took no position. The shape of this coalition suggests that the next iteration of state AI legislation will be less narrowly focused on catastrophic risk and more attentive to deployment harms — content moderation, labor displacement, civil-rights impacts — that already animate state-level activity in adjacent areas.
Why this matters beyond California
The California legislature is not the only state moving on AI. Colorado passed SB 24-205, an AI consumer-protection statute, in May; Illinois has amended its Human Rights Act to address automated decision-making in employment; New York and New Jersey have active proposals. The pattern of state activity is, however, divided into two streams. One stream — the Colorado model — addresses the deployment of AI in consequential decisions about housing, employment, and credit, and applies obligations to deployers rather than developers. The other stream — the SB 1047 model — addresses the development of frontier models themselves, and applies obligations to a small number of large developers. These two streams have largely different regulatory rationales, different constituencies, and different technical demands.
The veto effectively pauses the second stream for the next legislative cycle. The first stream is unaffected and, we expect, will continue to expand. For deployers of AI in consumer-facing applications, the compliance map is now a patchwork of state algorithmic-impact statutes that is already more demanding, in cumulative effect, than the federal landscape. We will return to this question in a separate viewpoint; the SB 1047 episode should not distract clients from the deployer-side regulation that is moving without much resistance.
What the next California bill will probably look like
We do not yet have visibility into the drafting that the governor's working group is doing, but the veto memo gives reasonably strong indications of the shape of the next bill. It will likely retain a developer-side compliance regime — the governor did not signal that he objects to developer-side regulation in principle, only to the particular trigger SB 1047 used — but the trigger will probably be hybridized. We expect a structure in which a capability-based or deployment-context criterion is added to the compute threshold, perhaps in the form of a designation power vested in an administrative body. We also expect the duties to be more closely modeled on existing model-evaluation practice at the U.S. AI Safety Institute and the U.K. AI Safety Institute, which would reduce the bill's exposure to the criticism that it is requiring assessments that no one knows how to do.
We expect the new bill to retain whistleblower protections, which were among the least controversial provisions of SB 1047 and which addressed a real gap in California labor law for employees of frontier laboratories. We expect transparency provisions — model evaluation result disclosures, training data summaries, potentially modeled on the EU AI Act's Article 53 requirements — to be more prominent. And we expect the kill-switch language to be either removed or rewritten in terms that make clear that the obligation runs only to pre-release weights and to the developer's own infrastructure.
Three observations for clients
First, the federal preemption argument that some frontier developers made against SB 1047 has not gone away, and it will be made again against the next bill. We think the argument is weaker than its proponents claim. There is no federal AI statute to preempt against, the Biden administration's Executive Order 14110 does not occupy the field in a manner that would support implied preemption, and the Supreme Court's recent administrative-law decisions have, if anything, made implied preemption arguments harder. The argument is best saved for situations in which it has a plausible legal foundation; SB 1047, in our view, did not present one.
Second, the SB 1047 process produced a usable record of what frontier developers can and cannot commit to in writing. The voluntary commitments many developers made to the White House in July 2023, the safety policies several have published since, and the evaluation protocols developed with the U.K. AI Safety Institute all exist in public. The next iteration of state legislation will use that record. Developers should expect, and prepare for, legislation that asks them to do in California what they have publicly committed to doing voluntarily.
Third, federal action is now less likely, not more. The veto removes some of the pressure on Congress to act, and the November election will, regardless of outcome, slow the federal AI regulatory process for some months. State legislatures will fill the gap. The result is a near-term compliance environment in which the most demanding developer-side AI obligations applicable to U.S. operations may come not from Washington but from Sacramento, or Springfield, or Albany. That is an awkward equilibrium for a technology whose deployment is national. It is also the equilibrium our clients should plan for.