20 November 2025 · U.S. State

California Returns: SB 53 and the Architecture of Transparency

Fourteen months after the SB 1047 veto, Sacramento has passed a quieter, narrower frontier-AI statute. We read SB 53 — the Transparency in Frontier Artificial Intelligence Act, signed 29 September — as a deliberate answer to the veto memo, and as the most carefully drafted state AI statute to date.

Governor Newsom signed the Transparency in Frontier Artificial Intelligence Act, Senate Bill 53, on 29 September, almost exactly one year after he vetoed SB 1047. The sequencing is no coincidence. SB 53 is the product of the working group the governor convened after the veto, and it reads, on close inspection, as a deliberate answer to the veto memo's specific objections. Where SB 1047 imposed substantive safety obligations on a class of compute-defined models, SB 53 imposes transparency and incident-reporting obligations on a similar class of developers. Where SB 1047 placed a shutdown duty on the developer, SB 53 places a protocol-disclosure duty. Where SB 1047 exposed developers to liability for catastrophic harm caused by a covered model, SB 53 relies on the Attorney General's existing enforcement authority over the disclosure obligations themselves. The result is a narrower statute, both in scope and in legal force. It is also, in our view, a more durable one.

The signing has been received with the usual mix of acclamation and complaint, much of it from the same constituencies that took positions on SB 1047. We think the bill deserves a more careful reading than it has so far received. SB 53 is the first state-level frontier AI statute to pass into law and the first comprehensive U.S. statute of any kind to impose binding obligations on developers of frontier models. Its provisions will shape industry practice, federal preemption arguments, and the next wave of state legislation in capitals outside California. We address the bill's principal features in this viewpoint and identify the four implementation questions that will determine whether it works.

What SB 53 requires

The bill applies to large frontier developers operating in California: developers that have trained at least one frontier model using computing power greater than 10²⁶ integer or floating-point operations and that have annual gross revenues above five hundred million dollars. The revenue threshold is the principal structural innovation relative to SB 1047. By layering a revenue test on the compute threshold, the bill excludes academic and small-developer projects whose training runs cross the compute line but whose commercial operations do not justify the compliance burden. The threshold was set after substantial negotiation with the open-source community and with research universities, and it is one of the reasons opposition to the bill was narrower than opposition to SB 1047.
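For readers who prefer to see the two-pronged test spelled out, the following is a minimal illustrative sketch of the applicability logic as we read it. The constant and function names are our own shorthand rather than terms used in the statute, and the sketch deliberately ignores questions the bill's text addresses in more detail, such as how revenue is measured across affiliated entities.

```python
# Illustrative sketch only, not statutory text: the "large frontier developer"
# test as described above is conjunctive -- a developer is in scope only if
# BOTH the compute and revenue thresholds are met. Names are our own shorthand.

COMPUTE_THRESHOLD_OPS = 1e26          # training compute: integer or floating-point operations
REVENUE_THRESHOLD_USD = 500_000_000   # annual gross revenues

def is_large_frontier_developer(largest_training_run_ops: float,
                                annual_gross_revenue_usd: float) -> bool:
    """Both prongs must be satisfied; crossing either threshold alone is not enough."""
    return (largest_training_run_ops > COMPUTE_THRESHOLD_OPS
            and annual_gross_revenue_usd > REVENUE_THRESHOLD_USD)

# A research lab whose training run crosses the compute line but whose revenue
# does not is out of scope; a large commercial developer crossing both is in scope.
assert not is_large_frontier_developer(3e26, 20_000_000)
assert is_large_frontier_developer(3e26, 2_000_000_000)
```

The conjunctive structure is what does the work described in the paragraph above: it is the revenue prong, not the compute prong, that keeps academic training runs out of scope.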

For developers within scope, the bill imposes four principal kinds of obligation. First, the developer must publish a Frontier AI Framework — a written document describing its approach to assessing and managing catastrophic risk from its frontier models, including how it incorporates national and international safety standards, how it conducts evaluations, what cybersecurity protections it maintains for unreleased model weights, how it manages information security in respect of safety incidents, and how it identifies and responds to dangerous capabilities. The Framework must be updated annually and after any material change to the developer's practices.

Second, the developer must publish, for each new frontier model, a transparency report — a public document setting out the model's identifying information, the date of release, the developer's intended uses and known restrictions, the principal results of safety evaluations conducted prior to release, and a summary of any catastrophic-risk assessment that was conducted. The transparency report obligation does not require disclosure of training data details beyond what the developer chooses to disclose, and it does not require disclosure of detailed evaluation methodologies. It is closer in design to the model card practice that several frontier laboratories have already adopted than to the EU AI Act's training-data summary obligation.

Third, the developer must report critical safety incidents to the California Office of Emergency Services within fifteen days of becoming aware of them. Critical safety incident is defined in the bill to include unauthorized access to model weights resulting in loss of control, an AI model engaging in autonomous action without user authorization that causes injury to a person or property damage exceeding specified thresholds, materialization of a catastrophic risk such as the use of a frontier model to design or synthesize a weapon of mass destruction, and the loss of control of a frontier model resulting in death, bodily injury, or property damage. The Attorney General is to publish anonymized incident summaries annually.

Fourth, the bill establishes whistleblower protections for employees and contractors of frontier developers who report, in good faith, that the developer's conduct presents a specific and substantial danger to public health or safety or constitutes a material false or misleading statement in the Frontier AI Framework or in a transparency report. The protections include the conventional anti-retaliation provisions and a private right of action for retaliation, but they do not extend to the bill's other substantive provisions. There is no general private right of action for breach of the Framework or transparency obligations.

SB 53 is a transparency statute, not a safety statute. It asks developers to publish what they are doing about catastrophic risk. It does not tell them what to do.

What the bill does not do

Three substantive choices distinguish SB 53 from SB 1047 and from the broader category of European-style frontier regulation. First, the bill imposes no substantive safety duty. There is no obligation to take particular precautions before releasing a frontier model; no reasonable-care standard for catastrophic harm; no pre-release approval requirement; no shutdown obligation. The bill takes a position about the importance of transparency about safety practices, but it takes no position about what those practices should be.

Second, the bill creates no Frontier Model Board or analogous administrative body. Enforcement is by the Attorney General, who may bring civil actions for failure to publish a Framework, failure to publish a transparency report, failure to report a critical incident, or retaliation against a whistleblower. Civil penalties are capped at one million dollars per violation. There is no separate licensing or registration regime.

Third, the bill creates no private right of action for catastrophic harm. Persons injured by the operation of a frontier model retain whatever causes of action they have under existing California law — product liability, ordinary negligence, etc. — but the bill itself does not create a new cause of action. This was a deliberate drafting choice and one of the most-negotiated provisions of the bill; several plaintiff-side organizations argued for a private right of action and ultimately accepted its omission as the price of the governor's signature.

These choices, taken together, place SB 53 at the lighter end of the spectrum of plausible frontier AI statutes. The bill's substantive demands on developers are, in our assessment, well within the range of practices most frontier laboratories have already adopted voluntarily. Anthropic publishes a Responsible Scaling Policy that maps closely to the Framework requirement. OpenAI publishes a Preparedness Framework that does the same. Google DeepMind publishes a Frontier Safety Framework. The transparency report requirement is similar in shape to existing model cards. The critical incident reporting obligation is novel in its statutory form, but the substantive practice of incident response is already widely adopted within the industry.

This is not, we think, a coincidence. The bill was drafted in close consultation with frontier developers who had publicly accepted the substantive obligations and whose principal objection to SB 1047 was its enforcement architecture. By moving from substantive duties to transparency obligations, the bill converts what was effectively voluntary practice into law without expanding the practice's footprint. Whether that is a virtue or a vice depends on one's view of the practice the bill codifies.

The federal preemption posture

The bill is unusually attentive to the preemption question. Section 4 of the enacted statute contains a savings clause providing that compliance with substantively equivalent federal disclosure or reporting requirements relieves a developer of the corresponding state-law obligation, and directs the Attorney General to publish, within ninety days of the effective date, a list of federal requirements that satisfy the savings clause. This is a deliberate response to the AI Action Plan's procurement-based regulatory posture: if the forthcoming FAR rule or any other federal instrument imposes substantively equivalent disclosure obligations, the state-law duty is satisfied by compliance with the federal one.

The savings clause should not, however, be read as a concession that federal action would preempt the bill. The bill's structural design — transparency obligations on California-operating developers, enforced by the California Attorney General — is more difficult to preempt under existing doctrine than a substantive safety regime would be. State transparency requirements have generally survived preemption challenges in adjacent regulatory areas (financial services, pharmaceuticals, environmental regulation). We expect challenges, and we expect the bill to survive them in substantial part.

The administration's likely response is the funding-conditions mechanism described in the AI Action Plan, which we discussed in our 2 September viewpoint. The OMB circular implementing that mechanism is in draft and has not yet been published; the categories of state AI law that will be targeted by funding conditions are not yet defined. SB 53 may or may not be on the eventual list. If it is, the litigation we noted in the prior viewpoint will accelerate.

What SB 53 portends for other states

The political economy of SB 53 is reproducible. The bill's coalition included labor (SEIU California, the California Federation of Labor), a substantial portion of the academic AI safety community, several frontier developers (publicly Anthropic, with conditional or tacit acceptance from others), and the principal civil-rights organizations active in California. The opposition included some open-source community organizations, a small group of free-market policy organizations, and a subset of the venture capital industry. The principal frontier developers' move from opposition (in respect of SB 1047) to mixed or supportive postures (in respect of SB 53) reflects the narrower scope of the bill and the absence of a private right of action.

Other state legislatures are watching. Illinois, New York, and New Jersey each have active proposals that could be amended in the 2026 session to track SB 53's structure. Texas has begun preliminary work on its own transparency statute, with a different political coalition. Colorado's existing SB 24-205 addresses a different regulatory question (deployer-side algorithmic-impact assessment), but Colorado's legislature is treating it as a foundation for further, developer-side action.

Our expectation is that, within twelve to eighteen months, three to five additional states will have passed statutes closely modeled on SB 53. The patchwork problem that several federal commentators have warned about will, in our view, prove less severe than feared for two reasons. First, the SB 53 model is replicable, and states tend to adopt statutes that have already passed elsewhere; the resulting patchwork will be a patchwork of similar statutes, not a patchwork of conflicting ones. Second, the savings clause in SB 53 is itself a model: if it is widely adopted, federal supersession of the state regime can be achieved through a single federal disclosure standard.

The four implementation questions

We close with the four questions that will determine whether SB 53 works in practice.

First, what counts as a sufficient Frontier AI Framework? The bill provides a list of topics the Framework must address but does not specify how much detail is required. A Framework consisting of three pages of generalities is probably not sufficient; a Framework consisting of fifty pages of operational detail is plainly sufficient. The space between those poles is large, and the Attorney General's enforcement posture in the first year will define it. We expect the office to publish guidance in the first quarter of 2026.

Second, what counts as a critical safety incident? The statutory definition is precise on its face but contains several terms — loss of control, autonomous action without user authorization, materialization of a catastrophic risk — whose application to specific factual patterns is far from obvious. The first incident reports under the bill will, we expect, exercise these definitions in ways the drafters did not specifically anticipate.

Third, how will the whistleblower protections interact with confidential information protections that frontier laboratories impose by contract on employees and contractors? The bill is careful to protect disclosures to specified channels (the Attorney General, the Legislature, designated internal channels), and it does not authorize disclosures to the press or to social media. The interaction between the bill and the standard confidentiality agreements used at frontier laboratories is, however, not fully specified. We are advising clients to revise their confidentiality agreements to incorporate the bill's protections explicitly.

Fourth, how will the bill interact with the EU AI Act's general-purpose model provisions, which apply to many of the same developers and which have been in application since 2 August 2025? The two regimes are not duplicative — the EU regime is more demanding on training-data disclosure; the California regime is more demanding on safety-framework publication — and the operational task for multinational developers is to produce a single compliance posture that satisfies both. The work is tractable but not trivial, and we are engaged on it with several clients.


Concluding observation

SB 53 is, in our reading, the most consequential AI statute passed by any U.S. legislative body to date. It is also a deliberately narrow statute, structured to survive the political and constitutional pressures that defeated SB 1047. The bill's substantive demands on developers are modest by international standards; its transparency architecture is more demanding than anything that has been enacted at the federal level; and its enforcement posture is calibrated to the institutional capacity of the California Attorney General's office.

For clients, the implications are concrete. Large frontier developers operating in California should be drafting their Frontier AI Frameworks now; the bill becomes effective on 1 January 2026, and the Framework must be published before any covered model is offered. The transparency report obligation applies to every new frontier model; the documentation work for that obligation should be built into the model release process, not treated as a post-release compliance task. The incident reporting obligation should be incorporated into existing incident response runbooks. The whistleblower protections will require changes to standard employment and contractor agreements.

A year ago, this firm wrote that the SB 1047 veto was not the end of state-level frontier AI legislation but the end of one approach to it. SB 53 is the next approach. We expect it to be widely imitated, durably enforced, and consequential for industry practice in ways that the federal landscape, for the foreseeable future, will not match. Clients planning around the federal Action Plan should not neglect the state regime that has formed in parallel. The federal lever is procurement. The state lever is transparency. They are not substitutes.