17 February 2025

From Bletchley to Paris: The Safety Consensus Comes Apart

In the space of three weeks, the international project that began at Bletchley Park in November 2023 met its first serious test and did not survive it. We work through what the Paris summit, the rescission of Executive Order 14110, and the arrival of DeepSeek R1 mean for clients planning around an international AI regime that no longer exists in the form it did six months ago.

The window between 20 January and 11 February of this year will be read, by anyone writing the history of international AI governance, as the moment the Bletchley consensus broke. Within those three weeks, the new U.S. administration rescinded President Biden's Executive Order 14110 on the day it took office; DeepSeek released R1, a model whose capabilities and training costs surprised even well-informed observers; and the AI Action Summit convened in Paris under a French presidency whose explicit aim was to pivot the summit track from safety toward action. The summit closed with a declaration that the United States and the United Kingdom declined to sign. The breakdown was, by every available measure, complete.

We do not think this is a passing political weather event that the next U.S. administration will undo. The summit track was a fragile construction even in its best months. It depended on three things being simultaneously true: that the leading frontier developers were primarily American and British and willing to accept testing arrangements with government safety institutes; that the European Union's regulatory ambition could be coordinated with rather than against those arrangements; and that catastrophic-risk scenarios involving frontier models were plausible enough to support sustained diplomatic attention. The January–February events have weakened all three premises. The track will need rebuilding from different foundations, and our clients should plan for the period in which that rebuilding has not yet happened.

The rescission

Executive Order 14110, signed in October 2023, was the most comprehensive U.S. federal action on AI to date. It used the Defense Production Act to require developers of dual-use foundation models above specified compute thresholds to report training runs and the results of red-team safety testing to the Commerce Department; it directed the National Institute of Standards and Technology to develop guidance on AI red-teaming, content authentication, and synthetic-content labeling; it imposed reporting requirements on infrastructure-as-a-service providers in respect of foreign training customers; and it directed almost every federal department and agency to undertake specified AI-related work, from the Office of Management and Budget's procurement guidance to the Department of Energy's evaluation of frontier-model energy demands.

On 20 January 2025, the new administration revoked Executive Order 14110 as part of a blanket rescission of the prior administration's executive actions; three days later, on 23 January, it signed the Executive Order on Removing Barriers to American Leadership in Artificial Intelligence, which sets the replacement policy direction. The revocation was prospective: NIST's published guidance documents continue to exist as guidance, but their directive force as agency-required outputs of an executive order is removed; the AI Safety Institute that NIST had established remains in place administratively, but its remit and budget posture are unclear; and the dual-use foundation model reporting requirements that the Commerce Department had begun to operationalize are no longer required. The new order directs the Office of Science and Technology Policy and the National Security Council to develop, within 180 days, an action plan to achieve and maintain global AI dominance.

We will write separately on the implications of the forthcoming action plan, which is the operative document to watch over the coming months. The immediate point is that, with EO 14110 rescinded, the U.S. federal landscape no longer contains the central instrument that anchored the Bletchley-track diplomatic position. The American delegation that traveled to Paris no longer represented an administration with a published frontier safety policy. It represented an administration that had repudiated the previous one's frontier safety policy and had not yet replaced it.

The arrival of R1

DeepSeek-R1 was released on 20 January, the same day as the rescission. The coincidence of timing is, on our reading, genuine — there is no evidence the release was timed to U.S. political events, and the model's release notes indicate that the work had been completed weeks earlier — but the simultaneity was, politically, significant. R1 is a reasoning-trained model whose published benchmarks place it in the same broad capability band as the leading American frontier reasoning models released the previous quarter, at a small fraction of the disclosed training cost. The model and a permissively licensed weight distribution were released by an entity headquartered in Hangzhou.

It is too soon to assess the long-term significance of R1 with confidence. The published cost figures have been questioned in detail by people whose technical assessment we respect; the relevant counterfactual — what an equivalently resourced effort by an American or European laboratory would have cost — is not knowable. What is clear is that the release weakened a specific premise of the international safety architecture: that frontier capability would emerge first from a small number of laboratories operating in jurisdictions whose governments cooperated through the AI Safety Institute network. The premise survived the existence of other Chinese frontier programs because those programs had not, in public, drawn level with the U.S. frontier. R1 was the first widely circulated artifact suggesting that they had, or had come close enough that the remaining gap could no longer serve as the basis for regulatory strategy.

The Paris summit

The AI Action Summit, held in Paris on 10–11 February, was organized under a French presidency that had announced, well before the U.S. transition, that it would refocus the track from safety to action. The summit's declaration — titled the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet — emphasized development access, environmental sustainability, labor transitions, and multilateral cooperation. It contained relatively little on frontier safety. The declaration was signed by sixty-one parties including the host, the European Commission, India, China, and a substantial majority of attendees. It was not signed by the United States or the United Kingdom.

The U.S. position was set out by the Vice President in a speech that is worth reading carefully. The speech made three claims: that excessive AI regulation could kill an industry just as it is taking off; that the United States would not accept arrangements that constrained American frontier laboratories in ways that competitors would not face; and that the export of American AI technology would proceed on terms set by the United States, not by multilateral processes. The U.K. position, which received less press attention, was narrower and more technical: the British delegation indicated that the declaration lacked practical clarity on global governance and did not adequately address national security concerns, particularly in respect of the governance of advanced AI systems used for security and defense purposes.

Reading the speeches and the declaration together, we think the U.S. and U.K. refusals were doing slightly different work. The U.S. refusal was a stake in the ground against the legitimacy of the multilateral track itself. The U.K. refusal was a more specific complaint about a drafting choice. Both, however, signaled that the Bletchley consensus — under which the two governments whose territory hosted the leading frontier laboratories were also the principal organizers of the summit track — was no longer operative.

What this means for the AI Safety Institute network

One of the more durable products of the Bletchley track was the international network of AI Safety Institutes, now numbering nine member jurisdictions. The network's practical work — pre-deployment evaluation of frontier models against shared evaluation suites; secondment of technical staff between institutes; joint research on evaluation methodology — has been, by any reasonable measure, the most concrete output of the international AI governance project to date.

That work is now under uncertain leadership. The U.S. AI Safety Institute's status under the new administration is unresolved at the time of writing. The institute's director resigned in early February. Several of its consortium agreements with frontier developers were executed under EO 14110 and may need to be renegotiated on a different legal basis. The U.K. AI Safety Institute was renamed in February as the AI Security Institute, with a remit that emphasizes national-security applications of advanced AI; its evaluation work continues, but its public posture is now framed less around international safety cooperation and more around domestic security. The EU AI Office, which is not formally part of the institute network but has overlapping functions, has filled some of the diplomatic space; the Commission has signaled that the institute network's evaluation work could be drawn upon to inform Article 55 systemic-risk assessments under the AI Act. Whether the U.S. institute will participate in that arrangement is not yet clear.

The European posture

The European Union arrived in Paris with the AI Act's first substantive obligations — the Article 5 prohibitions — having entered into application eight days earlier, on 2 February. The Commission had also published the principal AI Office guidelines on prohibited practices in the days immediately preceding the summit. The European posture at the summit was therefore the most concretely regulatory of any major delegation: the EU is the only large jurisdiction with a binding, in-force, horizontal AI statute, and the only one whose enforcement is now operative rather than prospective.

The Commission used the summit to signal flexibility on the implementation of the AI Act's general-purpose model provisions, which apply from 2 August. The Code of Practice drafting process is in its third plenary round; the Commission has indicated that it expects the Code to be finalized in late April or early May, with the AI Office then assessing it for adequacy as a presumption-of-conformity instrument. The political question that the summit did not resolve is whether the U.S. frontier laboratories that participate in the Code drafting process will sign the Code when it is finalized. Our reading of the conversations around the summit is that several of them are now reconsidering signature, in light of the U.S. administration's posture and the possibility that signing a European Code of Practice could be characterized as capitulation to extraterritorial regulation.


What we are advising

For frontier laboratories with significant EU activity, our advice is unchanged in substance: the AI Act's obligations run with placing on the market, and the political uncertainty around the Code of Practice does not relieve providers of the underlying statutory duties. Non-signature of the Code is a legitimate option, but it shifts the compliance burden from the Code's presumption-of-conformity mechanism to direct demonstration of compliance with the regulation. We think that path is harder, not easier, and we are advising clients accordingly. We expect, on the current trajectory, that most of the major non-EU developers will end up signing — possibly after some public negotiation over specific provisions, particularly the training-data summary template — but the political risk of a fragmented signature pattern is now real.

For developers whose deployment is principally in the United States, the immediate planning problem is the 180-day window in which the new administration is to produce its action plan. The relevant agencies for that drafting are OSTP, the NSC, and the National Economic Council. The drafting process is, on the information available, less open to civil-society and academic input than the EO 14110 drafting was. We are advising clients to invest in direct engagement with the drafting principals where they have credible access, and to prepare for an output document that is likely to emphasize industrial-policy levers — export controls, federal procurement, research funding allocation — rather than the safety-evaluation and reporting architecture that characterized EO 14110.

For clients in the institute network, our advice is to keep the technical work going at the working level even if the political superstructure is unstable. The pre-deployment evaluation programs the institutes have built are the most concrete public goods the Bletchley track produced. They can continue under different political framings — as cybersecurity cooperation, as standards development, as bilateral technical exchange — and they are worth preserving even at the cost of public visibility.

The longer arc

We are not pessimistic about the long arc of international AI governance, but we are realistic about the current moment. The Bletchley track was always going to need a reset; it was built on a premise of frontier-developer geography that is no longer accurate and on a premise of transatlantic policy alignment that the November election foreclosed. What replaces it will, we expect, be a more modular architecture: a European regulatory pole anchored by the AI Act and its institutional appendages; a U.S. industrial-policy pole anchored by the forthcoming action plan and the export-control regime; a Chinese pole whose internal contours we observe but do not advise on; and a working-level technical layer of institute-to-institute and standards-body cooperation that survives the political turbulence above it.

The Paris summit was the first artifact of that more modular architecture, and the artifact made the architecture visible. Clients planning around an international AI safety regime should not expect a return to the Bletchley posture. They should plan for the regime that exists, which is several regimes interacting uneasily, and which will continue to be that way for the foreseeable future.