Eight Months In: What We Have Learned About GPAI Compliance
The EU AI Act's Article 53 and 55 obligations applied from 2 August 2025. The Code of Practice has signatories and holdouts; the AI Office has issued its first conformity opinions; the systemic-risk list has its first surprises. We audit the first eight months of implementation.
The general-purpose AI model provisions of the EU AI Act — the chapter we wrote about in our June 2024 viewpoint as the most consequential and the most under-specified part of the regulation — have now been in application for nearly eight months. The AI Office met the trigger date of 2 August 2025 with the publication of three principal instruments: the General-Purpose AI Code of Practice in its finalized June 2025 form; a template for the training-data summary required by Article 53(1)(d); and a first round of guidance on systemic-risk designation under Article 55. The Office has also published its first two formal conformity opinions on Code signatories and has opened one early-stage investigation, the subject of which is not yet public. The body of operational practice that has formed in these eight months is the most concrete output the regulation has yet produced. It is also, in important respects, not the body of practice that the regulation's drafters appear to have anticipated.
We audit the implementation in five sections: the signatory pattern of the Code of Practice; the training-data summary template and its operational consequences; the systemic-risk designations and the surprises they have produced; the AI Office's enforcement posture in its first months; and the looming compliance problem for the high-risk regime that applies from August 2026. The picture is, on the whole, more orderly than the most pessimistic predictions of a year ago anticipated, but meaningfully less orderly than the AI Office's own communications would suggest. We close with an assessment of where clients should be focusing their compliance energy in the next six months.
The Code of Practice: signatories and holdouts
The Code was finalized at the end of June 2025, after four plenary rounds and a final compromise text that narrowed the training-data documentation requirements, softened the model-evaluation provisions, and clarified the systemic-risk-mitigation obligations. It was opened for signature on 1 July. By the 2 August trigger date, it had been signed by eight major general-purpose model providers, including most of the European and several of the principal American developers. By the end of October, the signatory list had reached fifteen. By the end of January 2026, it had reached twenty-two.
The composition of the signatory list is more interesting than its size. Among the principal frontier developers, four of the six largest have signed; the two that have not signed have published their own compliance attestations describing how they intend to meet the underlying Article 53 and 55 obligations without relying on the Code. The AI Office has indicated that those attestations are under review and that no formal position has been taken on whether they constitute compliance. This is the central unresolved question of the first year: whether non-signature of the Code is, in practice, a workable compliance posture.
Our reading, based on the conversations we have had with the AI Office's general-purpose model unit and with counsel for several non-signatories, is that the Office is not yet prepared to make a definitive judgment. Non-signatories are being treated as compliant for the moment on the basis of their published attestations, but the Office is conducting a more detailed examination than the Code-based presumption affords to signatories. Our expectation is that the Office will eventually take a position — probably in the second half of 2026 — and that the position will be that non-signature is compatible with compliance only where the alternative compliance program is at least as demanding as the Code's. That standard will be difficult to meet for non-signatories who have used the non-signature posture as a way of avoiding particular Code provisions rather than as a way of substituting different ones.
Non-signature of the Code of Practice is, in principle, a workable compliance posture. In practice, the AI Office is treating it as the harder path. We expect that posture to harden over the next twelve months.
The training-data summary template
The Article 53(1)(d) training-data summary template, the operational instrument we identified in June 2024 as among the most consequential outputs the AI Office would produce, was published in May 2025 and has been in operational use since the August trigger date. The template was less demanding than rightholders had hoped and more demanding than developers had hoped. It requires identification of the principal data collections used in training, broken down by category (publicly available web content, licensed content, content from data brokers, internally generated content, etc.), with each category accompanied by sufficient information to permit rightholders to assess whether their works are likely to have been included. It does not require disclosure of individual titles, of training tokens by source, or of the specific pre-processing pipeline used. It does require disclosure of the policies the developer applies to honor the Article 4 DSM Directive opt-out, and of the means by which rightholders may communicate the opt-out to the developer.
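The template is a reporting schema rather than a technical standard, but its category-level structure maps naturally onto a simple record format. The sketch below shows how a developer might represent one category-level entry internally; the field names and the `SummaryEntry` type are our own illustration, not the AI Office's schema.

```python
from dataclasses import dataclass
from enum import Enum

class SourceCategory(Enum):
    # Labels paraphrase the template's category breakdown; the official
    # taxonomy is more granular than this illustration.
    PUBLIC_WEB = "publicly available web content"
    LICENSED = "licensed content"
    DATA_BROKER = "content from data brokers"
    INTERNAL = "internally generated content"

@dataclass
class SummaryEntry:
    """One category-level disclosure in an Article 53(1)(d) summary.

    Deliberately coarse: the template does not require individual titles,
    per-source token counts, or pre-processing details.
    """
    category: SourceCategory
    description: str                  # enough for rightholders to assess likely inclusion
    principal_collections: list[str]  # named data collections, not individual works
    optout_policy: str                # how the Article 4 DSM opt-out is honoured
    optout_channel: str               # how rightholders can communicate an opt-out
```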
The template has now been used by twenty-four developers in respect of forty-one general-purpose models. The resulting summaries are publicly available on the AI Office's website, and have generated a body of comparative data on training-data sourcing that did not, until last year, exist. A small number of patterns are already worth flagging. First, the largest commercial developers are sourcing roughly forty to sixty percent of their training data from licensed sources, with the remainder from web content; the open-source developers reporting under the regime are sourcing closer to ninety percent from web content. Second, the principal licensing counterparties have begun to be identifiable from published summaries — major publishers, several stock-media providers, several code-repository hosts — even where the contractual terms are confidential. Third, a small but growing number of rightholders have begun exercising the Article 4 opt-out in machine-readable form, and the AI Office's reporting indicates that compliance with the opt-out is variable across signatories.
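The Act does not itself standardize what "machine-readable form" means for the opt-out; the mechanism most often cited in practice is the W3C community group's TDM Reservation Protocol (TDMRep), which signals a reservation through an HTTP response header. A minimal sketch of the check a crawler might perform, assuming the TDMRep header convention (a real implementation would also consult the protocol's /.well-known/tdmrep.json file and robots.txt-level signals):

```python
import requests

def tdm_reserved(url: str) -> bool:
    """Return True if the page signals a text-and-data-mining reservation.

    Follows the TDMRep convention: a 'tdm-reservation' response header
    with value '1' means rights are reserved (the Article 4 opt-out).
    """
    resp = requests.head(url, allow_redirects=True, timeout=10)
    return resp.headers.get("tdm-reservation", "0").strip() == "1"

# Hypothetical usage in an ingestion pipeline
if tdm_reserved("https://example.com/article"):
    print("Opt-out signalled: exclude from training corpus")
```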
The unresolved questions on the training-data side are now downstream of the template. They include: what happens to a model whose training summary indicates reliance on a source that turns out, after the fact, to have been improperly licensed; whether the "sufficiently detailed summary" obligation is met by a template that does not separately address synthetic training data generated from earlier models; and how the regime interacts with the U.S. Bartz line of authority, which is increasingly the operative legal framework for U.S. training-data risk. These questions will be material in the next twelve months.
Systemic-risk designation
Article 55 imposes additional obligations on general-purpose models with systemic risk. The designation is automatic for models above the 10²⁵ FLOPs cumulative training compute threshold and discretionary under the AI Office's Article 52 power for models that do not cross the threshold but exhibit equivalent capabilities. The Office's first round of designations, published in October 2025, included the eleven models we expected (the flagship models of the leading laboratories at the time of the trigger date) and one we did not. A frontier model released in December 2024 with training compute below the threshold was designated on capability grounds, on the basis of evaluations conducted by two member-state institutes in cooperation with the AI Office. The designation was contested by the model's provider and was the subject of the Office's first quasi-adjudicative procedure under Article 52(5).
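For orientation, the automatic limb of the test is pure arithmetic. A common screening heuristic for transformer training compute is the 6·N·D approximation (N parameters, D training tokens); the sketch below applies it to hypothetical figures of our own choosing, not the Office's methodology.

```python
THRESHOLD_FLOPS = 1e25  # the Act's cumulative-training-compute presumption

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough transformer training compute via the standard 6*N*D heuristic."""
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 400B-parameter model trained on 15T tokens
compute = training_flops(400e9, 15e12)  # = 3.6e25 FLOPs
print(f"{compute:.1e} FLOPs -> automatic designation: {compute > THRESHOLD_FLOPS}")
```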
The procedure concluded in February 2026 with the designation affirmed. The reasoned decision is the most detailed public statement of the Office's view on what capability triggers systemic-risk designation, and it is worth reading in full. The principal threads are: the model's performance on a defined set of dangerous-capability evaluations exceeded specified thresholds; the model's downstream reach (measured by API calls, number of distinct deployers, and qualitative measures of integration into downstream systems) was sufficient to create plausible systemic effects; and the developer's published safety practices, while extensive, did not include several elements the Office regards as essential, including red-team evaluations conducted by external parties and a serious-incident reporting protocol meeting Article 55 requirements.
The case has been read, in the European tech-policy community, as a signal that the Office is willing to use the Article 52 discretionary power assertively. We agree with that reading. The more important signal, in our view, is that the Office is doing so on the basis of capability evaluations conducted by member-state institutes rather than by the Office itself, and that the evaluations are functioning as a quasi-adjudicative record. The institutional infrastructure for capability-based regulation, which we worried in our February 2025 viewpoint had been damaged by the breakdown of the Bletchley track, has reconstituted itself in the European context. That has implications for the international architecture we discuss below.
Enforcement posture
The AI Office has, in eight months, opened one formal investigation under the general-purpose model regime. The subject is not yet public and the Office has been careful to indicate that the investigation is at an early stage. The principal lever the Office has used, however, is not formal investigation but the informal information request. The Office has issued, by our count, between sixty and eighty written information requests to general-purpose model providers since August. The requests have ranged from inquiries about training-data summary completeness to detailed questions about evaluation methodology. Cooperation has, on the whole, been forthcoming; providers have learned that non-cooperation tends to convert an informal request into a formal proceeding.
The enforcement posture is, in this respect, very different from the GDPR enforcement posture from which many counsel imported their expectations. The data-protection-authority enforcement model has been characterized by long pre-decisional investigations, member-state coordination delays, and a small number of headline decisions. The AI Office's posture is, so far, more hands-on and faster-moving. The Office is also smaller and more centralized than the EDPB/DPA architecture, which permits speed but creates capacity constraints. We expect the capacity constraints to become more visible over the next eighteen months as the high-risk regime adds to the Office's workload.
The high-risk regime, coming next
The bulk of the AI Act's substantive provisions — the high-risk regime in Chapter III — applies from 2 August 2026. That is just over four months away. Our impression from client work and from market-wide engagement is that preparation for the high-risk regime is meaningfully less advanced than preparation for the general-purpose regime was at the equivalent stage in 2024–2025. There are three reasons for this.
First, the population of regulated entities is much larger. The general-purpose regime affects fewer than thirty firms in the European market. The high-risk regime will affect, by our estimate, several thousand deployers across the eight Annex III high-risk categories (employment and worker management; education and vocational training; access to essential private and public services; law enforcement; migration, asylum and border control; administration of justice and democratic processes; biometrics; and critical infrastructure management). Many of these deployers are small or medium-sized firms whose compliance capacity is limited and whose awareness of the Act's obligations is, in our experience, partial at best.
Second, the conformity-assessment infrastructure is not yet adequate. The high-risk regime relies on notified bodies to conduct third-party conformity assessment for certain high-risk systems, and on the harmonized standards process to specify what conformity requires for others. The notified-body designation process has been slower than the regulation contemplated; as of March 2026, fewer than half of the notified-body designations needed for the August trigger have been completed. The harmonized standards process has produced some outputs (ISO/IEC 42001 has been recognized as partially relevant; CEN-CENELEC has published two standards on AI risk management and one on data quality), but the bulk of the work remains in committee.
Third, the operational practice that the high-risk regime requires — risk-management systems, data governance, technical documentation, record-keeping, transparency to deployers, human oversight, accuracy and robustness testing — is, for many of the deployers in scope, a substantial uplift from current practice. The general-purpose-model providers had, in most cases, already adopted analogues of the regime's requirements before the trigger date; many high-risk deployers have not. We expect the first six months of high-risk compliance to be considerably bumpier than the first six months of general-purpose compliance have been.
The international interaction
One of the questions we have been asked most often in the last eight months is whether the EU AI Act, in its operative form, has succeeded in influencing the international regulatory architecture or has been isolated by the divergence we described in our February 2025 viewpoint. The answer, eight months in, is mixed.
The Act has not produced the bandwagon effect that the GDPR produced in the data-protection space. Major non-EU jurisdictions — the United States, the United Kingdom, Japan, Singapore — have not adopted EU-style horizontal AI statutes, and most have explicitly chosen different architectures. The Trump administration's AI Action Plan, which we wrote about in September, is the paradigm of a non-EU approach. The U.K. Government's AI white paper, updated in early 2026, continues the sectoral-regulator approach the previous government adopted.
The Act has, however, produced substantial cross-jurisdictional alignment at the operational level. The training-data summary template's structure is being adopted, with modifications, by several non-EU jurisdictions as a non-binding industry standard. California's SB 53 Frontier AI Framework requirement is closely modeled on, though narrower than, the Article 55 systemic-risk regime. The AI Office's capability-evaluation methodology is being used, with attribution, by the U.K. AI Security Institute and by Singapore's IMDA. The international AI Safety Institute network, which we worried in February 2025 had been damaged by the political divergence, has continued its technical cooperation at the working level even where its political superstructure has been reconfigured.
The picture, in our reading, is one of regulatory pluralism with technical convergence. The institutional forms differ. The operational practices are increasingly similar. This is, on the whole, a more workable equilibrium than the alternatives — either a single global regime, which was never realistic, or a genuine technical divergence, which would have made multinational compliance impossible.
What we are advising for the next six months
First, for clients within the general-purpose regime who have not yet signed the Code of Practice: revisit the decision. The early-implementation pattern suggests that the costs of non-signature are higher than the costs of signing, and the political risk of being treated as a holdout — both regulatorily and reputationally — is now meaningful. Sign or commit, in writing, to a non-signature compliance program that is demonstrably at least as demanding as the Code.
Second, for all clients within scope: prepare for an escalation in the AI Office's enforcement posture in the second half of 2026. The informal-request posture of the first eight months is, in our view, the warm-up phase. The Office has been organizing its enforcement capacity, refining its substantive views, and identifying priority cases. The first formal proceedings under the regime are coming.
Third, for clients whose principal exposure is to the high-risk regime: accelerate the preparation. Four months is not enough time to build a conformity-assessment posture from scratch. Clients who have not begun the risk-management-system work, the technical-documentation work, and the deployer-disclosure work should begin now. Where the harmonized-standards framework is not yet complete, clients should be designing their compliance programs against the regulation's own text and adapting as standards are published.
Fourth, for all clients: invest in the operational infrastructure of compliance. The early-implementation lesson of the general-purpose regime is that the firms that have done well are those that built training-data tracking, evaluation, and incident-reporting capacities into the development process itself, rather than attempting to reconstruct compliance artifacts after the fact. The same lesson applies, even more strongly, to the high-risk regime. The deployers that will struggle most over the next year are those whose AI deployments have been built without thinking about the regulatory regime that now governs them. The deployers that will manage best are those whose deployments were designed with the regime in mind.
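What building compliance into the development process means in practice is mundane: emitting, at ingestion and deployment time, the records from which summaries and incident reports are later assembled. A minimal sketch of the kind of append-only provenance record we have in mind (entirely illustrative; neither the Act nor the template mandates any particular schema):

```python
import json, hashlib
from datetime import datetime, timezone

def log_ingestion(log_path: str, source_url: str, category: str,
                  licence_ref: str | None, optout_checked: bool) -> None:
    """Append one training-data provenance record at ingestion time.

    Capturing these fields when data enters the pipeline is what makes the
    Article 53(1)(d) summary reconstructible later without archaeology.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "source_url": source_url,
        "source_hash": hashlib.sha256(source_url.encode()).hexdigest(),
        "category": category,          # e.g. "public_web", "licensed"
        "licence_ref": licence_ref,    # contract identifier, if licensed
        "optout_checked": optout_checked,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```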
The EU AI Act is, eight months into its principal application phase, working roughly as its drafters intended. It is producing more transparency than the pre-regulatory baseline; it is forcing operational practice into shape; it is identifying problems — systemic-risk designation for sub-threshold models, the training-data summary's interaction with synthetic training data, the readiness gap among high-risk deployers — that the next round of implementation work will have to address. The regime is not perfect. It is, however, a working regulatory regime for a technology that, four years ago, did not have one. That is more than most observers expected when the trilogue closed. Whether it survives the political pressures of the coming years — including, prominently, the forthcoming review of the Act in 2028 that the regulation itself mandates — is a question we will return to in due course.