The Risk Pyramid Becomes Law: First Reflections on the EU AI Act
With the Council's formal adoption on 21 May 2024, what was for three years the most-debated regulatory text in the field of artificial intelligence has become binding European law. We discuss the obligations that arrive first, the ones that will be most disputed, and the implementation problem that no one in Brussels has yet solved.
The European Union's Artificial Intelligence Act, formally adopted by the Council on 21 May after the Parliament's approval in March, is now law. The regulation will be published in the Official Journal in the coming weeks and will enter into force twenty days thereafter, which on the current calendar places the trigger date in late July or early August. That is the moment from which the regulation's staggered obligations begin to run; it is also the moment from which the law will, for the first time, demand specific operating practices from a category of company — the developer of a general-purpose AI model — that did not, in a commercially meaningful sense, exist when the original proposal was tabled in April 2021.
Much has been written about the AI Act's basic architecture, and we will not repeat that exposition at length. It is, in its core conception, a product-safety statute extended to a novel category of products, structured around a risk pyramid: an outright ban on a small set of unacceptable-risk practices; a substantial compliance regime for high-risk systems used in specified domains; lighter transparency duties for limited-risk systems such as chatbots and deepfakes; and essentially no obligation for the long tail of minimal-risk applications. The novelty of the final text, by contrast with the 2021 proposal, lies in the chapter on general-purpose AI models, which was added during the Parliament's amendments in 2023 and finalized in the December trilogue. That chapter is where the regulation moves furthest beyond its product-safety lineage and into something closer to a horizontal regulation of an entire class of infrastructure technology.
The implementation calendar
The first thing to know about the AI Act is that its obligations do not all bind on the same date. The regulation enters into force twenty days after publication, but its substantive provisions phase in over a period of three years. Article 113 of the final text sets out the staircase in some detail; in summary, the prohibitions on unacceptable-risk practices become applicable six months after entry into force, the general-purpose AI model obligations (together with the governance and penalties provisions) apply twelve months after entry into force, the bulk of the high-risk obligations apply twenty-four months after entry into force, and the most contested high-risk regime — covering AI systems embedded in regulated products such as medical devices, vehicles, and aviation equipment — applies thirty-six months after entry into force.
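To make the rhythm concrete, here is a minimal sketch that computes the milestone dates from an assumed entry-into-force date. The 1 August 2024 figure is a placeholder pending publication in the Official Journal, and the exact day-counting conventions are a matter for the lawyers, not this script:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d` (day preserved)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

# Placeholder: twenty days after an assumed mid-July Official Journal
# publication. The real date depends on when publication actually occurs.
ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = [
    (6,  "Article 5 prohibitions"),
    (12, "General-purpose AI model obligations (Articles 53 and 55)"),
    (24, "Bulk of the high-risk regime"),
    (36, "High-risk systems embedded in regulated products"),
]

for months, label in MILESTONES:
    print(f"{add_months(ENTRY_INTO_FORCE, months).isoformat()}  {label}")
```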
For a model developer reading this in June 2024, the most immediate planning anchor is February 2025, when the prohibitions begin to apply. Those prohibitions, set out in Article 5, are narrower than the headlines have suggested. Most concern law-enforcement and public-authority uses: social scoring (no longer limited, in the final text, to public authorities), untargeted scraping of facial images to build recognition databases, real-time remote biometric identification in public spaces (with specified law-enforcement exceptions), inference of sensitive characteristics from biometric data, and emotion recognition in workplaces and educational settings. The private-sector developer of a general-purpose model is unlikely to fall foul of these provisions directly, but should expect that deployers will be expressly forbidden from using its model for any of these prohibited applications, and should consider the implications for its usage policies and acceptable-use enforcement.
The August 2025 trigger date — twelve months after entry into force — is the more consequential one for our clients. That is when the general-purpose AI model regime in Articles 53 and 55 becomes applicable. From that date, every provider placing a general-purpose AI model on the EU market must maintain technical documentation of the model and make certain information available to downstream deployers; must implement a policy to respect EU copyright law, including the text-and-data-mining opt-out in Article 4(3) of the 2019 DSM Directive; and must publish a sufficiently detailed summary of the content used for training the model. Models meeting the systemic-risk threshold — currently set at cumulative training compute greater than 10²⁵ FLOPs, but adjustable by the Commission and supplemented by a designation power — face additional duties: model evaluations including adversarial testing, systemic-risk assessment and mitigation, serious-incident reporting, and a duty to ensure adequate cybersecurity protection for the model and its physical infrastructure.
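For orientation on where the systemic-risk line falls, the back-of-envelope below applies the widely used C ≈ 6·N·D approximation for dense-transformer training compute (N parameters, D training tokens). The example figures are hypothetical, and the approximation is a planning heuristic, not a methodology the Act prescribes:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # current threshold; adjustable by the Commission

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer estimate: compute ~ 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

# Hypothetical run: 70 billion parameters, 15 trillion training tokens.
compute = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {compute:.2e} FLOPs")
print("systemic-risk presumption" if compute >= SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "below the compute threshold")
```

On these hypothetical numbers the run lands at roughly 6 × 10²⁴ FLOPs, just below the line, which illustrates how close the threshold sits to the current production frontier.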
The AI Act's central design choice is that compliance with its general-purpose model regime is presumed, in the first instance, through adherence to a Code of Practice that does not yet exist.
That last point bears emphasis. The AI Office, established within the Commission's DG CNECT, is required under Article 56 to facilitate the drawing up of a Code of Practice for general-purpose AI providers. The Code is to be the operative compliance instrument for at least the first generation of obligations; providers who adhere to an approved Code will be presumed to be in compliance. Article 56(9) requires the Code to be ready by 2 May 2025 at the latest, leaving providers with roughly three months to align their existing systems to whatever the Code requires before the August 2025 obligations bite. That is an extraordinarily tight window for a regulatory instrument of this novelty, and it is in our view the dominant operational risk for general-purpose model providers between now and next summer.
The general-purpose model regime
The chapter of the AI Act devoted to general-purpose AI models is the one that has changed most between the 2021 proposal and the 2024 final text, and it is the chapter whose interpretation will determine whether the regulation proves workable. We single out four features for clients to track.
First, the regulation defines a general-purpose AI model in Article 3(63) as one that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. That definition is capacious. It plainly captures the current generation of large language models and multimodal foundation models. It will also capture a number of other models — protein structure predictors, certain large vision models, some large speech systems — about which the regulation's drafters appear to have given comparatively little thought. The Commission's recital language attempts to anchor the definition to models trained with large amounts of data using self-supervision at scale, but the operative text relies on the more abstract notion of generality. This is a definitional question that will be litigated.
Second, the threshold for systemic-risk designation is almost certainly a transitional fiction. The 10²⁵ FLOPs threshold was set with the current frontier in mind. It will, on present trends in training compute, capture the flagship models of the leading laboratories and a handful of others. It will not, however, age well: the threshold will either need to be raised regularly (with attendant political difficulty) or the locus of designation will shift to the Commission's discretionary power under Article 51(1)(b), with the procedure set out in Article 52, which permits the Commission to designate a model as having systemic risk on the basis of the qualitative criteria in Annex XIII, including capabilities and downstream reach. We expect the latter mechanism to become the operative one over the medium term.
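The arithmetic behind that expectation is simple. Assuming frontier training compute continues to grow at something like four times per year — a figure consistent with public estimates, but an assumption on our part, as is the starting scale below — a fixed threshold is overtaken quickly:

```python
import math

THRESHOLD_FLOPS = 1e25
GROWTH_PER_YEAR = 4.0  # assumed annual multiplier for frontier training compute

def years_until_threshold(start_flops: float) -> float:
    """Years for a training run at today's scale to reach the threshold,
    under the assumed exponential growth rate."""
    return math.log(THRESHOLD_FLOPS / start_flops) / math.log(GROWTH_PER_YEAR)

for start in (1e23, 1e24, 5e24):
    print(f"{start:.0e} FLOPs today -> crosses 1e25 in "
          f"{years_until_threshold(start):.1f} years")
```

On those assumptions, a run two orders of magnitude below the threshold today crosses it in a little over three years, roughly the span of the regulation's own transition period.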
Third, the training-data summary requirement is one of the most novel provisions of the regulation and one of the most under-specified. Article 53(1)(d) requires providers to draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. The recital language indicates that the summary should be generally comprehensive in its scope and list the main data collections or sets that went into training the model, while protecting trade secrets and confidential business information. How that balance will be struck in the template is a question we are working on actively, and we expect the template to be one of the earliest and most consequential outputs of the AI Office. Among other things, the template will determine the operating posture of rightholders and the practical enforceability of the DSM Directive opt-out.
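Pending the AI Office's template, providers can still organize their records around the fields the recital language implies. The sketch below is purely illustrative; the field names are our guesses, not the template's:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataCollection:
    """One entry per main data collection or set, per the recital language."""
    name: str                       # e.g. a named corpus or licensed dataset
    source_type: str                # "crawled", "licensed", "synthetic", ...
    approx_share_of_tokens: float   # rough proportion of training tokens
    tdm_opt_out_respected: bool     # DSM Directive Art. 4(3) reservations honored
    notes: str = ""                 # e.g. filtering or deduplication applied

@dataclass
class TrainingDataSummary:
    model_name: str
    collections: list[TrainingDataCollection] = field(default_factory=list)
```

Whatever the final template looks like, records kept in some structured form of this kind can be regenerated into it; free-text narratives written after the fact cannot.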
Fourth, the open-source carve-out in Article 53(2) is narrower than its proponents in the legislative process claimed and broader than its critics fear. It applies to general-purpose AI models that are released under a free-and-open-source license that allows the access, use, modification, and distribution of the model, and whose parameters, including weights, the information on the model architecture, and the information on model usage, are made publicly available. Even where the carve-out applies, it does not exempt the provider from the copyright policy and training-data summary obligations, and it does not apply at all to systemic-risk models. The carve-out is therefore best understood as a relief from the documentation burden, not as a regulatory holiday.
What we are advising on now
For clients developing or deploying general-purpose models with significant EU activity, our immediate work falls into four buckets. The first is to ensure that the model development process is generating, in real time, the documentation that Article 53 will require — training-data inventories, evaluation reports, energy-consumption records, and model cards in a form that can be passed downstream without further redaction work. This is harder than it sounds; it requires changes to how laboratories organize their internal records, not merely the production of a compliance binder at the end of a training run.
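One concrete pattern we have seen work is an append-only log written during the training run, so that the Article 53 documentation is an export rather than a reconstruction. A minimal sketch follows, with event names that are our own invention rather than anything the regulation's annexes prescribe:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("runs/example-run/compliance_log.jsonl")

def log_event(event_type: str, payload: dict) -> None:
    """Append one timestamped documentation event to the run's log."""
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    record = {"ts": datetime.now(timezone.utc).isoformat(),
              "event": event_type, **payload}
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Captured as they happen, not assembled at the end of the run:
log_event("dataset_added", {"name": "example-corpus", "tokens": 1.2e12})
log_event("energy_reading", {"kwh": 480.0, "meter": "cluster-a"})
log_event("evaluation_run", {"suite": "safety-evals-v1", "report_uri": "..."})
```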
The second bucket is engagement with the Code of Practice drafting process, which the AI Office is expected to launch in the autumn. Stakeholder selection, written submissions, and plenary participation will be the principal channels through which the operative text of the first compliance regime is shaped. We are advising clients to invest meaningfully in this process; the alternative is to receive a Code that has been drafted around the operating practices of one's competitors.
The third bucket is jurisdictional posture. The AI Act applies on the basis of placing on the market or putting into service in the Union, and Article 2 captures providers of general-purpose models established outside the Union whose models are placed on the EU market through downstream deployers. We are advising several non-EU developers on the appointment of authorized representatives and on the contractual structure of relationships with EU deployers, where the allocation of compliance responsibility between provider and deployer is rarely as clear in practice as the regulation makes it appear.
The fourth bucket is enforcement readiness. The AI Office will have direct supervisory and investigative powers in respect of general-purpose model providers, including the ability to require model evaluations and to impose administrative fines of up to three percent of total worldwide annual turnover or 15 million euro, whichever is higher, for breach of the general-purpose model obligations (with the highest fine band, seven percent or 35 million euro, reserved for the Article 5 prohibitions). Clients whose corporate practice has been built around GDPR enforcement will find that the AI Act's enforcement architecture is, in important respects, more centralized and more capable of fast action.
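The fine arithmetic itself is mechanical, as the trivial sketch below shows; what matters in practice is which band applies and whose turnover counts. The example turnover figure is hypothetical:

```python
def gpai_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Cap for breach of the general-purpose model obligations:
    the higher of 3% of total worldwide annual turnover or EUR 15 million."""
    return max(0.03 * worldwide_annual_turnover_eur, 15_000_000.0)

def prohibition_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Cap for breach of the Article 5 prohibitions:
    the higher of 7% of total worldwide annual turnover or EUR 35 million."""
    return max(0.07 * worldwide_annual_turnover_eur, 35_000_000.0)

# Hypothetical provider with EUR 2 billion in worldwide annual turnover:
print(f"GPAI cap:        EUR {gpai_fine_cap(2e9):,.0f}")         # 60,000,000
print(f"Prohibition cap: EUR {prohibition_fine_cap(2e9):,.0f}")  # 140,000,000
```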
The first implementation question
We close with the observation that the central implementation question of the AI Act has not yet been asked publicly in plain terms. It is whether the Code-of-Practice mechanism will scale. The Commission has essentially delegated the operative compliance regime to a drafting process that must accommodate the legitimate interests of frontier laboratories, civil-society organizations, downstream deployers, and a heterogeneous group of European institutional stakeholders. That process must produce, in less than nine months, a text that regulators can use as a presumption-of-conformity instrument. There is no obvious precedent for an exercise of this scope being completed on this timeline.
The alternative, if the Code of Practice fails or is delayed, is the Commission's fallback power under Article 56(9) to adopt common rules by means of implementing acts. That fallback is in our view a less satisfactory regulatory outcome — the implementing-act process is slower, more formal, and less amenable to the technical refinement that a Code can offer — but it may be where the regime ends up if the Code negotiation does not converge. We will be tracking the AI Office's communications closely over the summer, and we expect to publish further analysis as the Code drafting begins.
For now, the operative advice is the prosaic one. The regulation is law. The trigger dates are not negotiable. The general-purpose model obligations apply in just over thirteen months. Clients who treat August 2025 as a problem to be addressed in the spring will not have enough time. Clients who begin the documentation, governance, and Code-engagement work now will have a meaningful advantage, both in compliance posture and in the ability to shape the operative text of the Code itself. The window for the latter is closing.