
The European Union’s risk-based rulebook for artificial intelligence, the EU AI Act, has been years in the making. But expect to hear a lot more about the regulation in the coming months (and years) as key compliance deadlines kick in. Meanwhile, read on for an overview of the law and its aims.
So what is the EU trying to achieve? Dial back the clock to April 2021, when the Commission published its original proposal and lawmakers were framing it as a law to bolster the bloc’s ability to innovate in AI by fostering trust among citizens. The framework would ensure AI technology remains “human-centric,” the EU suggested, while giving businesses clear rules to work their machine learning magic.
The growing adoption of automation across industries and society certainly has the potential to supercharge productivity in a variety of areas. But harms can also scale fast if outputs are poor, or where AI intersects with and fails to respect individual rights.
The bloc’s goal for the AI Act is therefore to promote the uptake of AI and grow the local AI ecosystem by setting conditions to reduce the risk of things going seriously wrong. Lawmakers believe that establishing guardrails will increase citizens’ trust and use of AI.
The idea of fostering the ecosystem through trust was fairly uncontroversial back in the early 2020s, when the law was being discussed and drafted. However, some objected that it was simply too early to regulate AI, and that doing so could hobble European innovation and competitiveness.
Of course, few would now say it’s too soon, considering how the technology has exploded into mainstream consciousness thanks to the boom in generative AI tools. But objections persist that the law sandbags the prospects of homegrown AI entrepreneurs, despite the inclusion of support measures such as regulatory sandboxes.
Nevertheless, while lawmakers continue to debate how best to regulate AI, the EU has set its course with the AI Act. The coming years are all about putting that plan into action.
What is required under the AI Act?
The first thing to note is that most uses of AI fall outside the scope of the risk-based rules and are therefore not regulated under the AI Act at all. (It’s also worth noting that military uses of AI are entirely out of scope, as national security is a legal competence of member states rather than the EU.)
For in-scope uses of AI, the Act’s risk-based approach establishes a hierarchy in which a small number of potential use cases (such as “harmful subliminal, manipulative and deceptive techniques” or “unacceptable social scoring”) are framed as carrying “unacceptable risk” and are therefore prohibited. However, the list of prohibited uses is riddled with exceptions, meaning even the law’s few prohibitions come with plenty of caveats.
For example, the ban on law enforcement using real-time remote biometrics in publicly accessible spaces is not the outright ban some lawmakers and many civil society groups had pushed for, with exceptions allowing its use for certain crimes.
The next tier down from unacceptable-risk/prohibited uses covers “high-risk” use cases, such as AI apps used in critical infrastructure, law enforcement, education and vocational training, health care, and more. Here, app makers must perform conformity assessments prior to market deployment and on an ongoing basis (e.g., when substantially updating a model).
This means that developers must be able to demonstrate that they meet legal requirements in areas such as data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness. They must also have quality and risk management systems in place to help demonstrate compliance if an enforcement authority comes knocking to audit them.
High-risk systems deployed by public authorities must also be registered in a public EU database.
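For a rough mental model of those obligations, here is a hypothetical pre-deployment checklist in Python. The field names are my own labels for the requirement areas named above, not terms defined in the Act, and this is orientation code rather than a compliance tool.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    # One flag per requirement area the Act names for high-risk systems.
    data_quality: bool = False
    documentation_and_traceability: bool = False
    transparency: bool = False
    human_oversight: bool = False
    accuracy: bool = False
    cybersecurity: bool = False
    robustness: bool = False
    quality_management_system: bool = False
    risk_management_system: bool = False
    registered_in_eu_database: bool = False  # needed when deployed by public authorities

    def gaps(self) -> list[str]:
        """List the areas still unaddressed ahead of a conformity assessment."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

demo = HighRiskChecklist(data_quality=True, human_oversight=True)
print(demo.gaps())  # areas an assessment would still flag
```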
There is also a third, “medium-risk” category, which applies transparency obligations to AI systems such as chatbots and other tools that can be used to create synthetic media. The concern here is that they could be used to manipulate people, so these kinds of technologies require that users be informed they are interacting with, or viewing, AI-generated content.
All other uses of AI are automatically considered low or minimal risk and are not regulated. This means no obligations apply under these rules to things like using AI to sort and recommend social media content or target advertising, for example. However, the bloc encourages all AI developers to voluntarily follow best practices to boost user trust.
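Taken together, the tiers form a simple hierarchy. Here is a minimal sketch of it in Python; the example use cases and the mapping are loose illustrations for orientation, not classifications from the regulation, whose real scoping turns on detailed definitions and exceptions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (with carve-outs)"
    HIGH = "conformity assessments before and after market deployment"
    MEDIUM = "transparency obligations (disclose AI interaction/content)"
    MINIMAL = "unregulated; voluntary best practice encouraged"

# Toy lookup for orientation only.
EXAMPLE_USE_CASES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "exam-grading tool (education)": RiskTier.HIGH,
    "customer service chatbot": RiskTier.MEDIUM,
    "ad-targeting recommender": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case:32} -> {tier.name}: {tier.value}")
```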
This set of tiered, risk-based rules makes up the bulk of the AI Act. But the law also has some dedicated requirements for the multifaceted models that underpin generative AI technologies, which it calls “general purpose AI” models (or GPAIs).
This subset of AI technologies, which the industry sometimes calls “foundation models,” typically sits upstream of many apps that implement artificial intelligence. Developers tap GPAIs’ APIs to deploy these models’ capabilities in their own software, often fine-tuned for a specific use case to add value. All of which means GPAIs have quickly gained a powerful position in the market, with the potential to influence AI outcomes at scale.
GenAI has entered the chat…
The rise of GenAI reshaped more than just the conversation around the EU’s AI law; it changed the rulebook itself. The bloc’s lengthy legislative process coincided with the hype around GenAI tools like ChatGPT, and European lawmakers seized the chance to respond.
MEPs proposed adding extra rules for GPAIs, that is, the models underlying GenAI tools. This in turn sharpened the tech industry’s attention on what the EU was doing with the law and led to intense lobbying for a GPAI carve-out.
French AI company Mistral was among the loudest, arguing that rules on model makers would hold back Europe’s ability to compete against AI giants from the US and China. OpenAI’s Sam Altman also weighed in, suggesting to journalists that his company might pull its technology out of Europe if the law proved too burdensome, before hastily falling back on the more traditional tactic of pressing the flesh with local power brokers after the EU called him out on the clumsy threat.
For Altman, a crash course in European diplomacy proved one of the AI Act’s more visible side effects.
The upshot of all that noise was a white-knuckle ride to conclude the legislative process: it took months, and a marathon final negotiating session last year, to get the file over the line with the European Parliament, Council, and Commission. A political agreement was finally clinched in December 2023, paving the way for the final text to be adopted in May 2024.
The EU has trumpeted its AI Act as a “global first.” But being first in this cutting-edge technological context means there are still many details to be worked out, such as setting the specific standards under which the law will apply and drawing up detailed compliance guidance (codes of practice), so that the ecosystem-building framework of oversight and supervision the law envisages can operate.
So, as far as assessing its success goes, the Act remains, and will long remain, a work in progress.
For GPAIs, the AI Act continues the risk-based approach, with (only) lighter requirements applying to most of these models.
For commercial GPAIs, that means transparency rules, including technical documentation requirements and disclosures about the use of copyrighted material to train models. These provisions are intended to help downstream developers with their own AI Act compliance.
There is also a second tier for the most powerful (and potentially riskiest) GPAIs, where the Act dials up obligations on model makers by requiring proactive risk assessment and risk mitigation for GPAIs with “systemic risk.”
Here, the EU is worried about very powerful AI models that could pose risks to human life, for example, or even the risk of technology makers losing control over the continued development of self-improving AIs.
Lawmakers decided to use a computational threshold for model training as the classifier for this systemic-risk tier: a GPAI falls into the category when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), exceeds 10^25.
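To make that threshold concrete, here is a minimal back-of-the-envelope sketch in Python. The estimate of training compute as roughly 6 × parameters × tokens is a common industry heuristic, not something the Act defines, and the example model size is hypothetical.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute threshold

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough dense-transformer heuristic: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute crosses the Act's threshold."""
    return estimate_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 2 trillion tokens:
flops = estimate_training_flops(70e9, 2e12)
print(f"{flops:.1e} FLOPs -> systemic-risk tier? {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
# ~8.4e23 FLOPs, so this example would fall below the 1e25 line
```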
No models are thought to be in scope yet, but that could change as GenAI continues to develop.
There is also some leeway for AI safety experts involved in oversight of the AI Act to flag concerns about systemic risks that may arise elsewhere. (For more on the governance structure the bloc has devised for the AI Act, including the various roles of the AI Office, see our earlier report.)
Lobbying by Mistral and others resulted in the rules being watered down for GPAIs, with lighter requirements for open source providers, for example (lucky Mistral!). R&D also got a carve-out, meaning GPAIs that have not yet been commercialized fall entirely outside the law’s scope, without even transparency requirements applying.
The Long Road to Compliance
The AI Act officially entered into force across the EU on August 1, 2024. That date effectively fired the starting gun, as compliance deadlines for its various components hit at different intervals, from early next year through to around the middle of 2027.
Some of the key compliance deadlines: six months in from entry into force, the rules on prohibited use cases kick in; at nine months, the codes of practice start to apply; at 12 months, transparency and governance requirements take effect; at 24 months, other AI requirements apply, including obligations for some high-risk systems; and at 36 months, the remaining high-risk systems come under the rules.
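Counted from entry into force, that schedule maps roughly onto the following dates. The sketch below is simple calendar arithmetic for orientation; the Act’s actual application dates are fixed in its text.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day of month kept)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

MILESTONES = {
    6: "rules on prohibited use cases",
    9: "codes of practice apply",
    12: "transparency and governance requirements",
    24: "other requirements, incl. some high-risk systems",
    36: "remaining high-risk systems",
}

for months, label in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {label}")
# runs from 2025-02-01 through 2027-08-01, i.e. early next year to mid-2027
```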
Part of the reason for this staggered approach to the law’s provisions is to give companies enough time to get their operations in order. But even more than that, regulators need time to work out what compliance looks like in such a cutting-edge context.
As of this writing, the bloc is busy drafting guidance on various aspects of the law ahead of those deadlines, such as a code of practice for makers of GPAIs. The EU is also consulting on the law’s definition of an “AI system” (i.e., which software will be in scope or out of it) and on clarifications related to prohibited uses of AI.
The full picture of what the AI Act will mean for companies in its scope is still unclear and taking shape. However, key details are expected to be revealed in the coming months and in the first half of next year.
One more thing to consider: Given the pace of development in the field of AI, what is needed to comply with the law will likely continue to change as these technologies (and their associated risks) continue to advance. So this is a rulebook that should remain a living document.
AI Rule Enforcement
Oversight of GPAIs is centralized at the EU level, with the AI Office playing a key role. Penalties the Commission can reach for to enforce these rules can scale up to 3% of model makers’ global turnover.
Elsewhere, enforcement of the Act’s rules for AI systems is decentralized, meaning it will be down to member state-level authorities (plural, as more than one oversight body may be designated) to assess and investigate compliance issues for the bulk of AI apps. How workable this structure will be remains to be seen.
On paper, violations of prohibited uses can attract fines of up to 7% of global annual turnover (or €35 million, whichever is greater). Violations of other AI obligations may be subject to fines of up to 3% of global turnover, falling to up to 1.5% for providing incorrect information to regulators. So there is a sliding scale of sanctions enforcers can reach for.
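As a worked example of those headline numbers, here is a minimal sketch. The percentages and the €35 million figure come from the text above; the euro floors for the two lower tiers are assumptions added purely for illustration.

```python
# (percentage of global turnover, assumed euro floor) per violation type.
# The "whichever is greater" structure mirrors the 7%/EUR 35M rule above;
# the floors on the two lower tiers are assumptions for this sketch.
PENALTY_TIERS = {
    "prohibited_use": (0.07, 35_000_000),
    "other_obligations": (0.03, 15_000_000),       # assumed floor
    "incorrect_information": (0.015, 7_500_000),   # assumed floor
}

def max_fine_eur(global_turnover_eur: float, violation: str) -> float:
    """Upper bound of the fine: the greater of the percentage or the floor."""
    pct, floor = PENALTY_TIERS[violation]
    return max(pct * global_turnover_eur, floor)

# e.g. a company with EUR 2B in global annual turnover, prohibited-use breach:
print(f"EUR {max_fine_eur(2e9, 'prohibited_use'):,.0f}")  # 7% of 2B = EUR 140M
```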