Europe’s AI Act Stumbles Out of the Gate
How do you reconcile 1,000 stakeholder views on a fast-moving technology, predicted to define the 21st century, in just two weeks? Europe’s AI Office, empowered to enforce the AI Act – the world’s first law governing artificial intelligence systems – is struggling to come up with answers.
As deadlines loom, the new legislation – which aims to set a global standard for trustworthy AI – is generating major conflicts and complaints. At stake is the legitimacy of the AI Act and the EU’s aspiration to be the “global leader in safe AI.”
According to the AI Act, providers of general-purpose AI models, such as OpenAI’s GPT-4, must implement a range of risk mitigation measures and ensure transparency and high-quality data sets. The AI Office is drafting a Code of Practice (“the Code”) that outlines practical guidelines for compliance. Since obligations for general-purpose AI providers come into force in August 2025, a finalized Code of Practice is due in April. It’s an all-too-short timeline, stakeholders say.
The AI Office is consulting approximately 1,000 stakeholders to write the Code, including businesses, national authorities, academic researchers, and civil society. It published a first draft in mid-November 2024, giving stakeholders a mere 10 days to provide feedback. Hundreds of written responses poured in. A second draft – acknowledging the “short timeframe” – was presented on December 19, forcing stakeholders to send feedback over the holiday period.
“The shortcomings of the AI Act, particularly the overly tight timeline for applying its rules, are already becoming evident,” says Boniface de Champris, Senior Policy Manager at the Computer and Communications Industry Association (CCIA), an organization representing leading tech companies.
The Code’s third draft, expected on February 17, has to wrestle with deep divisions. A key area of contention concerns the role of ‘external evaluators’ in the AI development process. Should AI developers have to open up their models to third-party assessors? Many academics and civil society organizations think so. Industry representatives fear this level of oversight is unjustified and technically infeasible.
Another sticking point involves training data and copyright law. The latest draft of the Code states that AI developers must provide detailed information about their training data, including whether it was obtained lawfully. Companies fear these requirements will jeopardize trade secrets. Adding to the challenge, what “lawfully obtained” means in the EU is itself contested: lawyers disagree over whether the text and data mining exceptions in the EU’s Copyright Directive – written before the rise of generative AI – allow commercial AI developers to scrape copyrighted data.
If the Code is not finalized by August, the AI Office will be forced to set the rules itself – a move that would damage the legitimacy of the AI Act. The drafting “process can either be an enormous success of participative, co-regulatory rule-setting, that could set an example for other fields, or it could fail, dealing a major blow to the overall credibility of the AI Act and, ultimately, of the European Union itself,” according to Laura Caroli, senior fellow at the Center for Strategic and International Studies.
Other parts of the AI Act face impending deadlines. A ban on AI practices deemed to pose “unacceptable risks” applies from February 2, 2025. These include AI used for social scoring, emotion recognition in workplaces and educational institutions, and behavioral manipulation. Developers found to be non-compliant face fines of up to 7% of global annual turnover.
Crucial details about which systems fall into the banned category remain unspecified. The AI Office aimed to publish guidelines on the prohibitions “in time for the entry into application of these provisions on February 2.” But with only days to go, no details have been released.
Both civil society and industry are concerned. “With the AI Act set to take effect in two weeks, businesses remain uncertain about critical issues,” the business group DigitalEurope stated in mid-January. Similarly, an open letter signed by 21 civil society organizations – including Amnesty International, European Digital Rights, and Access Now – argued that the timelines for drafting the prohibitions were too short to “enable more targeted and useful feedback.”
The AI Office is currently “massively understaffed,” according to MEP Axel Voss, shadow rapporteur for the AI Act. Out of 85 staffers in total, only 30 work on implementing the AI Act. By contrast, the newly established UK AI Safety Institute employs over 150 staff.
It is not the first time the AI Act has faced backlash – not least from tech companies that claim it threatens innovation. In autumn 2024, Anthropic, Apple, and Meta were among the companies that refused to sign the EU’s voluntary AI Pact, aimed at boosting early compliance with the AI Act. Donald Trump’s return to the White House has further emboldened US tech firms to adopt a defiant attitude towards EU legislation.
The Commission, in contrast, emphasizes that the AI Act will boost AI uptake and innovation in Europe. “You need the regulation to create trust, and that trust will stimulate innovation,” Lucilla Sioli, head of the AI Office, explains. The idea that the Act is killing innovation is an “absolute lie,” according to Carme Artigas, co-chair of the United Nations advisory board on AI, who led negotiations on the AI Act.
Regulating a rapidly developing, transformative technology was always an ambitious goal. Delivering it requires a transparent drafting process and clear guidelines. If the EU fails, skepticism about AI regulation will mount – and the days of Europe’s much-vaunted “Brussels effect” setting global rules for technology could be numbered.
Oona Lagercrantz is a Project Assistant with the Tech Policy Program at the Center for European Policy Analysis (CEPA) in Brussels. Before joining CEPA, Oona researched the ethics and governance of emerging technologies at the Centre for Climate Repair and the Cambridge Existential Risk Initiative.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.