Summary:
The European Union's artificial intelligence rules, adopted in 2023, are already facing revision under pressure from the tech industry. The European Commission is considering simplifying regulatory obligations to encourage AI investment while preserving the AI Act's core objectives. Civil society groups and European lawmakers, however, warn that simplification must not undermine the regulations' effectiveness, calling for a balance between easing compliance and maintaining protections.
Original Link:
Original Article:
BRUSSELS — The European Union’s landmark artificial intelligence rules are barely a year old but Brussels is already signaling it’s open to tweaking them amid pressure from industry.
When legislators reached a deal on the world-first law to tackle artificial intelligence risks in late 2023, European Commission President Ursula von der Leyen praised it as “a historic moment” — reflecting the strong political focus on keeping the nascent technology in check.
Much has changed. The new U.S. administration has urged Europe to go easy on regulating AI as Washington sends shockwaves through the economy with massive and unpredictable tariffs. Von der Leyen, meanwhile, has now embraced AI as a way to restore Europe’s competitiveness and also its independence as the continent ramps up efforts to compete with the U.S. and beyond.
On Wednesday, the Commission took the latest step in a weeks-long effort to woo the tech industry with a new strategy promoting AI rules that are easier to obey.
“When we want to boost investments in AI, we have to make sure that we have an environment that is faster and simpler than the European Union is right now,” the Commission’s tech chief Henna Virkkunen told European Parliament lawmakers after unveiling the strategy.
Brussels didn't go all in on praising and defending its rulebooks. The focus was instead on preventing the EU's AI regulations from turning into another burden for companies — a narrative pushed by Big Tech lobby groups and industry frontrunners, who have flagged it as a major concern as they work out how to implement the laws.
Speaking to reporters, Virkkunen “committed” to the AI Act’s main goals but said the Commission is looking into the “administrative burden” and considering “some reporting obligations [that] we could cut.”
The Commission will seek industry views “where regulatory uncertainty is hindering the development and the adoption of AI,” and feed that into a wider effort to review and possibly roll back a swathe of digital rulebooks at the end of this year.
“Nothing is excluded,” said a senior Commission official on Wednesday when asked about the scope of the review during a briefing for reporters.
Confirmation that amending the AI law is not off the table is the latest appeasement after the Commission in February moved to ax plans to set strict liability rules for harm caused by AI.
‘Only the first step’
Brussels’ leading U.S. Big Tech lobby group, CCIA, acknowledged Wednesday’s strategy as a first move to simplify tech rules — and called for more ambition.
“This is only the first step. What matters most is tackling them head-on,” said Boniface de Champris, senior policy manager at CCIA Europe.
It adds wind to the sails of a sustained industry lobbying effort that has been playing out alongside the rapid two-month shift in transatlantic relations.
Leading AI company OpenAI sent its top lobbyist Chris Lehane to Brussels on the occasion of the unveiling of the plan. He had called for “simple and predictable rules” in an interview with POLITICO ahead of the announcement.
John Collison, co-founder of Irish-U.S. payment handler Stripe, criticized the AI Act on Tuesday in an interview with POLITICO, saying it was an example of “a priori regulation of speculative harms.”
“The AI industry is very nascent and we would probably be able to make better choices if we waited five years,” he said.
Another battleground is a voluntary set of rules for providers of general-purpose AI models — the most complex AI models, such as OpenAI’s GPT and Google’s Gemini — where AI companies have sounded the alarm about the rules turning into another burden.
But civil society groups are pushing back against deregulation, and that’s likely to make discussions on any changes to AI laws as fraught as the first time around.
The language in Wednesday’s strategy was debated right up to publication: The part on simplification in the final version was heavily softened compared to a leaked draft reported by POLITICO ahead of the announcement. A line that said there was an “opportunity to minimize the potential compliance burden” was replaced by a statement that there’s a “need to facilitate compliance with the AI Act.”
Two leading EU lawmakers also pushed back on Wednesday against the Commission’s earlier move to ax a proposal for a single EU liability scheme for harm caused by AI. They slammed the Commission’s reasoning as “premature and unconvincing.”
Maximilian Gahntz, AI policy lead at Mozilla, warned that a push for simple AI rules “should not lead to undermining the effectiveness of the EU’s AI Act rules and what they were meant to accomplish.”
“Simplification should not mean deregulation,” he said.