Italy Becomes First EU Country to Align National AI Law with EU AI Act

Summary:

On September 17, the Italian Parliament approved a comprehensive AI law, making Italy the first EU country to align national AI regulation with the EU AI Act. The legislation aims to ensure that AI is used in a human-centric, safe, and transparent manner while fostering innovation and protecting cybersecurity and privacy. The law establishes cross-sector guidelines, mandates human oversight, regulates the use of AI in areas such as healthcare and employment, sets new penalties for unlawful uses of AI, and allocates up to €1 billion for AI-related investments. Future developments include oversight roles for the Agency for Digital Italy and the National Cybersecurity Agency, which will monitor AI compliance and support AI innovation.

Original Link:

Link

Generated Article:

On September 17th, Italy became the first European Union (EU) country to adopt comprehensive artificial intelligence (AI) legislation that aligns closely with the EU AI Act. Prime Minister Giorgia Meloni’s administration has positioned the law as a landmark move to embrace technological progress while safeguarding the public interest.

At the core of the new law are principles of human-centric development, transparency, safety, innovation, cybersecurity, and privacy protection. The legal framework spans multiple sectors, including healthcare, education, public administration, justice, and industry. A key feature is the mandatory traceability of AI-driven decisions and the requirement for human oversight wherever algorithmic systems are deployed. These provisions are consistent with the EU AI Act, which emphasizes accountability for high-risk AI applications and aligns with international standards such as the OECD AI Principles.

One unique provision is the restriction on AI for minors under 14, requiring parental consent for access. This regulation mirrors the principles of data responsibility under the EU’s General Data Protection Regulation (GDPR), ensuring that the digital rights of minors are protected—a pressing ethical concern in an increasingly AI-driven digital economy.

The legislation also addresses the criminal use of AI technologies. For instance, creating or distributing harmful AI-generated content such as deepfakes is punishable by one to five years in prison. Whether these penalties are adequate or enforceable, particularly for transnational activities, remains an open question. The law also strengthens penalties for conventional crimes, such as identity theft and fraud, when AI is unlawfully employed. For example, if an AI tool is used for financial manipulation or identity spoofing, offenders face stiffer consequences, a measure intended to promote public trust in digital and AI services.

On intellectual property, the law clarifies that works co-created with AI can qualify for protection if they involve intellectual effort by a human author. Concurrently, text and data mining by AI systems is limited to non-copyrighted materials or licensed utilization by authorized entities, harmonizing with Article 3 of the EU Directive on Copyright in the Digital Single Market.

In healthcare, AI systems can support diagnosis and treatment decisions but must operate under strict guidelines. Physicians retain ultimate authority, and patients are entitled to full disclosure about AI’s role in their care. This provision exemplifies balancing AI’s potential with the ethical imperatives of patient autonomy and informed consent.

Regarding employment, the law mandates that employers disclose to workers when AI systems are being utilized in management or evaluation processes. This requirement aligns with global calls for algorithmic transparency in workplace environments, ensuring fair treatment and preventing biases—a known risk in AI-driven recruitment and oversight.

To implement and oversee the law, the Italian government has designated the Agency for Digital Italy and the National Cybersecurity Agency as primary regulatory authorities. Existing bodies, such as Consob and the Bank of Italy, will also play roles in specific areas of enforcement, providing a distributed yet coordinated regulatory framework.

On the economic front, the Italian state has committed up to €1 billion in funding for AI-related startups and initiatives focused on cybersecurity and quantum computing. While ambitious, critics argue that this allocation pales in comparison with spending in the United States and China, which invest billions annually. For example, China’s AI industry investments surpassed $17 billion in 2022, raising concerns about Italy’s ability to compete globally.

In conclusion, the legislation represents a significant step toward responsibly harnessing AI technologies while minimizing risks to society. It also sets a precedent for other EU nations while tackling complex ethical questions. Whether the regulatory framework delivers on its promises will depend not only on diligent enforcement but also on its adaptability to the rapidly evolving technological landscape.
