Poland’s AI Governance Model Raises Concerns Over DPA Exclusion

Poland’s approach to artificial intelligence (AI) governance has drawn considerable debate and scrutiny following its decision to establish an Artificial Intelligence Development and Security Commission. The new body will rely on existing regulators working collegially to ensure AI systems are secure, ethical, and effective. However, the omission of the Polish Data Protection Authority (DPA), Urząd Ochrony Danych Osobowych, from this framework has raised significant legal, ethical, and practical concerns.

Firstly, from a legal perspective, the exclusion of the DPA from this governance framework sits uneasily with the fundamental principles established under the General Data Protection Regulation (GDPR). The GDPR assigns competence for supervising the processing of personal data to independent supervisory authorities, namely the DPAs. Yet the draft AI law published in Poland not only sidelines the Polish DPA but also fails to address how overlapping investigations involving both AI systems and personal data processing will be handled. This omission creates a potential legal vacuum that could lead to compliance issues for organizations trying to adhere to both the AI regulations and the GDPR.

Ethically, the lack of direct involvement of the DPA raises questions about accountability and transparency in AI governance. Data privacy and protection are cornerstones of ethical technology deployment, particularly in AI systems that often collect, analyze, and process vast quantities of personal data. Without the DPA’s expertise and oversight, there is a risk that citizens’ data rights could be compromised, either through miscoordination or conflicting regulatory mandates.

From an industry standpoint, the decision to exclude the DPA may have far-reaching consequences. For example, companies operating in Poland may face dual compliance obligations that lack clarity, increasing operational burdens and costs. Consider a scenario where a Polish company develops an AI-powered facial recognition system. Under the GDPR, the company must obtain explicit consent for biometric data processing. However, without clear integration between the AI Commission and the DPA, the company may encounter ambiguous or even contradictory guidance on legality, potentially stifling innovation.

Moreover, the collegial structure of the new AI Commission itself raises governance questions. While integrating existing regulators is commendable as a way to avoid duplication of effort, leaving out the specialized data privacy watchdog risks producing exactly the fragmented oversight the collegial model is meant to prevent. Ironically, it also sidesteps the cooperation needed to harmonize rules under the EU’s AI Act, which seeks to establish a uniform regulatory framework across member states.

As a way forward, Poland could reassess its draft AI law to grant the DPA either a permanent seat on the Commission or an independent advisory role. This would align with Article 51 of the GDPR, which mandates independent supervisory authorities, and ensure a comprehensive approach to AI regulation. Other countries should view this as a cautionary tale, underscoring the necessity for clear, inclusive frameworks that respect existing legal competences while fostering innovation in the AI sector.
