Ireland’s Data Protection Commission reviews LinkedIn’s generative AI training project

Summary:

In March 2025, Ireland's Data Protection Commission (DPC) addressed LinkedIn's plan to train proprietary generative AI models using the personal data of LinkedIn members in the EU/EEA, following notification of the initiative, which was due to begin in early November 2025. The aim is to ensure the responsible and lawful use of personal data for AI model training while protecting individuals' rights under the General Data Protection Regulation (GDPR). Key points include LinkedIn's adoption of improved transparency notices, strengthened opt-out options for users, tighter limits on the scope and duration of data use, additional safeguards for minors and sensitive information, and a requirement to provide detailed GDPR documentation and a follow-up effectiveness report within five months of the start of processing. The DPC will continue to actively monitor LinkedIn's compliance and the rollout of these changes after November 2025.

Original Link:

Link

Generated Article:

The Data Protection Commission (DPC) of Ireland has been actively engaging with leading companies developing artificial intelligence (AI) technologies to ensure compliance with data processing laws and ethical standards throughout the EU/EEA region. This is particularly pertinent as AI innovations, including generative models, often rely on vast amounts of personal data, raising significant concerns about privacy, consent, and ethics.

In March 2025, LinkedIn notified the DPC of its plan to train proprietary generative AI models using personal data of members located in the EU/EEA, with processing due to start in November 2025. After a thorough examination of LinkedIn's data protection documentation and extensive engagement with the company, the DPC flagged several risks and compliance issues associated with the proposed data processing. These included potential infringement of users' rights under the EU General Data Protection Regulation (GDPR), alongside other legal and ethical challenges.

Under the GDPR, principles such as transparency, data minimization, and proportionality of data use, together with a valid legal basis for processing, are fundamental to data protection. Article 5 of the GDPR, which sets out the principles of lawfulness, fairness, and accountability in data processing, appears to have underpinned the DPC's response to LinkedIn's initial proposal. Importantly, the DPC stressed the need for LinkedIn to mitigate risks to individual privacy rights and made recommendations to resolve issues that could conflict with the legal protections enshrined in the GDPR.

In response to the DPC’s concerns, LinkedIn revised its approach to data processing for AI training by implementing several safeguards:

1. Enhanced transparency measures, including detailed notices for users about the types of personal data to be used and enabling them to opt out of this data processing through accessible features in their account settings.
2. Limiting the scope of data usage, such as reducing both the categories of personal data processed and the timeframe from which data can be sourced.
3. Reinforcing protections for minors by ensuring data from users under 18 would not be included in AI model training.
4. Developing filters to exclude sensitive content, including information pertaining to trade union affiliations, from certain parts of the LinkedIn platform.
5. Preparing rigorous compliance documentation, including Data Protection Impact Assessments (DPIAs), Legitimate Interest Assessments, and Compatibility Assessments, as required under the GDPR.
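Taken together, the safeguards above amount to an eligibility filter applied to records before they enter model training. The following is a minimal Python sketch of that idea only; the record fields, sensitive-term list, and cutoff date are all hypothetical, since nothing about LinkedIn's actual pipeline is public:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record shape -- LinkedIn's real field names are not public.
@dataclass
class MemberRecord:
    member_id: str
    age: int          # member's age
    opted_out: bool   # member used the opt-out setting
    created: date     # when the content was posted
    text: str

SENSITIVE_TERMS = {"trade union"}   # illustrative sensitive-category marker
CUTOFF = date(2023, 1, 1)           # illustrative start of the sourcing window

def eligible_for_training(rec: MemberRecord) -> bool:
    """Apply the safeguards described above: opt-out, exclusion of
    minors, a limited sourcing timeframe, and sensitive-content filtering."""
    if rec.opted_out:
        return False                # user exercised the opt-out
    if rec.age < 18:
        return False                # exclude minors entirely
    if rec.created < CUTOFF:        # restrict how far back data is sourced
        return False
    text = rec.text.lower()
    if any(term in text for term in SENSITIVE_TERMS):
        return False                # filter sensitive categories
    return True

records = [
    MemberRecord("a", 25, False, date(2024, 5, 1), "Shipped a new feature"),
    MemberRecord("b", 17, False, date(2024, 5, 1), "Hello"),
    MemberRecord("c", 30, True,  date(2024, 5, 1), "Hello"),
    MemberRecord("d", 40, False, date(2024, 5, 1), "Joined a trade union"),
]
kept = [r.member_id for r in records if eligible_for_training(r)]
print(kept)  # only record "a" passes every check
```

In a real pipeline, sensitive-category detection would need far more than keyword matching, but the structure of the checks mirrors the commitments listed above.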

Ethically, these measures indicate a shift toward respecting autonomy and ensuring trust by empowering users with meaningful choices regarding their data. However, critical questions remain about the extent to which users can truly understand and exercise these rights, especially given the complexity of AI processes like generative model training. Ensuring equitable access to privacy controls and understanding remains an ethical obligation for companies as substantial as LinkedIn.

From an industry perspective, the DPC's involvement in LinkedIn's AI processing plans may have broader implications. It sets a precedent for how AI companies should approach regulatory compliance and demonstrates the growing scrutiny of model training on personal data. Since GDPR violations can result in significant financial penalties (up to €20 million or 4% of annual global turnover, whichever is higher), proactive compliance and safeguarding measures help mitigate financial and reputational risks. This underscores the necessity of designing AI systems that balance innovation with user privacy protections.
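The penalty cap under GDPR Article 83(5) is the greater of the two figures, so for large companies the turnover-based limb dominates. A one-function sketch makes the arithmetic concrete:

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound on a fine under GDPR Art. 83(5): the higher of
    EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# For a company with EUR 1 billion turnover, 4% (EUR 40M) exceeds EUR 20M.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
```

The actual fine in any case is set by the supervisory authority and will typically be far below this ceiling; the function only illustrates how the cap scales with turnover.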

Furthermore, the DPC's decision not to declare LinkedIn's approach fully compliant, but instead to impose monitoring requirements that ensure adherence over time, reflects the regulator's commitment to ongoing scrutiny. Companies across the EU/EEA are likely to adopt similar accountability mechanisms when developing AI, anticipating stricter audits and evaluations from supervisory authorities to prove GDPR compliance.

In practice, LinkedIn has been tasked with providing an updated evaluation within five months after initiating its processing to assess how effectively its protective measures are working. Additionally, users have been informed and empowered with tools, such as account toggles and dedicated objection forms, to prevent their data from being used in AI model development. It is crucial for individuals to take proactive actions by regularly reviewing their privacy settings and exercising rights provided under GDPR to protect their information.

As AI continues to evolve, regulatory frameworks like GDPR serve as critical anchors for protecting fundamental rights in the digital age. Nonetheless, challenges remain in ensuring responsibility within AI’s rapid development, particularly for global companies navigating the intricacies of compliance across jurisdictions. The DPC’s balanced focus on both innovation and data subject protections reaffirms its determination to act as a vigilant steward of user rights in the face of technological advancement.
