Summary:
Private equity firms investing in Europe will need to enhance their due diligence processes as a result of new obligations imposed by the EU Artificial Intelligence Act, which comes into force in August. The legislation includes specific requirements for general-purpose AI models and takes a risk-based approach to regulating AI. Investors will need to ensure that target companies maintain a clear inventory of their AI models and are taking steps to comply with these requirements.
Original Link:
Original Article:
Private equity firms investing in Europe will need to enhance their due diligence processes, Covington’s Lyndsey Laverack and Moritz Hüsch told PE Hub.
New obligations from the EU’s Artificial Intelligence Act on general-purpose AI (GPAI) models are scheduled to come into effect in August, with other obligations slated for the coming years.
With AI beginning to touch all forms of business, how should dealmakers factor these new obligations, along with the Act’s provisions that came into force last year, into their decision-making?
To find out, PE Hub spoke to Lyndsey Laverack, a London-based partner in the global private equity practice, and Moritz Hüsch, a Frankfurt-based partner and co-chair of the technology industry group and the AI and Internet of Things practice groups, at law firm Covington.
#### What are the biggest potential effects from the Act on M&A and how should dealmakers adapt in the face of them?
**Lyndsey Laverack:** The AI Act introduces significant obligations for organizations providing (i.e., developing and placing on the market/putting into service in the EU) and deploying certain AI systems and providing GPAI models. In the M&A context, dealmakers will need to enhance their due diligence processes to ensure the target has a clear inventory of its GPAI models and AI systems, understands how they’re used, and is taking steps toward compliance. This will determine the contractual protection required in the purchase agreement and will form the basis of the target’s ongoing compliance post-closing.
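To make the inventory point concrete, below is a minimal sketch of what such a record might capture, in Python. The field names, risk labels, and example entries are our own illustrative assumptions; the Act does not prescribe any particular format.

```python
# Hypothetical sketch of an AI inventory a diligence team might build.
# Field names and example values are illustrative, not terms defined by the Act.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str                     # internal name of the system or model
    kind: str                     # "ai_system" or "gpai_model"
    role: str                     # target's role: "provider", "deployer", ...
    use_case: str                 # what the asset is actually used for
    eu_output: bool               # is its output used in the EU?
    compliance_notes: list[str] = field(default_factory=list)

inventory = [
    AIAsset("support-chatbot", "ai_system", "deployer",
            "customer-service chat", eu_output=True,
            compliance_notes=["transparency disclosure to users"]),
    AIAsset("foundation-llm", "gpai_model", "provider",
            "licensed to downstream integrators", eu_output=True),
]

for asset in inventory:
    print(asset.name, asset.kind, asset.role, asset.compliance_notes)
```

An inventory along these lines also maps directly onto the purchase-agreement protections Laverack mentions: each entry flags where contractual warranties or post-closing remediation may be needed.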
#### What is the approach of the AI Act to regulating artificial intelligence?
**Moritz Hüsch:** In the AI Act, the lawmakers have taken a risk-based approach to regulating AI. The higher the risk posed by an AI system, the stricter the obligations under the AI Act – with some AI practices being prohibited altogether, e.g., the use of an AI system that creates facial recognition databases through the untargeted scraping of facial images from the internet.
Other AI systems qualify as high-risk (e.g., where the AI system is a safety component in a medical device), and the AI Act imposes material obligations regarding high-risk AI systems on developers (‘providers’) and users (‘deployers’), among others.
For certain other AI systems – in particular for AI systems that interact with natural persons (e.g., a chatbot) – certain transparency obligations apply under the AI Act.
Low-risk AI systems that do not fall into one of these categories are not regulated. Finally, the AI Act contains obligations regarding GPAI models – while such models do not themselves qualify as AI systems, they can be used as components in an AI system (e.g., a large language model integrated into a chatbot). In addition to systems and models, the AI Act imposes different sets of obligations on different actors, such as providers (developers) and users (deployers).
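Hüsch’s tiers can be read as a decision ladder, sketched below in Python. The boolean predicates are hypothetical placeholders; actual classification under the Act turns on detailed legal criteria, not a simple check.

```python
# Simplified illustration of the risk-based ladder described above.
# The boolean flags are hypothetical placeholders; real classification
# under the AI Act requires legal analysis of the relevant provisions.
def classify(system: dict) -> str:
    if system.get("prohibited_practice"):      # e.g. untargeted facial-image scraping
        return "prohibited"
    if system.get("safety_component"):         # e.g. safety component in a medical device
        return "high-risk: provider/deployer obligations"
    if system.get("interacts_with_people"):    # e.g. a chatbot
        return "limited risk: transparency obligations"
    return "low risk: not regulated"

print(classify({"interacts_with_people": True}))
# -> limited risk: transparency obligations
```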
#### How ubiquitous have existing or potential AI features become in sales decks? Where should a review of AI usage/capabilities fit in the due diligence process?
**Lyndsey Laverack:** In our experience, not prevalent at the moment. We are seeing buyers take the lead on the diligence rather than sellers placing it front and center in sales materials. While every company’s use of AI is different, it’s critical that a thorough review and inventory of AI capabilities takes place early in the due diligence process. Understanding exactly what AI tools and features are in use (and how they are monitored and, if applicable, developed) is essential, as the nature of these technologies will influence whether they fall within the scope of applicable legislation.
#### The EU AI Act can apply to entities based outside the EU. Under what circumstances? How should/are companies with AI capabilities/features from outside the EU approach doing business in the bloc?
**Moritz Hüsch:** The Act applies to “providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union.” An example provided in Recital 22 explains that the Act would apply when an EU-based company hires a company outside the EU to carry out a task using a high-risk AI system; even if the AI system is not physically or directly used in the EU, the Act can still apply if the output of the AI system will be used by the EU-based company.
In addition, the Act applies to providers outside the EU that place AI systems or GPAI models on the market in the EU. However, it is less clear whether and how the Act applies to companies outside the EU that sell or license AI systems or models to downstream actors outside the EU who later place those systems or models on the EU market. Companies developing AI systems and models outside the EU should stay on top of regulatory guidance touching on this issue and should be thoughtful about how they market their systems and models to downstream actors.
#### The territorial scope includes the output of AI systems used in the EU. This seems very broad. How will authorities identify output that has originally come from AI systems?
**Moritz Hüsch:** The identification of the output will be relatively easy where the output is visible and watermarked – e.g., where the AI system generates synthetic images or video, the provider must ensure that the output is marked in a machine-readable format and detectable as artificially generated or manipulated. However, identification can be difficult where providers do not comply with this obligation or where the output of an AI system is invisible (e.g., where the AI system is a safety component that controls functionalities of a product). In the latter case (invisible output), authorities may identify the AI system by other means (such as the disclosure of an AI system during a conformity assessment).
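As a toy illustration of what a machine-readable marker can look like, the sketch below writes an "AI generated" tag into a PNG text chunk with Pillow. The tag name is invented for this example; compliant marking would follow an established provenance standard rather than an ad hoc tag, and this snippet should not be read as satisfying the Act on its own.

```python
# Toy illustration of a machine-readable "AI generated" marker using
# Pillow's PNG text chunks. The "ai_generated" key is invented for this
# example; real-world marking would follow an established provenance
# standard rather than an ad hoc tag.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64))            # stand-in for generated output
meta = PngInfo()
meta.add_text("ai_generated", "true")
img.save("output.png", pnginfo=meta)

# A downstream checker can read the marker back from the file:
print(Image.open("output.png").text.get("ai_generated"))  # -> true
```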
#### Maximum penalties for non-compliance are fines of up to €35 million or 7 percent of worldwide annual turnover, whichever is higher. That looks more onerous than even General Data Protection Regulation (GDPR) penalties. Is there any sense of which levels of non-compliance incur which penalties?
**Moritz Hüsch:** Like the GDPR, the AI Act sets out a list of factors to consider when deciding whether to impose an administrative fine – and in what amount – for an infringement of the AI Act. In practice, authorities will make their decision based on the particulars of an individual case, which makes outcomes hard to predict. On the basis of our experience with the GDPR, we expect that authorities will not impose an administrative fine for every infringement, but only for more severe ones.
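For reference, the headline ceiling quoted in the question is a simple maximum, sketched below; as Hüsch notes, actual fines are discretionary and fact-specific, so this computes only the statutory upper bound.

```python
# Statutory ceiling from the question: the greater of EUR 35 million or
# 7 percent of worldwide annual turnover. Actual fines are discretionary
# and fact-specific; this computes only the upper bound.
def fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

print(fine_ceiling_eur(2_000_000_000))  # 2bn turnover -> 140,000,000.0
```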
#### Do you have any sense that the Act might be watered down, at least in application, particularly as Europe seeks to improve its competitiveness?
**Moritz Hüsch:** There has been some reporting about the Polish Council Presidency calling for an expansion of exemptions for SMEs. EU lawmakers are also considering a pause in the enforcement of the AI Act.
#### Can PE sponsors be responsible for any breach of the Act by their portcos? How does that compare with other wide-ranging EU Acts like GDPR?
**Lyndsey Laverack:** It may well be that regulators and courts will apply the same criteria that they use to determine parental liability for infringements of EU competition rules – the European Data Protection Board applies the same approach to infringements of the GDPR. If that is the case, regulators and courts will analyze whether the PE investor had ‘decisive influence’ over the portco, which will very likely be the case where the PE investor holds 100 percent or close to 100 percent of the shares. But even where the PE investor holds fewer shares, a regulator or court could hold the PE investor liable if it can prove that the investor had decisive influence.
#### How should sponsors be thinking about the Act in terms of their own use of AI? What internal uses of AI by GPs might fall under the scope of the Act?
**Moritz Hüsch:** One approach for sponsors would be to proceed in stages:

1. Analyze the AI systems and models available on the market to determine whether they can be helpful (e.g., for accelerating decision-making processes and improving data analytics).
2. Analyze whether the selected AI systems and models fall within the scope of the AI Act.
3. Where they do, analyze whether the sponsor is (only) a deployer or (also) a provider (and/or another actor regulated under the AI Act).
4. Distill the applicable requirements on the basis of the analysis performed in steps two and three.
5. Implement those requirements (which will likely include a specific policy on AI/regulated technology).

In this context, it is also important that this is not a one-time exercise but a continuous process.
#### Are there any other hard-to-spot pitfalls that GPs should avoid in the Act?
**Lyndsey Laverack:** The bulk of the obligations in the AI Act are applicable to providers and deployers of high-risk AI systems, and providers of GPAI models. These obligations can be onerous, and companies should begin thinking about them prior to the date when obligations begin to apply. It is important that the people conducting due diligence understand the questions to ask and, importantly, how to interpret the responses in line with the relevant regulatory requirements (of which the EU AI Act is only one component – the regulatory framework is complex).
Companies should also keep in mind that AI systems and models are not necessarily static products; they can be changed or modified by downstream actors. The AI Act imposes obligations on downstream actors in certain scenarios where they modify systems or models.