Summary:
To avoid the obligations that come with high-risk AI, providers must justify their classification in writing. Under Article 6(3) of the AIA, an AI system is not considered high-risk if it does not pose a significant risk of harm to people's health, safety or fundamental rights. That calls for a careful assessment, in particular steering clear of profiling of natural persons, on pain of sanctions. Thorough preparation and convincing arguments are essential before placing an AI system on the market.
Original Link:
Original Article:
🤖⚖️Want to avoid high-risk AI obligations? Sure, but you’ll need to justify that in writing, up front.
Qualifying as high-risk brings a heap of obligations for an AI system provider. Fortunately there’s an exception. Article 6(3) AIA states that if the system “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons” you are not high-risk after all.
When is that the case? The criterion is open-ended, meaning a provider may supply any sort of argument. There are a couple of helpful hints (summed up in a checklist sketch after the list below):
🗺️”Not materially influencing the outcome of decision making.” Often risks to people are caused by automated decisions. If you can steer clear of that, your risk profile is lowered significantly.
📥”Performing a narrow procedural task”, such as structuring data or labeling incoming documents.
🧼”Improving human activity”, e.g. rewriting a draft decision into formal language or reducing its complexity (B1/B2 language level).
🚦”Detecting decision-making patterns or deviations therefrom”, because here the AI is merely flagging unusual situations for human review.
📬”Preparatory tasks”. A special case of the second one, the AI is merely helping the human get started.
There’s one pretty strong counter-indication:
🔖”Profiling of natural persons”. If the AI “evaluates certain personal aspects” of people, it is deemed too risky to be allowed under this exception. (This doesn’t mean that a profiling AI is automatically high-risk. You first have to qualify under an Annex III high-risk use case.)
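If you want to structure this self-assessment internally, the hints and the counter-indication can be captured in a documented checklist. Below is a minimal Python sketch under a simplified reading of my own: the field names, the “at least one condition” logic and the treatment of material influence are illustrative assumptions, not legal advice or an official test.

```python
from dataclasses import dataclass

@dataclass
class Article63SelfAssessment:
    """Illustrative checklist for the Article 6(3) AIA exception (not legal advice)."""
    narrow_procedural_task: bool           # e.g. structuring data, labeling incoming documents
    improves_prior_human_activity: bool    # e.g. rewriting a draft decision into B1/B2 language
    detects_patterns_or_deviations: bool   # merely flags unusual cases for human review
    preparatory_task: bool                 # helps the human get started
    materially_influences_decisions: bool  # does the output steer the actual decision?
    profiles_natural_persons: bool         # evaluates personal aspects of individuals
    written_justification: str             # keep on file; goes into the EU database (article 49(2))

    def may_invoke_exception(self) -> bool:
        """Simplified reading: profiling blocks the exception outright; otherwise at
        least one of the listed conditions should apply and the system should not
        materially influence the outcome of decision-making. The real test is
        open-ended, so this is a starting point for the written argument, not a verdict."""
        if self.profiles_natural_persons:
            return False
        any_condition = any([
            self.narrow_procedural_task,
            self.improves_prior_human_activity,
            self.detects_patterns_or_deviations,
            self.preparatory_task,
        ])
        return any_condition and not self.materially_influences_decisions


# Example: an AI that only labels incoming mail for a human case handler
assessment = Article63SelfAssessment(
    narrow_procedural_task=True,
    improves_prior_human_activity=False,
    detects_patterns_or_deviations=False,
    preparatory_task=False,
    materially_influences_decisions=False,
    profiles_natural_persons=False,
    written_justification="The system only labels incoming mail; every decision is taken by a human.",
)
print(assessment.may_invoke_exception())  # True under this simplified reading
```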
📝But like I said, any argument will do if sufficiently convincing. What’s most important is that you prepare that argument in advance – prior to putting your AI system on the market or into service (article 6(4)). This is to avoid “flying under the radar”, just selling your AI and making up an excuse when caught.
This assessment goes into the yet-to-be-built EU database (article 49(2)), and must be made available to the market surveillance authority upon request. The authority may review and overturn the assessment (article 80(1)).
💸A fine of EUR 7.5 million or 1% of worldwide group turnover can be imposed when the AI system was misclassified by the provider as not being high-risk (articles 80(7) and 99(5)). And of course your AI must be recalled or withdrawn from the market unless you bring it into conformity immediately.
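To get a feel for that ceiling, here is a tiny arithmetic sketch. The “whichever is higher” comparison between the fixed cap and the turnover-based cap is my assumption about how the Act caps fines for undertakings, and the turnover figure is made up.

```python
def fine_ceiling_eur(worldwide_group_turnover_eur: float) -> float:
    """Rough ceiling for a misclassification fine (sketch of article 99(5) AIA).
    Assumes the higher of the fixed cap and 1% of worldwide group turnover applies."""
    return max(7_500_000, 0.01 * worldwide_group_turnover_eur)


# Example: a group with EUR 2 billion in worldwide turnover
print(f"EUR {fine_ceiling_eur(2_000_000_000):,.0f}")  # EUR 20,000,000 (1% exceeds the fixed cap)
```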
If that worries you: the fines can only be imposed if your intent was “to circumvent the application of requirements” for high-risk AI, which I read as malicious intent. The AI Act uses “circumvent” exclusively to refer to bad faith actions. An honest assessment that’s merely held to be wrong by the market supervisor is not going to lead to fines.
The AI Act promises us Guidelines by 2 February 2026 with a comprehensive list of practical examples of AI systems that are high-risk and not high-risk. I can’t wait.