Summary:
On May 22, 2025, the U.S. House of Representatives passed the ‘One Big Beautiful Bill Act’ (H.R. 1), which imposes a 10-year moratorium on state and local regulation of AI. The measure aims to centralize AI oversight at the federal level, raising concerns about public safety and the impact on healthcare, particularly regarding autonomous decision-making by AI systems. Critics fear a loss of patient protections and reduced accountability for AI-assisted decisions.
Original Link:
Original Article:
On May 22, 2025, the U.S. House of Representatives passed the ‘One Big Beautiful Bill Act’ (H.R. 1) by a vote of 215-214, which includes a significant provision imposing a 10-year moratorium on state and local regulation of artificial intelligence (‘AI’) systems. This provision aims to centralize AI oversight at the federal level, effectively preempting a myriad of state laws and regulations concerning AI. While proponents argue this will foster innovation and prevent a fragmented regulatory landscape, critics raise concerns about its potential impact on healthcare, state sovereignty, and the legislative process itself.
The actual provision in Section 43201 of H.R. 1, titled ‘Artificial Intelligence and Information Technology Modernization Initiative,’ provides as follows:
_’. . . no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.’_
The following definitions, found in subpart (d) of Section 43201, apply to the proposed moratorium:
– **Artificial Intelligence:** The term ‘artificial intelligence’ has the meaning given such term in Section 5002 of the National Artificial Intelligence Initiative Act of 2020 (15 U.S.C. § 9401), which defines it broadly to include systems that perform tasks normally requiring human intelligence, including learning, reasoning, and self-correction.
– **Artificial Intelligence Model:** A software component of an information system that implements artificial intelligence technology and uses computational, statistical, or machine-learning techniques to produce outputs from a defined set of inputs.
– **Artificial Intelligence System:** Any data system, software, hardware, application, tool, or utility that operates, in whole or in part, using artificial intelligence.
– **Automated Decision System:** Any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output — such as a score, classification, or recommendation — to materially influence or replace human decision-making.
As currently drafted, the broad phrasing of the proposed moratorium would apply not only to state governments but also to local municipalities, counties, and agencies, effectively preempting all forms of subnational AI regulation.
Healthcare implications are significant, as several states have recently enacted or are considering legislation to regulate the use of AI in healthcare, particularly concerning insurance claim processes. Examples include California’s ‘Physicians Make Decisions Act’ (SB 1120), Connecticut’s proposed SB 817/HB 5590, Florida’s SB 794, Maryland’s SB0987, and Massachusetts’s proposed Bill S.46. These initiatives are aimed at ensuring that AI does not compromise patient care by making autonomous decisions without adequate human oversight. However, the federal moratorium could nullify these protections, leading to increased reliance on AI in critical healthcare decisions without sufficient checks and balances.
Critics argue that removing state oversight could lead to increased denials of claims by AI systems without human judgment, a lack of transparency for patients when AI is used in decision-making processes affecting their care, and reduced accountability for AI-driven decisions.
The proposed moratorium has sparked significant opposition from state officials and lawmakers, including a bipartisan group of 35 California lawmakers who urged Congress to reject the provision, citing concerns over public safety and state sovereignty.
In conclusion, the proposed 10-year federal moratorium on state AI regulation presents a complex intersection of technological advancement, healthcare policy, and federalism. While it aims to create a unified national framework for AI oversight, it risks overriding state protections designed to safeguard patient care and autonomy.