Summary:
The Senate Republicans' bill includes a controversial provision imposing a moratorium on state laws regulating artificial intelligence (AI), with the duration reduced from 10 years to 5. While the pause favors companies, the need for a regulatory framework remains clear: states have already passed laws to protect citizens against abuses such as deepfakes. Many policymakers and rights advocacy groups oppose the moratorium, citing the growing risks associated with AI.
Original Link:
Original Article:
As Senate Republicans rush to pass their hodgepodge tax and spending package — the Big Beautiful Bill — controversy has arisen around an unusual provision: a moratorium on states passing their own laws regulating artificial intelligence.
On Monday, Texas Sen. Ted Cruz and Tennessee Sen. Marsha Blackburn introduced a new version of the provision for the bill that seeks to reduce the period of the moratorium from 10 years to five years. The provision would also give states the ability to enact and enforce laws pertaining to child online safety and replicating a person’s “name, image, voice, or likeness” as long as they do not impose an “undue or disproportionate burden” on AI systems.
“This pause on heavy-handed regulations can be a victory for American entrepreneurs, Little Tech, small businesses, and states like Texas,” Cruz said in an emailed statement to NBC News. “At the same time, it preserves the rights of states to protect consumers and content creators without giving the Left a backdoor to push their woke social agenda through AI regulation.”
The new provisions are not expected to act as a standalone amendment to the bill, but will most likely be included in a larger manager’s amendment that would include several changes to the bill.
States have already passed their own laws in the past that focus on preventing specific harms from AI technology, as Congress has been slow to pass any regulation on AI. State laws primarily targeted banning the use of deepfake technology to create nonconsensual pornography, to mislead voters about specific issues or candidates, or to mimic music artists’ voices without permission.
Some major companies that lead the U.S. AI industry have argued that a mix of state laws needlessly hamstrings the technology, especially as the U.S. seeks to compete with China. But a wide range of opponents — including some prominent Republican lawmakers and civil rights groups — say states are a necessary bulwark against a dangerous technology that could cause unknown harms within the next decade.
The Trump administration has been clear that it wants to loosen the reins on AI’s expansion. During his first week in office, President Donald Trump signed an executive order to ease regulations on the technology and revoke “existing AI policies and directives that act as barriers to American AI innovation.”
And in February, Vice President JD Vance gave a speech at an AI summit in Paris that made clear that the Trump administration wanted to prioritize AI dominance over regulation.
But a Pew Research Center study in April found that Americans who are not AI experts are far more concerned about the risks of AI than about its potential benefits.
“Congress has just shown it can’t do a lot in this space,” Larry Norden, the vice president of the Elections and Government Program at the Brennan Center, a New York University-tied nonprofit that advocates for democratic issues, told NBC News.
“To take the step to say we are not doing anything, and we’re going to prevent the states from doing anything is, as far as I know, unprecedented. Especially given the stakes with this technology, it’s really dangerous,” Norden said.
The provision in the omnibus package was introduced by the Senate Commerce Committee, chaired by Cruz. Cruz’s office deferred comment to the committee, which has issued an explainer saying that, under the originally proposed rule, states that want a share of a substantial federal investment in AI must “pause any enforcement of any state restrictions, as specified, related to AI models, AI systems, or automated decision systems for 10 years.”
On Friday, the Senate Parliamentarian said that while some provisions in the One Big Beautiful Bill Act are subject to a 60-vote threshold to determine whether they can remain in the bill, the AI moratorium is not one of them. Senate Republicans said they were aiming to bring the bill to a vote on Saturday.
All Senate Democrats are expected to vote against the omnibus bill. But some Republicans have said they oppose the moratorium on states passing AI laws, including Sens. Josh Hawley of Missouri, Jerry Moran of Kansas and Ron Johnson of Wisconsin.
Georgia Rep. Marjorie Taylor Greene, a staunch Trump ally, posted on X earlier this month that, when she signed the House version of the bill, she didn’t realize it would keep states from creating their own AI laws.
“Full transparency, I did not know about this section,” Greene wrote. “We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states’ hands is potentially dangerous.”
Blackburn has previously said she opposes a 10-year moratorium.
“We cannot prohibit states across the country from protecting Americans, including the vibrant creative community in Tennessee, from the harms of AI,” she said in a statement provided to NBC News. “For decades, Congress has proven incapable of passing legislation to govern the virtual space and protect vulnerable individuals from being exploited by Big Tech.”
State lawmakers and attorneys general of both parties also oppose the AI provision. An open letter signed by 260 state legislators expressed their “strong opposition” to the moratorium. “Over the next decade, AI will raise some of the most important public policy questions of our time, and it is critical that state policymakers maintain the ability to respond,” the letter reads.
Similarly, 40 state attorneys general from both parties voiced their opposition to the provision in a letter to Congress. “The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI,” they wrote.
A Brennan Center analysis found that the moratorium would lead to 149 existing state laws being overturned.
“State regulators are trying to enforce the law to protect their citizens, and they have enacted common sense regulation that’s trying to protect the worst kinds of harms that are surfacing up to them from their constituents,” Sarah Meyers West, the co-executive director of the AI Now Institute, a nonprofit that seeks to shape AI to benefit the public, told NBC News.
AI and tech companies like Google and Microsoft have argued that the moratorium is necessary to keep the industry competitive with China.
“There’s growing recognition that the current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path,” OpenAI’s chief global affairs officer, Chris Lehane, wrote on LinkedIn. “While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward.”
“We cannot afford to wake up to a future where 50 different states have enacted 50 conflicting approaches to AI safety and security,” Fred Humphries, Microsoft’s corporate vice president of U.S. government affairs, said in an emailed statement.
The pro-business lobby Chamber of Commerce released a letter, signed by industry groups like the Independent Petroleum Association of America and the Meat Institute, in support of the moratorium.
“More than 1,000 AI-related bills have already been introduced at the state and local level this year. Without a federal moratorium there will be a growing patchwork of state and local laws that will significantly limit AI development and deployment,” they wrote.
In opposition, a diverse set of 60 civil rights organizations, ranging from the American Civil Liberties Union to digital rights groups to the NAACP, have signed their own open letter arguing for states to pass their own AI laws.
“The moratorium could inhibit state enforcement of civil rights laws that already prohibit algorithmic discrimination, impact consumer protection laws by limiting the ability of both consumers and state attorneys general to seek recourse against bad actors, and completely eliminate consumer privacy laws,” the letter reads.
The nonprofit National Center on Sexual Exploitation opposed the moratorium on Tuesday before the amended version was introduced, highlighting how AI has been used to sexually exploit minors.
AI technology is already being used to generate child sex abuse material and to groom and extort minors, said Haley McNamara, the group’s senior vice president of strategic initiatives and programs.
“The AI moratorium in the budget bill is a Trojan horse that will end state efforts to rein in sexual exploitation and other harms caused by artificial intelligence. This provision is extremely reckless, and if passed, will lead to further weaponization of AI for sexual exploitation,” McNamara said.