Summary:
California’s SB 53 aims to regulate AI safety while various states propose bills addressing consumer protection, algorithmic discrimination, and sector-specific regulations. Federal responses include bipartisan efforts to restrict Chinese AI firms like DeepSeek over national security concerns.
Original Article:
On February 27, California State Senator Scott Wiener (D-San Francisco) released the text of SB 53, reviving efforts to establish AI safety regulations in a state that is home to both Silicon Valley and the nation's first comprehensive privacy law. SB 53 proposes a significantly narrower approach than Senator Wiener's earlier AI safety bill, SB 1047, which Governor Newsom vetoed in September 2024.
Authors: Jennifer Johnson, Jayne Ponder, August Gweon, Analese Bridges
State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025. As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation. Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.
Consumer Protection. Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act. In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general. They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system. For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
Sector-Specific Automated Decision-making. Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance. For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance. Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General. Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT. For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.
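To make the analysis obligations in bills like New York A773 more concrete, the sketch below shows one common way a disparate impact analysis can be computed: comparing approval rates across demographic groups against the "four-fifths" rule of thumb used in employment and lending contexts. This is an illustrative assumption of methodology only; A773 does not prescribe a particular statistical test, and the threshold, field names, and data below are hypothetical.

```python
# Hypothetical sketch of the kind of annual disparate impact analysis that
# New York A773 would require of banks using automated decision-making
# tools for lending. The "four-fifths" threshold and the data below are
# illustrative assumptions, not language from the bill.

from collections import defaultdict

FOUR_FIFTHS = 0.8  # common rule-of-thumb threshold for adverse impact

def approval_rates(decisions):
    """Compute loan approval rates per demographic group.

    `decisions` is an iterable of (group, approved) pairs, where
    `approved` is a bool emitted by the automated lending tool.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparate_impact_ratios(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's.

    Ratios below FOUR_FIFTHS are conventionally flagged for review.
    """
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: (r / ref, r / ref < FOUR_FIFTHS) for g, r in rates.items()}

# Example with synthetic data: group A approved 80/100, group B 55/100.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
print(disparate_impact_ratios(sample, reference_group="A"))
# Group B's ratio is 0.55 / 0.80 ≈ 0.69, below the 0.8 threshold.
```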
Last month, DeepSeek, an AI start-up based in China, grabbed headlines with claims that its latest large language model, DeepSeek-R1, could perform on par with more expensive, market-leading AI models despite allegedly requiring less than $6 million worth of computing power from older, less powerful chips. Although some industry observers have raised doubts about the validity of DeepSeek’s claims, its AI model and AI-powered application piqued the curiosity of many, and the DeepSeek application became the most downloaded in the United States in late January. DeepSeek was founded in July 2023 and is owned by High-Flyer, a hedge fund based in Hangzhou, Zhejiang.
The explosive popularity of DeepSeek, coupled with its Chinese ownership, has unsurprisingly raised data security concerns among U.S. federal and state officials. These concerns echo many of the same considerations that led to the FAR rule prohibiting telecommunications equipment and services from Huawei and certain other Chinese manufacturers. What is remarkable here is the pace at which officials at different levels of government, including the White House, Congress, federal agencies, and state governments, have taken action in response to DeepSeek and its perceived risks to national security.
Federal Government-Wide Responses
Bipartisan Bill to Ban DeepSeek from Government Devices: On February 7, Representatives Gottheimer (D-NJ-5) and LaHood (R-IL-16) introduced the No DeepSeek on Government Devices Act (HR 1121). Reps. Gottheimer and LaHood, who both serve on the House Permanent Select Committee on Intelligence, each issued public statements citing serious national security concerns about DeepSeek. Rep. Gottheimer has stated that “we have deeply disturbing evidence that [the Chinese Communist Party (“CCP”) is] using DeepSeek to steal the sensitive data of U.S. citizens,” calling DeepSeek “a five-alarm national security fire.” Representative LaHood stated that “[u]nder no circumstances can we allow a CCP company to obtain sensitive government or personal data.”
While the details of the bill have not yet been unveiled, any future DeepSeek prohibition could be extended by the FAR Council to all federal contractors and may not exempt commercial item contracts under FAR Part 12 or contracts below the simplified acquisition (or even the micro-purchase) threshold, similar to other bans in this sector. Notably, such a prohibition may leave contractors with questions about the expected scope of implementation, including the particular devices that are covered.
This is the first in a new series of Covington blogs on the AI policies, executive orders, and other actions of the new Trump Administration. This blog describes key actions on AI taken by the Trump Administration in January 2025.
Outgoing President Biden Issues Executive Order and Data Center Guidance for AI Infrastructure
Before turning to the Trump Administration, we note one key AI development from the final weeks of the Biden Administration. On January 14, in one of his final acts in office, President Biden issued Executive Order 14141 on “Advancing United States Leadership in AI Infrastructure.” This EO, which remains in force, sets out requirements and deadlines for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy facilities, by private-sector entities on federal land. Specifically, EO 14141 directs the Departments of Defense (“DOD”) and Energy (“DOE”) to lease federal lands for the construction and operation of AI data centers and clean energy facilities by the end of 2027, establishes solicitation and lease application processes for private sector applicants, directs federal agencies to take various steps to streamline and consolidate environmental permitting for AI infrastructure, and directs the DOE to take steps to update the U.S. electricity grid to meet the growing energy demands of AI.
On January 14, and in tandem with the release of EO 14141, the Office of Management and Budget (“OMB”) issued Memorandum M-25-03 on “Implementation Guidance for the Federal Data Center Enhancement Act,” directing federal agencies to implement requirements related to the operation of data centers by federal agencies or government contractors. Specifically, the memorandum requires federal agencies to regularly monitor and optimize data center electrical consumption, including through the use of automated tools, and to arrange for assessments by certified specialists of data center energy and water usage and efficiency, among other requirements. Like EO 14141, Memorandum M-25-03 has yet to be rescinded by the Trump Administration.
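Memorandum M-25-03 does not prescribe a particular efficiency metric, but assessments of the kind it describes commonly rely on power usage effectiveness (PUE), the ratio of total facility energy to IT equipment energy, where an ideal facility approaches 1.0. The sketch below is a hypothetical illustration of an automated monitoring check along these lines; the meter readings and the 1.5 alert threshold are assumptions for illustration, not requirements from the memorandum.

```python
# Hypothetical automated data center efficiency check of the kind
# contemplated by OMB Memorandum M-25-03. PUE (power usage effectiveness)
# = total facility energy / IT equipment energy. The threshold and
# readings below are illustrative assumptions only.

ALERT_THRESHOLD = 1.5  # assumed internal target, not an OMB requirement

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Compute power usage effectiveness for one reporting interval."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kwh / it_equipment_kwh

def check_interval(total_kwh: float, it_kwh: float) -> str:
    """Return a log line flagging intervals that exceed the PUE target."""
    value = pue(total_kwh, it_kwh)
    status = "ALERT" if value > ALERT_THRESHOLD else "OK"
    return f"{status}: PUE={value:.2f} (total={total_kwh} kWh, IT={it_kwh} kWh)"

# Example with synthetic meter readings for two intervals:
print(check_interval(1200.0, 900.0))  # PUE ≈ 1.33 -> OK
print(check_interval(1600.0, 900.0))  # PUE ≈ 1.78 -> ALERT
```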
Trump White House Revokes President Biden’s 2023 AI Executive Order
On January 20, President Trump issued Executive Order 14148 on “Initial Rescissions of Harmful Executive Orders and Actions,” revoking dozens of Biden Administration executive actions, including the October 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” (“2023 AI EO”). To implement these revocations, Section 3 of EO 14148 directs the White House Domestic Policy Council (“DPC”) and National Economic Council (“NEC”) to “review all Federal Government actions” taken pursuant to the revoked executive orders and “take all necessary steps to rescind, replace, or amend such actions as appropriate.” EO 14148 further directs the DPC and NEC to submit, within 45 days of the EO, lists of additional Biden Administration orders, memoranda, and proclamations that should be rescinded, along with “replacement orders, memoranda, or proclamations” to “increase American prosperity.” Finally, EO 14148 directs National Security Advisor Michael Waltz to initiate a “complete and thorough review” of all National Security Memoranda (“NSMs”) issued by the Biden Administration and recommend NSMs for rescission within 45 days of the EO.
On February 6, the White House Office of Science & Technology Policy (“OSTP”) and National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.” The RFI marks a first step toward the implementation of the Trump Administration’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence,” which directs the development of an AI Action Plan within 180 days.
On January 29, Senator Josh Hawley (R-MO) introduced the Decoupling America’s Artificial Intelligence Capabilities from China Act (S. 321), one of the first bills of the 119th Congress to address escalating U.S. competition with China on artificial intelligence. The new legislation comes just days after Chinese AI company DeepSeek drew widespread attention with claims that its DeepSeek-R1 model performs on par with leading, more expensive AI models at a fraction of the computing cost.
On January 14, 2025, the Biden Administration issued an Executive Order on “Advancing United States Leadership in Artificial Intelligence Infrastructure” (the “EO”), with the goals of preserving U.S. economic competitiveness and access to powerful AI models, preventing U.S. dependence on foreign infrastructure, and promoting U.S. clean energy production to power the development and operation of AI. Pursuant to these goals, the EO outlines criteria and timeframes for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy resources, by private-sector entities on federal land. The EO builds upon a series of actions on AI issued by the Biden Administration, including the October 2023 Executive Order on Safe, Secure, and Trustworthy AI and an October 2024 AI National Security Memorandum.
I. Federal Sites for AI Data Centers & Clean Energy Facilities
The EO contains various requirements for soliciting and leasing federal sites for AI infrastructure, including:
The EO directs the Departments of Defense (“DOD”) and Energy (“DOE”) to each identify and lease, by the end of 2027, at least three federal sites to private-sector entities for the construction and operation of “frontier AI data centers” and the “clean energy facilities” that power them (together, “frontier AI infrastructure”). Additionally, the EO directs the Department of the Interior (“DOI”) to identify (1) federal sites suitable for additional private-sector clean energy facilities as components of frontier AI infrastructure, and (2) at least five “Priority Geothermal Zones” suitable for geothermal power generation. Finally, the EO directs the DOD and DOE to publish a joint list of ten high-priority federal sites that are most conducive to deploying nuclear power capacity that can be readily available to serve AI data centers by December 31, 2035.
Public Solicitations. By March 31, 2025, the DOD and DOE must launch competitive, 30-day public solicitations for private-sector proposals to lease federal land for frontier AI infrastructure construction. In addition to identifying proposed sites for AI infrastructure construction, solicitations will require applicants to submit detailed plans regarding:
Timelines, financing methods, and technical construction plans for the site;
Proposed frontier AI training work to occur on the site once operational;
Use of high labor and construction standards at the site; and
Proposed lab-security measures, including personnel and material access requirements, associated with the operation of frontier AI infrastructure.
The DOD and DOE must select winning proposals by June 30, 2025, taking into account effects on competition in the broader AI ecosystem and other selection criteria, including an applicant’s proposed financing and funding sources; plans for high-quality AI training, resource efficiency, labor standards, and commercialization of IP developed at the site; safety and security measures and capabilities; AI workforce capabilities; and prior experience with comparable construction projects.
This is the first blog in a series covering the Fiscal Year 2025 National Defense Authorization Act (“FY 2025 NDAA”). It covers: (1) NDAA sections affecting acquisition policy and contract administration that may be of greatest interest to government contractors; (2) initiatives that underscore Congress’s commitment to strengthening cybersecurity, both domestically and internationally; and (3) NDAA provisions that aim to accelerate the Department of Defense’s adoption of AI and autonomous systems and counter efforts by U.S. adversaries to subvert them.
Future posts in this series will address NDAA provisions targeting China, supply chain and stockpile security, the revitalized Administrative False Claims Act, and Congress’s effort to mature the Office of Strategic Capital and leverage private investment to accelerate the development of critical technologies and strengthen the defense industrial base.
FY 2025 NDAA Overview
On December 23, 2024, President Biden signed the FY 2025 NDAA into law. The FY 2025 NDAA authorizes $895.2 billion in funding for the Department of Defense (“DoD”) and Department of Energy national security programs, a $9 billion (roughly 1 percent) increase over FY 2024. NDAA authorizations have traditionally served as a reliable indicator of congressional sentiment on final defense appropriations.
FY 2025 marks the 64th consecutive year in which an NDAA has been enacted, reflecting its status as “must-pass” legislation. As in prior years, the NDAA has been used as a legislative vehicle to incorporate other measures, including the FY 2025 Department of State and Intelligence Authorization Acts, as well as provisions related to the Departments of Justice, Homeland Security, and Veterans Affairs, among others.
Below are select provisions of interest to companies across industries that engage in U.S. Government contracting, including defense contractors, technology providers, life sciences firms, and commercial-item suppliers.
The results of the 2024 U.S. election are expected to have significant implications for AI legislation and regulation at both the federal and state level.
Like the first Trump Administration, the second Trump Administration is likely to prioritize AI innovation, R&D, national security uses of AI, and U.S. private sector investment and leadership in AI. Although recent AI model testing and reporting requirements established by the Biden Administration may be halted or revoked, efforts to promote private-sector innovation and competition with China are expected to continue. And while antitrust enforcement involving large technology companies may continue in the Trump Administration, more prescriptive AI rulemaking efforts such as those launched by the current leadership of the Federal Trade Commission (“FTC”) are likely to be curtailed substantially.
In the House and Senate, Republican majorities are likely to adopt priorities similar to those of the Trump Administration, with a continued focus on AI-generated deepfakes and prohibitions on the use of AI for government surveillance and content moderation.
At the state level, legislatures in California, Texas, Colorado, Connecticut, and others likely will advance AI legislation on issues ranging from algorithmic discrimination to digital replicas and generative AI watermarking.
This post covers the effects of the recent U.S. election on these areas and what to expect as we enter 2025.
The White House
As stated in the Republican Party’s 2024 platform and by the president-elect on the campaign trail, the incoming Trump Administration plans to revoke President Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“2023 AI EO”). The incoming administration also is expected to halt ongoing agency rulemakings related to AI, including a Department of Commerce rulemaking to implement the 2023 AI EO’s dual-use foundation model reporting and red-team testing requirements. President-elect Trump’s intention to re-nominate Russell Vought as Director of the Office of Management and Budget (“OMB”) suggests that a light-touch approach to AI regulation may be taken across all federal agencies. As OMB Director in the prior Trump Administration, Vought issued a memo directing federal agencies to “avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”
This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”). As noted below, some of these developments provide industry with the opportunity for participation and comment.
I. Artificial Intelligence
Federal Legislative Developments
There continued to be strong bipartisan interest in passing federal legislation related to AI. While it has been challenging to pass legislation through this Congress, one or more of the more targeted bills with bipartisan support and Committee approval could still advance during the lame-duck period.
Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks.
In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV). The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations.
In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July. Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.
In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178), introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN), was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ). The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended. Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency reporting requirements. The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
Senate Homeland Security and Governmental Affairs Committee: In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495). Introduced in June by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Board to ensure that federal agencies benefit from advancements in AI.
National Defense Authorization Act for Fiscal Year 2025: In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”). The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA. The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems. The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI. The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.
