Summary:
The Sixth Annual Technology Forum highlighted global regulatory trends in AI, privacy, and tech investment, focusing on U.S.-China relations and European competitiveness. State legislation on AI and consumer protection is growing, with significant developments expected in 2025.
On January 29 – 31, 2025, Covington convened authorities from across our practice groups for the Sixth Annual Technology Forum, which explored recent global developments affecting businesses that develop, deploy, and use cutting-edge technologies. Seventeen Covington attorneys discussed global regulatory trends and forecasts relevant to these industries, highlights of which are captured below.
Day 1: What’s Happening Now in the U.S. & Europe
Early Days of the New U.S. Administration
Covington attorney Holly Fechner and Covington public policy authority Bill Wichterman addressed how the incoming administration has signaled a shift in technology policy, with heightened scrutiny on Big Tech, AI, cryptocurrency, and privacy regulations. A new Executive Order on AI aims to remove barriers to American leadership in AI, while trade controls and outbound investment restrictions seek to strengthen national security in technology-related transactions. Meanwhile, the administration’s approach to decoupling from China is evolving, with stricter protectionist measures replacing prior subsidy-based initiatives.
Cross-Border Investment
Covington attorney Jonathan Wakely discussed the role of ongoing geopolitical tensions in shaping cross-border investment policies, particularly in technology-related transactions. He noted that the Committee on Foreign Investment in the United States (CFIUS) remains aggressive in reviewing deals that could pose China-related risks. The new Outbound Investment Rule introduces restrictions on U.S. persons investing in Chinese companies engaged in certain AI, quantum computing, and semiconductor activities.
Updates on European Tech Regulation
Covington attorneys Sam Choi and Bart Szewczyk explained how, in light of the Draghi Report on European competitiveness and growing geopolitical pressures, the European Commission is planning to focus on “European competitiveness” this term. The European Commission has announced plans to increase investment in its tech sectors and to find ways to ease the regulatory burden on companies. The EU is expected to focus on implementing, and potentially streamlining, its existing tech regulatory regime, rather than adopting new tech regulations that would impose added obligations on companies. The EU already has in place a robust regulatory regime covering privacy, cybersecurity, competition, data sharing, online platforms, and AI. In 2025, the recently adopted AI Act and the Data Act will start to apply, so companies should prepare for their implementation.
On February 27, California State Senator Scott Wiener (D-San Francisco) released the text of SB 53, reviving efforts to establish AI safety regulations in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law. SB 53 proposes a significantly narrower approach compared to
Authors: Jennifer Johnson, Jayne Ponder, August Gweon, Analese Bridges
State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025. As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation. Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.
Consumer Protection. Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act. In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general. They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system. For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
Sector-Specific Automated Decision-making. Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance. For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance. Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General. Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT. For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.
On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure and trustworthy AI. This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”). Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis. This reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase the efforts they have made towards ensuring responsible AI practices – in a way that is standardized and comparable with other companies.
Organizations that choose to report under the HAIP reporting framework would complete a questionnaire that contains the following seven sections:
Risk identification and evaluation – includes questions regarding, among others, how the organization classifies risk, identifies and evaluates risks, and conducts testing.
Risk management and information security – includes questions regarding, among others, how the organization promotes data quality, protects intellectual property and privacy, and implements AI-specific information security practices.
Transparency reporting on advanced AI systems – includes questions regarding, among others, reports and technical documentation and transparency practices.
Organizational governance, incident management, and transparency – includes questions regarding, among others, organizational governance, staff training, and AI incident response processes.
Content authentication & provenance mechanisms – includes questions regarding mechanisms to inform users that they are interacting with an AI system, and the organization’s use of mechanisms such as labeling or watermarking to enable users to identify AI-generated content.
Research & investment to advance AI safety & mitigate societal risks – includes questions regarding, among others, how the organization participates in projects, collaborations and investments regarding research on various facets of AI, such as AI safety, security, trustworthiness, risk mitigation tools, and environmental risks.
Advancing human and global interests – includes questions regarding, among others, how the organization seeks to support digital literacy and human-centric AI, and to drive positive change through AI.
Last month, DeepSeek, an AI start-up based in China, grabbed headlines with claims that its latest large language AI model, DeepSeek-R1, could perform on par with more expensive and market-leading AI models despite allegedly requiring less than $6 million worth of computing power from older and less-powerful chips. Although some industry observers have raised doubts about the validity of DeepSeek’s claims, its AI model and AI-powered application piqued the curiosity of many, leading the DeepSeek application to become the most downloaded in the United States in late January. DeepSeek was founded in July 2023 and is owned by High-Flyer, a hedge fund based in Hangzhou, Zhejiang.
The explosive popularity of DeepSeek, coupled with its Chinese ownership, has unsurprisingly raised data security concerns among U.S. federal and state officials. These concerns echo many of the same considerations that led to a FAR rule that prohibits telecommunications equipment and services from Huawei and certain other Chinese manufacturers. What is remarkable here is the pace at which officials at different levels of government, including the White House, Congress, federal agencies, and state governments, have taken action in response to DeepSeek and its perceived risks to national security.
Federal Government-Wide Responses
Bipartisan Bill to Ban DeepSeek from Government Devices: On February 7, Representatives Gottheimer (D-NJ-5) and LaHood (R-IL-16) introduced the No DeepSeek on Government Devices Act (HR 1121). Reps. Gottheimer and LaHood, who both serve on the House Permanent Select Committee on Intelligence, each issued public statements pointing to grave national security concerns regarding DeepSeek. Rep. Gottheimer has stated that “we have deeply disturbing evidence that [the Chinese Communist Party (“CCP”) is] using DeepSeek to steal the sensitive data of U.S. citizens,” calling DeepSeek “a five-alarm national security fire.” Representative LaHood stated that “[u]nder no circumstances can we allow a CCP company to obtain sensitive government or personal data.”
While the details of the bill have not yet been unveiled, any future DeepSeek prohibition could be extended by the FAR Council to all federal contractors and may not exempt commercial item contracts under FAR Part 12 or contracts below the simplified acquisition (or even the micro-purchase) threshold, similar to other bans in this sector. Notably, such a prohibition may leave contractors with questions about the expected scope of implementation, including the particular devices that are covered.
Executive Summary
Artificial intelligence (AI), social media, and instant messaging regulation will be a hot topic in Brazil in 2025, with substantial activity in Congress and the Supreme Court.
Cloud, cybersecurity, data centers, and data privacy are topics that could also see legislative or regulatory action throughout the year at different policymaking stages.
Technology companies will also be affected by horizontal and sector-specific tax policy-related measures, and Brazil’s digital policy might be impacted by U.S.-Brazil relations under the new Trump administration.
Analysis
2025 is shaping up to be a key year for digital policymaking in Brazil. It is the last year for President Luiz Inácio Lula da Silva’s administration to pursue substantial policy change before the 2026 general elections. It is also the first year for the new congressional leadership, in particular the new Speaker of the House and President of the Senate, to put their stamp on key legislation before their own reelection campaigns next year.
Existing Legal Framework: LGT, MCI and LGPD
Brazil’s current approach to digital policy is based on three key federal statutes. The first one is the General Telecommunications Act of 1997 (“LGT”). LGT established the rules for the country’s transition from a state-owned monopoly to a competitive, private sector-led telecommunications market. It is the bedrock of Brazil’s digital economy infrastructure regulation as, among other aspects, it sets rules for radio spectrum and orbit uses.
The second key statute is the Civil Rights Framework for the Internet Act of 2014 (“MCI”). MCI sets the principles, rights, and obligations for internet use, including the net neutrality principle and a safe harbor clause protecting internet service providers from liability for user-generated content absent a court order to remove the content. The statute also established the first layer of data privacy provisions, as well as rules for the federal, state, and local governments’ internet-related policies and actions.
The third key federal statute is the General Personal Data Protection Act of 2018 (“LGPD”). LGPD sets rules for the treatment of personal data by individuals, companies, state-owned and state-supported enterprises, and governments. It slightly amends MCI and adds a more robust layer of data privacy protection.
Each statute has its own regulator: respectively, the National Telecommunications Agency (“ANATEL”), Brazil’s Internet Management Committee (“CGI.br”), and the National Data Protection Authority (“ANPD”).
Hot Topics in 2025: AI, Social Media, and Instant Messaging
Two agenda items will likely dominate the policy debate in Brazil in 2025. The first one is the creation of a new legal framework for AI. After years of intense debate, the Senate approved its AI bill in December 2024. The bill sets rights and obligations for developers, deployers, and distributors of AI systems, and takes a human rights, risk management, and transparency approach to regulating AI-related activity. It also contains contentious provisions establishing AI-related copyright obligations. In 2025, the House will likely debate and try to approve the bill, which is also a priority for the Lula administration.
This is the first in a new series of Covington blogs on the AI policies, executive orders, and other actions of the new Trump Administration. This blog describes key actions on AI taken by the Trump Administration in January 2025.
Outgoing President Biden Issues Executive Order and Data Center Guidance for AI Infrastructure
Before turning to the Trump Administration, we note one key AI development from the final weeks of the Biden Administration. On January 14, in one of his final acts in office, President Biden issued Executive Order 14141 on “Advancing United States Leadership in AI Infrastructure.” This EO, which remains in force, sets out requirements and deadlines for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy facilities, by private-sector entities on federal land. Specifically, EO 14141 directs the Departments of Defense (“DOD”) and Energy (“DOE”) to lease federal lands for the construction and operation of AI data centers and clean energy facilities by the end of 2027, establishes solicitation and lease application processes for private sector applicants, directs federal agencies to take various steps to streamline and consolidate environmental permitting for AI infrastructure, and directs the DOE to take steps to update the U.S. electricity grid to meet the growing energy demands of AI.
On January 14, and in tandem with the release of EO 14141, the Office of Management and Budget (“OMB”) issued Memorandum M-25-03 on “Implementation Guidance for the Federal Data Center Enhancement Act,” directing federal agencies to implement requirements related to the operation of data centers by federal agencies or government contractors. Specifically, the memorandum requires federal agencies to regularly monitor and optimize data center electrical consumption, including through the use of automated tools, and to arrange for assessments by certified specialists of data center energy and water usage and efficiency, among other requirements. Like EO 14141, Memorandum M-25-03 has yet to be rescinded by the Trump Administration.
Trump White House Revokes President Biden’s 2023 AI Executive Order
On January 20, President Trump issued Executive Order 14148 on “Initial Rescissions of Harmful Executive Orders and Actions,” revoking dozens of Biden Administration executive actions, including the October 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” (“2023 AI EO”). To implement these revocations, Section 3 of EO 14148 directs the White House Domestic Policy Council (“DPC”) and National Economic Council (“NEC”) to “review all Federal Government actions” taken pursuant to the revoked executive orders and “take all necessary steps to rescind, replace, or amend such actions as appropriate.” EO 14148 further directs the DPC and NEC to submit, within 45 days of the EO, lists of additional Biden Administration orders, memoranda, and proclamations that should be rescinded and “replacement orders, memoranda, or proclamations” to “increase American prosperity.” Finally, EO 14148 directs National Security Advisor Michael Waltz to initiate a “complete and thorough review” of all National Security Memoranda (“NSMs”) issued by the Biden Administration and recommend NSMs for rescission within 45 days of the EO.
On February 6, the White House Office of Science & Technology Policy (“OSTP”) and National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.” The RFI marks a first step toward the implementation of the Trump Administration’s January
On January 29, Senator Josh Hawley (R-MO) introduced the Decoupling America’s Artificial Intelligence Capabilities from China Act (S. 321), one of the first bills of the 119th Congress to address escalating U.S. competition with China on artificial intelligence. The new legislation comes just days after Chinese AI company DeepSeek
U.S. Secretary of Commerce nominee Howard Lutnick delivered a detailed preview of what to expect from the Trump Administration on key issues around technology, trade, and intellectual property. At his nomination hearing before the Senate Committee on Commerce, Science, and Transportation on Wednesday, January 29, Lutnick faced questions from senators about the future of the CHIPS and Science Act, global trade, and particularly U.S. technological competition with China, including export controls and artificial intelligence after the release of China’s AI model “DeepSeek.” Lutnick, who was introduced by Vice President J.D. Vance, committed to implementing the Trump Administration’s America First agenda.
If confirmed, Lutnick will lead the Commerce Department’s vast policy portfolio, including export controls for emerging technologies, broadband spectrum access and deployment, AI innovation, and climate and weather issues through the National Oceanic and Atmospheric Administration (“NOAA”). In his responses to senators’ questions, Lutnick emphasized his pro-business approach and his intent to implement President Trump’s policy objectives including bringing manufacturing—particularly of semiconductors—back to the United States and establishing “reciprocity” with China in response to what he called “unfair” treatment of U.S. businesses.
Technology Competition with China, Export Controls, and Intellectual Property
Senators on both sides of the aisle asked Lutnick about the threat of Chinese competition in emerging technologies, such as AI. Lutnick stated that it is evident the Chinese used “stolen” and “leveraged” U.S. technologies to develop DeepSeek and that the United States needs to stop China from “using our tools to compete with us.”
Lutnick noted that China has found ways to evade U.S. export controls and that, under his direction, the Commerce Department will reinforce these controls with punitive tariffs to ensure compliance. Lutnick also criticized the Chinese for refusing to respect U.S. innovators’ IP in China, stating that the Chinese should expect the same treatment in the United States under a new policy of “reciprocity.” As Commerce Secretary, Lutnick will oversee the Bureau of Industry and Security (“BIS”) and the U.S. Patent and Trademark Office (“USPTO”), which he noted will carry out the Trump Administration’s America First agenda, including by preventing the Chinese from “abusing” the U.S. patent system. In response to questioning from Senator Marsha Blackburn (R-TN), Lutnick also stated that he would work to reduce the backlog of patent applications pending at the USPTO.
