Summary:

The law governing AI transactions addresses user liability for transactions automated by agentic AI tools. Contract formation by “electronic agents” raises questions about the applicability of existing statutes, notably the Uniform Electronic Transactions Act (UETA). That legal framework must adapt to technological change to ensure that contracts generated by these tools remain valid and enforceable, while also accounting for users’ potential liability for the actions of an AI.

Original Article:

Contract Law in the Age of Agentic AI: Who’s Really Clicking “Accept”?

In May 2024, we released Part I of this series, in which we discussed agentic AI as an emerging technology enabling a new generation of AI-based hardware devices and software tools that can take actions on behalf of users. It turned out we were early – very early – to the discussion, with several months elapsing before agentic AI became as widely known and discussed as it is today. In this Part II, we return to the topic to explore legal issues concerning user liability for agentic AI-assisted transactions and open questions about existing legal frameworks’ applicability to the new generation of AI-assisted transactions.

**Background: Snapshot of the Current State of “Agents”**

“Intelligent” electronic assistants are not new—the original generation, exemplified by Amazon’s Alexa, has been offering narrow capabilities for specific tasks for more than a decade. However, as OpenAI’s CEO Sam Altman commented in May 2024, an advanced AI assistant or “super-competent colleague” could be the killer app of the future. Later, Altman noted during a Reddit AMA session: “We will have better and better models. But I think the thing that will feel like the next giant breakthrough will be agents.” A McKinsey report on AI agents echoes this sentiment: “The technology is moving from thought to action.” Agentic AI represents not only a technological evolution but also a potential means to further spread (and monetize) AI technology beyond its current uses by consumers and businesses. Major AI developers and others have already embraced this shift, announcing initiatives in the agentic AI space. For example:

– **Anthropic announced** an updated frontier AI model in public beta capable of interacting with and using computers like human users;
– **Google unveiled** Gemini 2.0, its new AI model for the agentic era, alongside Project Mariner, a prototype leveraging Gemini 2.0 to perform tasks via an experimental Chrome browser extension (while keeping a “human in the loop”);
– **OpenAI launched** a “research preview” of Operator, an AI tool that can interface with computers on users’ behalf;
– **LexisNexis announced** the availability of “Protégé,” a personalized AI assistant with agentic AI capabilities;
– **Perplexity recently rolled out** “Shop Like a Pro,” an AI-powered shopping recommendation and buying feature;
– **Amazon announced** Alexa+, a new generation of Alexa that has agentic capabilities.

Beyond these examples, other startups and established tech companies, both in the United States and overseas, are also developing AI “agents.” Although early agentic AI device releases have received mixed reviews and still leave much of their potential unrealized, they demonstrate the capability of such devices to execute multistep actions in response to natural language instructions.

Like prior technological revolutions—personal computers in the 1980s, e-commerce in the 1990s and smartphones in the 2000s—the emergence of agentic AI technology challenges existing legal frameworks. Let’s take a look at some of those issues – starting with basic questions about contract law.

**Automated Transactions and Electronic Agents**

A foundational legal question is whether transactions initiated and executed by an AI tool on behalf of a user are enforceable. Despite the newness of agentic AI, the legal underpinnings of electronic transactions are well established. The Uniform Electronic Transactions Act (UETA), which has been adopted by every state except New York, as well as the District of Columbia, together with the federal E-SIGN Act and the Uniform Commercial Code (UCC), serves as the legal framework for the use of electronic signatures and records, ensuring their validity and enforceability in interstate commerce.

UETA is technology-neutral and “applies only to transactions between parties each of which has agreed to conduct transactions by electronic means.” In the typical e-commerce transaction, a human user selects products or services for purchase and proceeds to checkout, culminating in the user clicking “I Agree” or “Purchase.” This click may be effective as an electronic signature, affirming the user’s agreement to the transaction and any accompanying terms, assuming the requisite contractual principles of notice and assent have been met.

However, because UETA has not been enacted in New York, that state instead relies on its own electronic signature law, the Electronic Signatures and Records Act (ESRA). Given the states’ otherwise wide adoption of the UETA model statute, this post will primarily rely on its provisions in analyzing certain contractual questions with respect to AI agents.

**Electronic “Agents” under the Law**

Under UETA, a contract may be formed by the interaction of the parties’ “electronic agents” or by an “electronic agent” and an individual. UETA defines an “electronic agent” as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” E-SIGN similarly contemplates “electronic agents,” and states that a contract may not be denied legal effect, validity, or enforceability solely because its formation involved one or more electronic agents. Under these definitions, agentic AI tools might qualify as “electronic agents” and thus could form enforceable contracts under existing law.

**AI Tools and E-Commerce Transactions**

Given this existing body of statutory law enabling electronic signatures, this may be the end of the analysis for most e-commerce transactions. If I tell an AI tool to buy me a certain product and it does so, then the vendor and I might assume that we have formed a binding agreement. But what if the transaction does not go as planned for reasons related to the AI tool?

Disputes like these begin with a conflict between the user and a vendor—the AI tool may have been effective to create a contract between the user and the vendor, and the user may then bear legal responsibility for that contract. The user may in turn seek indemnity from the developer of the AI tool. Most developers will try to avoid these situations by requiring user approval before purchases are finalized, as in the sketch below. But as AI tools become more autonomous, these protections could erode, leading users to approve transactions without careful vetting and leaving them responsible for unintended liabilities arising from transactions completed by an agentic AI tool.
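To make the “human in the loop” safeguard concrete, here is a minimal sketch, in Python, of an approval gate that an agentic shopping tool might place between the agent’s proposed purchase and the actual checkout. All names here (`ProposedPurchase`, `agent_buy`, and so on) are hypothetical illustrations, not any real vendor’s or developer’s API:

```python
from dataclasses import dataclass


@dataclass
class ProposedPurchase:
    item: str
    vendor: str
    price_usd: float


def request_user_approval(purchase: ProposedPurchase) -> bool:
    """Pause the agent and ask the human user to confirm the transaction."""
    prompt = (
        f"The agent wants to buy '{purchase.item}' from {purchase.vendor} "
        f"for ${purchase.price_usd:.2f}. Approve? [y/N] "
    )
    return input(prompt).strip().lower() == "y"


def complete_purchase(purchase: ProposedPurchase) -> None:
    # Placeholder for the actual checkout step; a real tool would submit
    # the order through the vendor's website or API at this point.
    print(f"Order placed: {purchase.item} (${purchase.price_usd:.2f})")


def agent_buy(purchase: ProposedPurchase) -> None:
    # Human-in-the-loop gate: nothing is finalized without the user's
    # explicit assent, which also creates a record of the "click" that
    # contract law principles of notice and assent care about.
    if request_user_approval(purchase):
        complete_purchase(purchase)
    else:
        print("Purchase canceled by user; no order submitted.")


if __name__ == "__main__":
    agent_buy(ProposedPurchase(item="USB-C cable", vendor="ExampleShop", price_usd=12.99))
```

In a deployed tool, the console prompt would likely be a UI confirmation dialog, but the legal point is the same: capturing the user’s explicit approval before checkout preserves the “I Agree” moment that supports contract formation.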

**Sources of Law Governing AI Transactions**

As stated in UETA’s Prefatory Note, the purpose of UETA is “to remove barriers to electronic commerce by validating and effectuating electronic records and signatures.” Yet UETA does not supply the contract law rules governing how an agreement is formed or whether the terms of an agreement, once formed, are enforceable. Thus, in the event of a dispute, the terms of service governing agentic AI tools will likely be the primary source to which courts look to allocate liability.

In sum, the terms of the transaction and general contract law principles and protections play a vital role. However, not all roads lead to contract law. In the next installment, we will explore agency law questions, as established agency law may now be challenged by agentic AI.
