Summary:
California is preparing to implement new regulations on employers’ use of artificial intelligence (AI), aimed at preventing discrimination in hiring decisions. The regulations provide that the use of automated-decision systems must not violate the state’s anti-discrimination laws, including with respect to criminal history and medical inquiries. This makes California one of the first jurisdictions to establish a comprehensive legal framework governing the use of AI in employment.
Original Link:
Original Article:
California’s Wait Is Nearly Over: New AI Employment Discrimination Regulations Move Toward Final Publication
The California Civil Rights Council has advanced new regulations regarding employers’ use of artificial intelligence (AI) and automated decision-making systems, clearing the way for them to take effect later this year. The new rules will make the state one of the first to adopt comprehensive regulations governing the growing use of such technologies to make employment decisions.
The California Civil Rights Department finalized modified regulations for employers’ use of AI and automated decision-making systems. The regulations confirm that the use of such technology to make employment decisions may violate the state’s anti-discrimination laws and clarify limits on such technology, including in conducting criminal background checks and medical/psychological inquiries.
On March 21, 2025, the Civil Rights Council voted to approve the final and modified text of California’s new “Employment Regulations Regarding Automated-Decision Systems.” The regulations were filed with the Office of Administrative Law, which must approve them. At this time, it is not clear when the finalized modifications will take effect, although they are likely to become effective this year.
The CRD has been considering automated-decision system regulations for years amid concerns over employers’ increasing use of AI and other automated decision-making systems, or “Automated-Decision Systems,” to make or facilitate employment decisions, such as recruitment, hiring, and promotions.
While the final regulations have some key differences from the proposed regulations released in May 2024, they clarify that it is unlawful to use AI and automated decision-making systems to make employment decisions that discriminate against applicants or employees in a way prohibited by the California Fair Employment and Housing Act (FEHA) or other California antidiscrimination laws.
Here are some key aspects of the final regulations:
The final regulations define “automated-decision system[s]” as “[a] computational process that makes a decision or facilitates human decision making regarding an employment benefit,” including processes that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” This definition is narrower than the one in the proposed regulations. Covered systems include a range of technological processes, including tests, games, or puzzles used to assess applicants or employees; processes for targeting job advertisements or screening resumes; processes to analyze “facial expression, word choice, and/or voice in online interviews”; and processes to “analyze employee or applicant data acquired from third parties.”
Notably, the final regulations do not include language from the proposed rule’s excluded technology provision that would have excluded systems used to facilitate human decision making regarding an employment benefit.
Potentially discriminatory hiring tools have long been unlawful in California, but the final regulations confirm that antidiscrimination laws apply to potential discrimination on the basis of protected class or disability that is carried out by AI or automated decision-making systems. Specifically, the regulations state that it is “unlawful for an employer or other covered entity to use an automated-decision system or selection criteria that discriminates against an applicant or employee or a class of applicants or employees on a basis protected” by FEHA.
However, the final regulations do not include the proposed definition of “adverse impact” caused by an automated-decision system. The prior proposed regulations had specified that an adverse impact includes “disparate impact” theories and may be the result of a “facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by” FEHA.
The final regulations further clarify that the use of online application technology that “screens out, ranks, or prioritizes applicants based on” scheduling restrictions “may discriminate against applicants based on their religious creed, disability, or medical condition,” unless it is job-related and required by business necessity and there is a mechanism for the applicant to request an accommodation. The regulations also state that a system “that measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other characteristics protected under the Act,” and that using such a system without reasonable accommodation may result in unlawful discrimination.
California law provides that before employers deny applicants based on a criminal record, the employer “must first make an individualized assessment of whether the applicant’s conviction history has a direct and adverse relationship with the specific duties of the job” that would justify denying the applicant. The final regulations state that “prohibited consideration” of criminal records “includes, but is not limited to, inquiring about criminal history through an employment application, background check, or the use of an automated-decision system.”
The final regulations confirm that the rules against asking job applicants about their medical or psychological histories extend to automated tools, stating that such an examination or inquiry “includes any such examination or inquiry administered through the use of an automated-decision system.”
The final regulations clarify that the prohibitions on aiding and abetting unlawful employment practices apply to the use of automated decision-making systems, potentially implicating third parties that design or implement such systems. The final regulations will make California one of the first jurisdictions to promulgate comprehensive regulations concerning AI and/or automated decision-making technologies.