Summary:
Effective May 1, 2025, the Section 1557 regulations require healthcare providers to identify and mitigate the risks of discrimination associated with the use of AI and other emerging technologies in patient care. This includes evaluating AI tools for potential bias, customizing the tools, and handling complaints about discrimination. Specific measures must be taken to educate staff and minimize bias.
Original Article:
Effective May 1, 2025, the Section 1557 regulations require covered healthcare providers to take reasonable steps to identify and mitigate the risk of discrimination when they use AI and other emerging technologies in patient care that use race, color, national origin, sex, age, or disability as input variables. Whether a provider has taken reasonable steps to mitigate discrimination risks will depend on a variety of factors, including the provider's size and resources; how the provider is utilizing the tool; whether the provider customized the tool; and the processes the provider has in place to evaluate the tool for potential discrimination. These requirements do not apply to AI tools used outside of patient care, such as in billing or scheduling.
Providers utilizing AI tools to support patient care decisions should have a process in place to evaluate those tools for potential discrimination, both before purchase and on an ongoing basis thereafter. Such evaluation should identify whether the AI tool uses race, color, national origin, sex, age, or disability as input variables and, if so, what information, if any, is publicly available on the potential for bias or discrimination. Providers should also reach out to the product's developer and/or the entity through which the provider purchased the tool for additional information.
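To make that evaluation concrete, here is a minimal Python sketch of the kind of screening a provider's compliance team might run: it flags protected attributes among a tool's input variables and computes a simple selection-rate disparity across groups in an audit sample. Everything here is an illustrative assumption, not something the rule prescribes: the attribute names, the column names, the sample data, and the 0.8 cutoff (borrowed from the EEOC "four-fifths" heuristic used in employment selection).

```python
import pandas as pd

# Input variables the Section 1557 rule calls out.
PROTECTED_ATTRIBUTES = {"race", "color", "national_origin", "sex", "age", "disability"}

def flag_protected_inputs(feature_names):
    """Return any protected attributes the tool uses as input variables."""
    return PROTECTED_ATTRIBUTES & {name.lower() for name in feature_names}

def selection_rate_disparity(audit_df, group_col, outcome_col):
    """Ratio of the lowest group's positive-outcome rate to the highest's.
    Values well below 1.0 suggest the tool warrants closer review."""
    rates = audit_df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Hypothetical audit data: the tool's recommendations on a review sample.
    audit = pd.DataFrame({
        "sex": ["F", "F", "F", "M", "M", "M"],
        "recommended_for_treatment": [1, 0, 0, 1, 1, 1],
    })
    flagged = flag_protected_inputs(["age", "blood_pressure", "bmi"])
    ratio = selection_rate_disparity(audit, "sex", "recommended_for_treatment")
    # The 0.8 threshold echoes the EEOC four-fifths heuristic; the rule
    # itself does not set any numeric cutoff.
    if flagged or ratio < 0.8:
        print(f"Escalate for review: protected inputs={flagged}, "
              f"disparity ratio={ratio:.2f}")
```

A check like this is only a starting point; whether a provider's efforts are "reasonable" under the rule will still turn on the factors described above, such as size, resources, and how the tool is used.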
If an AI tool has the potential for bias or discrimination, the provider should consider how to address that potential in the way its staff uses the tool, including educating end users about the potential for bias and discrimination and about any recommendations or best practices developed to minimize it. Provider policies should also address how patients, providers, and others can submit a complaint regarding bias and discrimination in the use of an AI tool, and how such complaints will be handled.
On our podcast this week, we discuss steps small providers can take to address potential discrimination in patient care decision support tools and meet the rule's requirements. We also updated our Sample Generative AI policy to address those requirements for patient care decision support tools.