Global Pact on AI Safety Testing Launched at UK Summit

Summary:

On November 2, 2023, world leaders and major AI organizations concluded the first AI Safety Summit, agreeing on a plan for safety testing of advanced AI models and establishing a UK-based AI safety hub. The initiative aims to improve collaboration between governments and AI companies to ensure that AI models are safe and beneficial to society, focusing on pre- and post-deployment testing and on addressing risks to national security and society. Key takeaways include the UK's AI Safety Institute leading safety research and evaluations, Yoshua Bengio heading the 'State of the Science' report on AI risks, and plans to develop shared international standards for AI governance. Countries such as France and South Korea will also host follow-up events to maintain momentum. Upcoming developments include a virtual summit hosted by South Korea within six months, and an in-person summit in France in a year to advance global AI safety efforts.

Generated Article:

World leaders and leading AI companies have reached a landmark agreement to prioritize the safety evaluation of frontier AI systems, marking a significant step toward global AI governance. The first-ever AI Safety Summit, which concluded on November 2, 2023, at Bletchley Park, UK, saw nations and corporations laying the groundwork for robust safety testing. The collaboration emphasizes the shared responsibility of governments and AI developers in addressing the challenges posed by advanced AI models.

Central to the summit’s outcomes is the creation of a UK-based global hub to focus on AI safety testing. The initiative aims to assess AI models rigorously both prior to and after deployment, focusing on potential risks to national security, societal harms, and safety. Significantly, this signals a departure from the traditional structure where developers had the primary responsibility for ensuring the safety of their systems. Governments will now play a direct role in oversight and resource development, bolstering public sector testing capabilities. For instance, the UK’s newly established AI Safety Institute will lead this charge in partnership with international organizations.

One of the flagship efforts following the summit is a ‘State of the Science’ report to be spearheaded by Professor Yoshua Bengio, a globally renowned AI researcher and 2018 Turing Award winner. Engaging experts worldwide, this report will provide a comprehensive scientific assessment of the current risks and capacities of frontier AI models. Bengio highlighted the need for a balanced investment in both technological advancements and safety measures, underlining that protecting the public and fostering governance should be global imperatives.

From a legal perspective, this initiative aligns with ongoing regulatory trends such as the European Union’s proposed AI Act, which advocates for risk-based categorization and mandatory pre-market requirements for high-risk AI systems. The summit also amplifies existing ethical discussions. As suggested by figures like the European Commission President Ursula von der Leyen, the pact emphasizes the need for operational ‘guardrails,’ independent testing standards, and enforcement protocols to mitigate risks such as misinformation, algorithmic bias, and the misuse of generative AI.

The industry’s response to these agreements has been broadly positive. Leaders like Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic have voiced support for independent evaluations and collaborative safety research. Private sector involvement is crucial given the rapid pace of AI innovations, with companies looking at safety as both a technical and moral responsibility.

On an international level, the summit marks the beginning of long-term cooperation. Initiatives such as the AI partnerships between Singapore and the UK, as well as the U.S. pledge to align efforts through its own AI Safety Institute, underscore this commitment to global collaboration.

While much work remains, including setting universally accepted standards, accountability frameworks, and cross-border data-handling agreements, the agreements reached at the AI Safety Summit provide a promising start. The shared ambition paves the way for future summits, including a virtual meeting hosted by South Korea within six months and a follow-up in-person summit in France within the next year. By fostering collective action and independent scrutiny, the Bletchley Park agreements represent hope for a safer, more responsible trajectory for AI development.
