OpenAI, Google among firms committing to AI safety framework

From UKTN: 2024-05-21 05:00:01

More than a dozen global tech companies, including Amazon and Microsoft, have committed to publishing AI safety frameworks designed to prevent harm; in extreme cases, they have pledged not to deploy models whose risks cannot be adequately mitigated. The voluntary commitments were made at the AI Seoul Summit, which aims to set global standards for safe AI development.

Firms will publish safety frameworks setting out how they assess AI risks, including which risks they deem intolerable. UK Prime Minister Rishi Sunak hailed the commitments as a step towards transparency and accountability. The Bletchley Declaration, involving 27 nations, likewise focuses on preventing AI harms and on collaboration among AI companies globally.

The 16 participating firms, which include Google DeepMind and IBM, aim to manage AI risks while harnessing the technology's potential for transformative economic growth. An earlier agreement from the Bletchley Park summit set out safety testing of models by like-minded countries and AI companies before release, a move intended to ensure safe AI development and deployment. Google DeepMind was reportedly the only major AI lab to allow safety testing by the UK's AI Safety Institute.
