We Take Safety ‘Very Seriously’
From PYMNTS: 2024-05-19 20:22:30
OpenAI co-founder and chief scientist Ilya Sutskever and researcher Jan Leike have resigned, and the “superalignment” team focused on AI safety has been dissolved. Leike criticized the company for not making safety enough of a priority, particularly with respect to artificial general intelligence (AGI). OpenAI executives Sam Altman and Greg Brockman responded that they are aware of AGI risks and emphasized the company’s commitment to deploying its systems safely. Bloomberg reported additional departures from the team, and OpenAI has appointed a new AI safety lead.
The departures have sparked broader concerns about AGI safety and the potential for global harm as OpenAI continues to deploy its systems. Altman, speaking on a podcast, expressed support for international regulation of AI.
Read more at PYMNTS: We Take Safety ‘Very Seriously’