This Week in AI: OpenAI moves away from safety

From TechCrunch: 2024-05-18 09:31:00

This week, OpenAI made waves with the launch of GPT-4o, its most powerful generative model yet, and the disbanding of a team focused on AI safety. Concerns that OpenAI is prioritizing product over safeguards led to high-profile resignations. Other AI news included a partnership between OpenAI and Reddit, Google's I/O conference, and Anthropic hiring Instagram co-founder Mike Krieger.

AI safety remains a top concern in light of OpenAI's recent departures and Google DeepMind's new Frontier Safety Framework, which aims to identify and mitigate harmful capabilities in AI models. Cambridge researchers are warning of the ethical risks of training chatbots on deceased individuals' data, MIT physicists are using AI to predict the phases of physical systems more efficiently, and CU Boulder is exploring AI applications in disaster management to improve response efforts.

Disney Research has developed a method for diversifying the output of diffusion image generation models by adding noise to their conditioning signals, yielding a wider and more varied range of generated images.
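To make the idea concrete, here is a minimal Python sketch of conditioning-signal perturbation in a generic PyTorch-style diffusion sampler. The names `perturb_conditioning`, `denoise_step`, and `noise_scale` are illustrative assumptions, not Disney Research's actual code or pipeline; the point is only that the conditioning embedding is jittered before sampling so repeated runs of the same prompt land on more varied images.

```python
import torch

def perturb_conditioning(cond_embedding: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    """Add Gaussian noise to a conditioning embedding (e.g., a text or class
    embedding) before it is fed to the diffusion model. A larger noise_scale
    pushes samples further from the "typical" output for that prompt,
    trading some fidelity for diversity."""
    return cond_embedding + noise_scale * torch.randn_like(cond_embedding)

def sample(denoise_step, cond_embedding: torch.Tensor, steps: int = 50,
           noise_scale: float = 0.1) -> torch.Tensor:
    """Hypothetical reverse-diffusion loop; `denoise_step` stands in for
    whatever denoiser the model actually uses."""
    x = torch.randn(1, 3, 64, 64)                       # start from pure noise
    cond = perturb_conditioning(cond_embedding, noise_scale)
    for t in reversed(range(steps)):
        x = denoise_step(x, t, cond)                    # one reverse-diffusion step
    return x
```

In this sketch, sampling the same prompt twice with fresh noise on the conditioning vector produces noticeably different compositions, which is the diversity effect the research targets.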

Read more at TechCrunch: This Week in AI: OpenAI moves away from safety